Signiant’s Core Acceleration Technology

Each of Signiant’s four products leverages our core acceleration technology, which improves on standard Internet transmission speeds by up to 200 times, and whose impact grows with longer distances, higher bandwidth and more congested networks. As illustrated in the chart below, Signiant’s performance advantage over TCP-based protocols like FTP widens as latency and bandwidth grow.

“UDP Acceleration”

The kind of technology we’ve developed to speed file transfers is often called UDP (user datagram protocol) acceleration. But the phrase doesn’t quite capture it. “UDP acceleration” is really just industry vernacular based on one small detail of the implementation, and doesn’t convey the depth of innovation involved.

What we’ve actually done is implement both an advanced transmission control protocol on top of UDP as a replacement for TCP, and an advanced file transfer protocol as a replacement for FTP. But, before we get into these improvements, let’s discuss how TCP, UDP and FTP — all original members of the Internet protocol suite — traditionally work.


Standard TCP is what provides a reliable stream of data from one point to another over the Internet. For the large majority of Internet traffic, TCP works really well. But for large files and data sets — especially larger files sent over distance — TCP breaks down.

One fundamental problem with traditional TCP is that it uses a relatively unsophisticated sliding window mechanism, only sending a certain amount of data over the network before it expects that data to be acknowledged as received on the other end. As TCP receives acknowledgements, it advances its window and sends more data. If the data doesn’t get through or an acknowledgement is lost, TCP will time out and retransmit from the last acknowledged point in the data stream.
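The sliding window behaviour described above can be sketched in a few lines of Python. This is a deliberately simplified model that counts whole segments, whereas real TCP tracks bytes; `send` and `wait_for_ack` are hypothetical callbacks standing in for the network:

```python
def sliding_window_send(segments, window_size, send, wait_for_ack):
    """Toy model of TCP's sliding window: keep at most `window_size`
    unacknowledged segments in flight at any time."""
    base = 0          # oldest unacknowledged segment
    next_seq = 0      # next segment to transmit
    while base < len(segments):
        # Fill the window with new segments.
        while next_seq < base + window_size and next_seq < len(segments):
            send(next_seq, segments[next_seq])
            next_seq += 1
        ack = wait_for_ack()      # cumulative ACK, or None on timeout
        if ack is None:
            next_seq = base       # timeout: resend from last ACKed point
        else:
            base = max(base, ack)  # ACK received: advance the window
```

A lost acknowledgement forces the sender back to the last acknowledged point, retransmitting everything after it, which is exactly the inefficiency described above.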

There are a number of problems with this approach, such as retransmitting data that may already have been received, or long stalls in transmission while waiting on acknowledgements. Modern versions of TCP have addressed these challenges in two ways, scalable window size and selective acknowledgements:

Scalable window size allows the amount of data in flight to be greater than the 64KB maximum imposed by the protocol’s original 16-bit window field. A system administrator can configure TCP to use a bigger window size, and most systems today do so by default.

Selective acknowledgement allows the receiving endpoint to report which pieces of the transmission have been received, so that any missing section, even one in the middle of the data set, can be retransmitted rather than the entire data set.
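The difference between cumulative and selective acknowledgement can be illustrated with a small sketch (our own illustration, not any particular TCP stack; both helpers assume at least one segment is missing):

```python
def sack_retransmit(total_segments, received):
    """With selective acknowledgement, only the missing segments
    (the gaps) need to be resent."""
    return [i for i in range(total_segments) if i not in received]

def cumulative_retransmit(total_segments, received):
    """Without SACK, a cumulative ACK only covers the stream up to
    the first gap, so everything after it is resent."""
    first_gap = next(i for i in range(total_segments) if i not in received)
    return list(range(first_gap, total_segments))
```

With six segments and only segment 2 lost, SACK resends one segment where a cumulative-only sender would resend four.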


Signiant uses mechanisms similar to a scalable window and selective acknowledgement, but we’ve implemented them in an improved way. Using a dynamically adaptive window size, Signiant solutions match the window to the bandwidth-delay product of the network in order to keep an optimal amount of data in flight at all times.
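Why window size matters this much is simple arithmetic. The bandwidth-delay product is the amount of data that must be in flight to keep a link fully utilized (illustrative figures below, not Signiant internals):

```python
def optimal_window_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bits per second times round-trip
    time, converted to bytes."""
    return int(bandwidth_bps * rtt_seconds / 8)

# A 1 Gb/s link with an 80 ms round trip needs about 10 MB in
# flight, far beyond TCP's original 64KB window limit.
window = optimal_window_bytes(1_000_000_000, 0.08)
```

If the window is smaller than this product, the sender stalls waiting for acknowledgements and the link sits idle, which is why fixed or undersized windows perform so poorly over distance.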

Signiant also constantly measures effective throughput, network latency and loss, building a history of each. By maintaining a history, we can see how all of these factors change over time, and by analyzing the frequency of those changes we can detect network congestion. This is far more efficient than congestion control algorithms that react to a single point-in-time packet loss, a limitation of even modern TCP.
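One way to see the benefit of a loss history is that it lets a sender distinguish sustained congestion from an isolated drop. A minimal sketch of the idea (our illustration only; Signiant's actual algorithm is not public):

```python
from collections import deque

class CongestionMonitor:
    """Keep a rolling history of per-interval loss rates and flag
    congestion only when loss is sustained, not a one-off blip."""

    def __init__(self, window=10, threshold=0.02, min_lossy=3):
        self.history = deque(maxlen=window)  # recent loss-rate samples
        self.threshold = threshold           # loss rate considered "lossy"
        self.min_lossy = min_lossy           # intervals needed to declare congestion

    def record(self, packets_sent, packets_lost):
        self.history.append(packets_lost / packets_sent)

    def congested(self):
        # Congested only if several recent intervals all show loss.
        lossy = sum(1 for rate in self.history if rate > self.threshold)
        return lossy >= self.min_lossy
```

A sender driven by a monitor like this keeps its rate up through a transient blip, where a loss-triggered algorithm would immediately back off.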


UDP was originally developed to send messages or datagrams over the Internet on a best effort basis, making standard UDP an unreliable mechanism for transferring data.

To make UDP reliable, Signiant added functionality similar to TCP’s, but Signiant’s transmission control protocol is implemented in a far more performant way, using:

Flow control, which makes sure data is transmitted at the optimal rate for the receiver,

Congestion control, which detects when the network is being overloaded and adapts accordingly,

Reliability mechanisms, which make sure that data loss due to congestion or other network factors is compensated for and that the order of the data stream is maintained.
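A bare-bones illustration of the reliability piece: number each datagram, and have the receiver reorder arrivals and report gaps for retransmission. This is a teaching sketch, far simpler than any production protocol:

```python
class DatagramReceiver:
    """Reassemble an in-order stream from datagrams that may arrive
    out of order, using per-datagram sequence numbers."""

    def __init__(self):
        self.expected = 0    # next sequence number needed in order
        self.buffer = {}     # out-of-order datagrams awaiting delivery
        self.delivered = []  # the in-order stream handed to the app

    def receive(self, seq, payload):
        self.buffer[seq] = payload
        # Deliver any contiguous run starting at `expected`.
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return self.expected  # cumulative ACK: next seq we still need

    def missing(self, highest_seen):
        # Gaps the sender should retransmit (a selective-ACK view).
        return [s for s in range(self.expected, highest_seen + 1)
                if s not in self.buffer]
```

Note that an out-of-order arrival is held rather than discarded, so only the genuinely missing datagram has to cross the network again.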


Like TCP, FTP is inefficient in a number of ways, especially when working over high latency networks. Yet many media professionals still rely on this 40+ year-old technology as the foundation protocol for an ad hoc approach to moving files.

FTP is slow when transmitting a large number of small- to medium-sized files because of its high per-file overhead. With FTP, a set of command-and-response interactions is required for each file, and a separate TCP data connection must be established to transfer the contents of each file.

Signiant improves on this by communicating information about files being transferred more efficiently and by multiplexing the transmission of files over a single channel. This dramatically reduces per-file overhead and allows file operations to be performed in parallel.
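Multiplexing can be sketched as framing: instead of one connection and one command exchange per file, every chunk is tagged with a file ID and an offset, and chunks from many files are interleaved on one channel. The framing below is purely illustrative, not Signiant's wire format:

```python
def multiplex(files, chunk_size=4):
    """Interleave chunks of many files onto one channel as
    (file_id, offset, data) frames."""
    frames = []
    offsets = {fid: 0 for fid in files}
    pending = list(files)
    while pending:
        for fid in list(pending):
            data = files[fid][offsets[fid]:offsets[fid] + chunk_size]
            frames.append((fid, offsets[fid], data))
            offsets[fid] += len(data)
            if offsets[fid] >= len(files[fid]):
                pending.remove(fid)  # this file is fully framed
    return frames

def demultiplex(frames):
    """Reassemble per-file contents on the receiving side."""
    out = {}
    for fid, offset, data in frames:
        out.setdefault(fid, {})[offset] = data
    return {fid: "".join(chunks[k] for k in sorted(chunks))
            for fid, chunks in out.items()}
```

Because every frame is self-describing, the per-file setup cost disappears and transfers of many files proceed concurrently over the one channel.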