Signiant acceleration technology improves on standard Internet transmission speeds by up to 200-fold. All of our software leverages our core acceleration and security technologies, and we’ve continued fine-tuning them as we’ve moved into cloud-based software development.
The kind of technology we’ve developed to speed file transfers is often called “UDP acceleration.” But UDP acceleration is really just industry vernacular based on one small detail of the implementation, and doesn’t convey the depth of innovation involved.
What we’ve actually done is implement an advanced replacement for TCP (transmission control protocol) on top of UDP, and an advanced replacement for FTP (file transfer protocol).
Improving TCP Throughput
Fundamentally, TCP (transmission control protocol) is what provides a reliable stream of data from one point to another on the Internet. UDP (user datagram protocol), on the other hand, allows a chunk of data to be sent from one place to another on a best effort basis — but UDP doesn’t guarantee reliability. To make UDP reliable, we implement functionality on top of it that mimics what TCP does, but in a better way, including:
- Flow control, which makes sure data is transmitted at the optimal rate for the receiver.
- Congestion control, which detects when the network is being overloaded and adapts accordingly.
- Reliability mechanisms, which ensure that data lost to congestion or other network factors is retransmitted and that the data stream is delivered in order.
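To make the reliability idea concrete, here’s a toy sketch of an in-order reassembly layer over an unreliable datagram service. The class name and cumulative-ack return value are illustrative assumptions, not Signiant’s actual (proprietary) protocol:

```python
class ReliableChannel:
    """Toy reliability layer over an unreliable datagram service.

    Each payload carries a sequence number so the receiver can reorder
    datagrams, drop duplicates, and report what it still needs.
    (Hypothetical sketch; not Signiant's actual wire protocol.)
    """

    def __init__(self):
        self.next_expected = 0   # next in-order sequence number
        self.buffer = {}         # out-of-order datagrams, keyed by seq
        self.delivered = []      # in-order payloads handed to the app

    def on_datagram(self, seq, payload):
        if seq < self.next_expected:
            return self.next_expected   # duplicate: re-ack, don't redeliver
        self.buffer[seq] = payload
        # Deliver any contiguous run starting at next_expected.
        while self.next_expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return self.next_expected       # cumulative acknowledgement
```

Feeding datagrams in the order 0, 2, 1 still yields an in-order stream: segment 2 is buffered until the gap at 1 is filled, and the returned acknowledgement advances only when the stream is contiguous.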
A Better TCP
One fundamental problem with TCP is that it uses a relatively unsophisticated sliding window mechanism, only sending a certain amount of data over the network before it expects that data to be acknowledged as received on the other end. As TCP receives acknowledgements, it advances its window and sends more data. If the data doesn’t get through or an acknowledgement is lost, TCP will time out and retransmit from the last acknowledged point in the data stream. There are a number of problems with this, such as retransmitting data that may have already been received, or long stalls in data sent while waiting on acknowledgements.
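The cost of that timeout-and-retransmit behavior can be sketched in a few lines. This toy model (function and parameter names are our own, for illustration) counts how much already-received data a go-back-N sender resends after one lost acknowledgement:

```python
def go_back_n_retransmit(window, acked_up_to, received):
    """Classic TCP-style recovery: on timeout, resend every segment
    from the last cumulative ack onward, even segments the receiver
    already has.

    window      -- sequence numbers currently in flight
    acked_up_to -- first sequence number not yet cumulatively acked
    received    -- set of sequence numbers that actually arrived
    Returns (segments resent, resends that were wasted).
    """
    resent = [seq for seq in window if seq >= acked_up_to]
    wasted = [seq for seq in resent if seq in received]
    return resent, wasted
```

With a five-segment window where only segment 1 was lost, the sender resends segments 1 through 4, so three of the four retransmissions are redundant.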
Signiant uses a mechanism similar to a sliding window, but the mechanism incorporates two key improvements over traditional TCP: an adaptive window size and selective acknowledgment.
- Adaptive window size is a mechanism that measures the capacity of the network and the round-trip time. It then uses a window that’s big enough to keep data in flight on the network at all times.
- Selective acknowledgement allows the endpoint to verify which pieces of the transmission have been received so that any section that is missing — even one in the middle of the data set — can be retransmitted rather than the entire data set.
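Both improvements reduce to simple arithmetic. The sketch below (our own illustrative helpers, not Signiant code) sizes the window to the bandwidth-delay product, which is what “big enough to keep data in flight at all times” means, and shows how selective acknowledgement limits retransmission to the gaps:

```python
def adaptive_window_bytes(bandwidth_bps, rtt_seconds):
    """Size the send window to the bandwidth-delay product so the
    network pipe stays full: bytes in flight = capacity * round-trip time."""
    return int(bandwidth_bps / 8 * rtt_seconds)

def to_retransmit(sent, sacked):
    """With selective acknowledgement, resend only the missing pieces,
    not everything after the last cumulative ack."""
    return [seq for seq in sent if seq not in sacked]
```

For example, a 100 Mb/s path with an 80 ms round trip needs about 1 MB in flight, and if segments 0, 2, and 3 were selectively acknowledged, only segment 1 goes back on the wire.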
Signiant constantly measures effective throughput, network latency and loss, and builds a history. By maintaining a history, we can see how all of these factors change over time, and by analyzing the frequency of changes, we can locate network congestion. This allows us to react more effectively than a pure additive-increase/multiplicative-decrease response to point-in-time packet loss (as in TCP).
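A minimal sketch of the history idea, assuming a rolling window of round-trip samples (this is a generic trend detector we wrote for illustration, not Signiant’s actual algorithm): if recent round trips run well above the earlier baseline, queues are likely building, so the sender can back off before heavy loss occurs.

```python
from collections import deque

class CongestionEstimator:
    """Toy history-based congestion detector: compare recent
    round-trip times against an earlier baseline within a
    fixed-size rolling window of samples."""

    def __init__(self, window=10, threshold=1.25):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def add_rtt(self, rtt_seconds):
        self.samples.append(rtt_seconds)

    def congested(self):
        if len(self.samples) < self.samples.maxlen:
            return False                      # not enough history yet
        half = self.samples.maxlen // 2
        older = list(self.samples)[:half]     # baseline period
        recent = list(self.samples)[half:]    # current period
        baseline = sum(older) / len(older)
        return sum(recent) / len(recent) > baseline * self.threshold
```

Steady 50 ms round trips report no congestion, while a sustained climb to 90 ms trips the detector, which a single lost packet (TCP’s only signal) would not distinguish from random loss.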
A Better FTP
Like TCP, FTP is inefficient in a number of ways, especially when working over high-latency networks. Yet many media professionals still rely on this 40+ year-old technology as the foundation protocol for an ad hoc approach to moving files.
How do we improve on FTP?
FTP is slow when transmitting a large number of small- to medium-sized files as a result of high per-file overhead. With FTP, a set of command and response interactions is required for each file, and a separate TCP connection must be established to transfer the contents of each file.
We improve on this by communicating information about files being transferred more efficiently and by multiplexing the transmission of files over a single channel. This dramatically reduces per-file overhead and allows file operations to be performed in parallel.
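The multiplexing idea can be sketched as follows. This toy framing (the function, file-id tags, and chunk size are our own assumptions; the real wire format differs) interleaves chunks from several files onto one channel instead of opening a connection per file:

```python
def multiplex(files, chunk_size=3):
    """Interleave chunks from many files onto a single channel.

    files      -- mapping of file id -> file contents (as strings here)
    chunk_size -- bytes of each file sent per round
    Returns the ordered list of (file_id, chunk) frames placed on the
    channel.  Hypothetical framing for illustration only.
    """
    frames = []
    offsets = {fid: 0 for fid in files}   # read position per file
    pending = list(files)                 # files not yet fully sent
    while pending:
        for fid in list(pending):
            data = files[fid][offsets[fid]:offsets[fid] + chunk_size]
            offsets[fid] += chunk_size
            frames.append((fid, data))
            if offsets[fid] >= len(files[fid]):
                pending.remove(fid)       # file finished, stop scheduling it
    return frames
```

Because every frame carries a file id, small files complete without paying a connection setup each, and no per-file command/response round trip blocks the channel.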