Latency, in the most general sense, is the time interval between a stimulus to a system and the system's response. Latency affects any system that involves information or objects in flight: from sound waves traveling through the air and nerve impulses heading to the brain, to New Horizons' journey to Pluto and data moving over the Internet.
In networking, latency is the time it takes data to travel from one point to another, just as the latency of a flight from New York to Los Angeles is about five hours. The latency of a network transfer between those two cities is orders of magnitude lower, since signals in optical fiber travel at roughly two-thirds the speed of light (about 200,000 km/s). Even so, latency can have a huge impact, given the indirect path data actually takes, routing and switching overhead, and the many round trips standard protocols require to complete even simple operations.
Calculating data transfer time involves more than just the speed of light and total distance travelled; the size of files or data sets also plays a role, as does bandwidth and the data transfer protocol being used.
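As a back-of-the-envelope sketch, transfer time can be modeled as propagation delay (distance divided by signal speed) plus serialization time (data size divided by bandwidth). The distances and speeds below are illustrative assumptions, not measurements, and real transfers add protocol and congestion overhead on top:

```python
def transfer_time(size_bytes, bandwidth_bps, distance_km,
                  signal_speed_kms=200_000):
    """Estimate one-way transfer time as propagation delay plus
    serialization time. signal_speed_kms of ~200,000 km/s approximates
    light in optical fiber."""
    propagation = distance_km / signal_speed_kms      # seconds
    serialization = size_bytes * 8 / bandwidth_bps    # seconds
    return propagation + serialization

# A 1 GB file from New York to Los Angeles (assuming ~4,000 km of fiber)
# over a 100 Mb/s connection:
t = transfer_time(1_000_000_000, 100_000_000, 4_000)
print(f"{t:.1f} s")  # ~80 s: at this size, serialization time dominates
```

Note that for a large file the serialization term dominates, while for a small request (a few kilobytes) the propagation term, and the round trips built on top of it, dominate instead.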
Bandwidth or connection speed can be a misleading number. For example, a 1 Gb/s pipe might seem like it should transfer one gigabit every second, but that holds only if the network is completely clear, like a highway without traffic. Add some traffic and everything slows down; add a lot of traffic and gridlock occurs.
Any activity on the network consumes bandwidth and slows data movement. With traditional protocols like TCP (Transmission Control Protocol), connections over short distances claim a disproportionately large share of bandwidth compared to connections traveling long distances on the same network, because TCP throughput falls as round-trip time rises.
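One widely cited approximation of steady-state TCP throughput (the Mathis et al. model) makes this concrete: throughput is proportional to the segment size divided by the round-trip time and the square root of the loss rate. The RTT and loss values below are illustrative assumptions:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation of steady-state TCP throughput:
    throughput ~ (MSS / RTT) * (C / sqrt(loss_rate)), with C ~ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# Two flows on the same network, with identical 0.01% packet loss:
short = tcp_throughput_bps(1460, 0.010, 1e-4)   # 10 ms RTT (same metro area)
long_ = tcp_throughput_bps(1460, 0.100, 1e-4)   # 100 ms RTT (coast to coast)
print(short / long_)  # the short-RTT flow gets 10x the bandwidth
```

Since throughput scales with 1/RTT, a flow with one-tenth the round-trip time gets ten times the share when both compete on the same link.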
Although bandwidth and latency are independent factors, combining high latency with high bandwidth creates problems that make it difficult to use all of that bandwidth with standard protocols like TCP, which ends up utilizing only a fraction of what is available. So, counter to intuition, adding more bandwidth does not reduce transfer times with TCP.
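The reason is the bandwidth-delay product: to keep a path full, a sender must have a full bandwidth-times-RTT worth of data in flight, and a TCP connection whose window is smaller than that is capped at window/RTT no matter how fast the link is. A sketch, assuming an 80 ms coast-to-coast RTT and TCP's classic 64 KB maximum window (window scaling raises this limit in practice, but the principle stands):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the path."""
    return bandwidth_bps * rtt_s / 8

def window_limited_bps(window_bytes, rtt_s):
    """Throughput cap of a window-limited connection: window / RTT,
    regardless of the link's raw speed."""
    return window_bytes * 8 / rtt_s

rtt = 0.080  # 80 ms round trip (assumed)
print(bdp_bytes(1_000_000_000, rtt))           # 10 MB must be in flight on a 1 Gb/s path
print(window_limited_bps(65_535, rtt) / 1e6)   # ~6.6 Mb/s cap with a 64 KB window
```

At 6.6 Mb/s, upgrading the link from 1 Gb/s to 10 Gb/s changes nothing: the cap depends only on the window and the RTT.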
No matter the transfer protocol being used, the larger the files or data sets the more time they take because more data has to be sent. However not all data transfer protocols are the same in how efficiently they manage large file transfers, especially in high latency, high bandwidth networks.
Traditional methods for transferring data over the Internet, like FTP and HTTP, all rest on TCP (Transmission Control Protocol). And while TCP works fine for short-distance data movement, it is greatly impacted by latency on high-bandwidth networks because of the mechanism it uses for data transfer.
TCP controls a stream of data between two endpoints and will send only a limited amount of unacknowledged data (the window) before pausing to wait for an acknowledgement that the data was received at the other end. This "sliding window" mechanism creates a lot of back and forth, with a full round trip's worth of latency each time. So with larger files, longer distances, and higher bandwidth, latency becomes a significant barrier to faster transfers.
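In the worst case, each window's worth of data costs a round trip, so a window-limited transfer takes roughly one RTT per window. A simplified sketch (it ignores serialization time, slow start, and loss; the window size and RTTs are illustrative assumptions):

```python
import math

def window_limited_transfer_s(size_bytes, window_bytes, rtt_s):
    """Worst-case sliding-window transfer time: one round trip per
    window's worth of data sent."""
    rounds = math.ceil(size_bytes / window_bytes)
    return rounds * rtt_s

# The same 1 GB file with a 64 KB window:
print(window_limited_transfer_s(1_000_000_000, 65_536, 0.001))  # ~15 s at 1 ms RTT
print(window_limited_transfer_s(1_000_000_000, 65_536, 0.080))  # ~20 min at 80 ms RTT
```

The file, window, and bandwidth are identical in both cases; only the round-trip time changed, and the transfer time grew by a factor of 80 with it.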
Signiant was one of the first technology companies to develop a data transfer protocol that maintains speed and fairness independent of latency, distance and loss between endpoints. Signiant’s Emmy award-winning large file acceleration technology is used to move everything from the very large video files of professional filmmakers to the huge scientific data sets of researchers. It is up to 200 times faster than TCP/FTP, largely thanks to the way it deals with latency and loss.
Often called UDP acceleration, Signiant's protocol uses UDP only as a packet transfer mechanism. The Signiant protocol implements its own flow control and congestion control and compensates for data loss. It also implements advanced file transfer mechanisms, all to reduce the impact of latency and capitalize on available bandwidth. The result is the most efficient and fastest large file transfer technology available to date.
For more detail on Signiant’s acceleration technology, visit our acceleration page.