The recent car emissions scandal that surprised so many eco-minded consumers is an extreme example of how “optimizing” for laboratory test conditions can produce results that bear little relation to how products perform in the real world. In that case, the EPA said that engines were fitted with computer software that could sense when the vehicle was being tested by monitoring speed, engine operation, air pressure and even the position of the steering wheel. The net result was emissions up to 40 times higher under actual driving conditions than under test conditions.
Most tests are, by design, intended to isolate the impact of a specific independent variable on a dependent variable. For example, in testing file transfer speed, you could look at the impact of latency (the independent variable) on throughput (the dependent variable). When the number of potential independent variables is large, the settings of the controlled variables can be used to manipulate results. While this isn’t the kind of thing that leads to outright consumer cheating and lawsuits, it can be very misleading.
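To make the latency example concrete, here is a minimal sketch, assuming a single TCP stream with a fixed 64 KiB window (a common historical default, chosen purely for illustration), of how round-trip time alone caps throughput:

```python
# Illustrative sketch: the classic single-stream TCP throughput ceiling,
# window size divided by round-trip time. The 64 KiB window is an
# assumption for the example, not a property of any particular product.
WINDOW_BYTES = 64 * 1024

def max_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on single-stream throughput in megabits per second."""
    return (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1_000_000

for rtt in (1, 10, 100):
    print(f"RTT {rtt:>3} ms -> {max_throughput_mbps(rtt):8.2f} Mbit/s")
# A lab test at 1 ms RTT allows ~524 Mbit/s; the same transfer at
# 100 ms RTT is capped near 5 Mbit/s, a 100x difference.
```

Holding latency near zero in the lab thus flatters the result without telling you anything about a cross-country or intercontinental transfer.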
With accelerated file transfer, a huge number of independent factors impact throughput. While it is feasible to engineer good behavior under one specific scenario, it’s much harder to design behavior that adapts to the practically infinite number of scenarios driven by real-world conditions.
There are independent external factors that gate end-to-end throughput, like the maximum speed at which data can be read from or written to storage at the source and target. This factor also depends on how the data is being read and written. Benchmark tests typically use high-performance solid-state storage to minimize storage bottlenecks. But unless you’re using similar storage, your real-world results will most likely differ.
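The point generalizes to a simple pipeline rule: the slowest stage gates the whole transfer. A one-line sketch, with purely hypothetical throughput figures for a fast source disk, a fast link and a slow target disk:

```python
def end_to_end_mbps(read_mbps: float, network_mbps: float,
                    write_mbps: float) -> float:
    """The slowest stage gates the whole transfer pipeline."""
    return min(read_mbps, network_mbps, write_mbps)

# Hypothetical numbers for illustration only: NVMe source,
# 10 Gbit/s link, spinning-disk target.
print(end_to_end_mbps(read_mbps=24000, network_mbps=10000,
                      write_mbps=1200))
# -> 1200.0: the target disk, not the network, sets the ceiling.
```

Swap the benchmark lab’s solid-state target for a spinning disk and the headline network number becomes irrelevant.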
Optional features and functions can also impact performance. For example, encryption and cryptographic integrity checks can add significant overhead at higher transfer speeds when hardware-based encryption offload isn’t used or available. Benchmarks are sometimes performed with these security mechanisms turned off to maximize throughput. But from a security perspective, deciding case by case when security features are and aren’t needed complicates management and increases the potential for mistakes and vulnerabilities.
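You can get a rough feel for this overhead yourself. The sketch below uses Python’s standard hashlib to time a SHA-256 integrity check over a dummy buffer; the measured rate will vary widely by hardware and says nothing about any specific product, but it puts a ballpark number on what “turned off for the benchmark” can hide:

```python
import hashlib
import time

def sha256_rate_mib_s(size_mib: int = 64) -> float:
    """Hash a dummy in-memory buffer and return the rate in MiB/s."""
    payload = b"\x00" * (size_mib * 1024 * 1024)
    start = time.perf_counter()
    hashlib.sha256(payload).hexdigest()
    elapsed = time.perf_counter() - start
    return size_mib / elapsed

# Rates vary by CPU; if the transfer link is faster than this rate,
# software hashing becomes the bottleneck.
print(f"SHA-256 integrity check: ~{sha256_rate_mib_s():.0f} MiB/s")
```

If the measured hashing rate is below the link speed, an integrity-checked transfer cannot match a benchmark run with checks disabled.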
With a test network in a lab, the impact of other traffic is controlled, and available bandwidth and latency are much more predictable. Also, lab-based network impairment devices that introduce artificial bandwidth constraints, latency and loss often behave differently than real-world networks. In the real world, most Internet Protocol networks utilize shared resources at some point. And depending on where congestion occurs, different cues are available to the sender and recipient. Sophisticated techniques for extracting these cues are necessary to adapt to a wide variety of network conditions and achieve maximum throughput.
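The interplay of latency and loss is often summarized by the Mathis approximation for steady-state TCP throughput. The sketch below, with illustrative parameter values only, shows how even a 0.01% loss rate on a high-latency path caps a single stream regardless of raw link bandwidth:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float,
                           loss_rate: float) -> float:
    """Mathis et al. approximation: rate ~ (MSS / RTT) * (C / sqrt(p))."""
    c = math.sqrt(3 / 2)  # constant for the periodic-loss model
    rate_bps = (mss_bytes * 8 / (rtt_ms / 1000)) * (c / math.sqrt(loss_rate))
    return rate_bps / 1e6

# A 1460-byte MSS, 100 ms RTT and 0.01% loss cap a single stream
# near 14 Mbit/s, no matter how fat the pipe is.
print(f"{mathis_throughput_mbps(1460, 100, 0.0001):.1f} Mbit/s")
```

An impairment box that injects loss in neat, periodic patterns matches this model far better than the bursty, congestion-correlated loss of a shared Internet path, which is one reason lab results and field results diverge.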
In summary, benchmarks are useful tools, but those looking to deceive can also manipulate them. Understanding how a benchmark was performed becomes as important as the result itself.
In the end, there’s no substitute for real-world use. That’s why SaaS solutions provide a unique opportunity to test software in your own real-world scenario with limited risk to the purchaser: there is little to no upfront infrastructure investment, and SaaS uses a subscription pricing model. The burden of proof sits squarely with the vendor, which manages the service and relies on subscription renewals. As the familiar automotive disclaimer goes, with accelerated file transfer software “your mileage may vary.” With SaaS, however, you only pay for performance.