
Producing Live Streaming Coverage of Remote Events: A Cloud-Based Approach

Live streaming of video from events at remote locations, such as sporting events, has become an increasingly important part of today’s Media & Entertainment world. Consumers want, and expect, to see all the action live, in high-quality video, wherever they are and on whatever device or platform they choose. For the motivated fan, watching live video streams delivered over the Internet has become the normal experience.

To meet this demand, broadcasters and other media rights holders have massively increased the amount of video content they produce on site at remote events, and must then decide how to deliver it to the CDNs for live streaming to their audiences. There have typically been two models for the latter: either the video is processed for real-time delivery to the CDN (often at mezzanine quality) on site, or it is brought back in full broadcast quality to a permanent facility, where it is similarly processed and delivered live to the CDN. This “at home” production model has become particularly popular because it reduces the expense of the temporary infrastructure and staffing deployments needed each time a remote event is produced.

Each approach has its pros and cons, but both face one common challenge: how to transfer live content from the remote location to the CDN without unacceptable delay or loss of quality, since the CDN will inevitably pass either on to the consumer. The challenge is primarily one of cost versus quality (in the broadest sense, meaning the entire end-consumer experience when watching the live stream of the event). The cost here is the cost of delivery, which can be considerable when transporting high-quality live video over long distances. Since at a certain point quality becomes non-negotiable, particularly when a lot of money and a company’s reputation are at stake, the default position is to spend the money on transport infrastructure that reliably delivers the desired quality of video to the CDN in a timely fashion.

Cost enters the picture because the kind of QoS content providers are looking for has historically required purpose-engineered networks and expensive, dedicated satellite or fibre connectivity. These connectivity costs scale linearly with the volume and bitrates of the video delivered over them, and both just keep growing as demand for content increases and as video resolutions rise to 4K and beyond. Such costs may be readily absorbed by the big broadcasters covering major events with mass appeal, but they can quickly become a big hit on the budget of a smaller event, or of a smaller company working with far fewer resources.
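To make that scaling concrete, here is a minimal back-of-envelope sketch. The feed count and mezzanine bitrates below are illustrative assumptions, not figures from any particular production:

    # Back-of-envelope contribution bandwidth: the capacity to be paid
    # for scales linearly with feed count and per-feed bitrate.
    # All numbers below are illustrative assumptions.
    feeds = 8  # simultaneous camera/programme feeds
    rates_mbps = {"HD mezzanine": 50, "UHD/4K mezzanine": 200}

    for label, rate in rates_mbps.items():
        total = feeds * rate
        print(f"{label}: {feeds} feeds x {rate} Mbps = {total} Mbps")

    # HD mezzanine: 8 feeds x 50 Mbps = 400 Mbps
    # UHD/4K mezzanine: 8 feeds x 200 Mbps = 1600 Mbps

A modest multi-camera event can thus require dedicated capacity in the hundreds of megabits, and a UHD production well over a gigabit, sustained for the duration of the event.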

An obvious alternative approach would be to take advantage of the economies of scale, the commodity-based infrastructure, and the open, standards-based nature of the public Internet and cloud technologies, and simply migrate the contribution side of the supply chain to them. This is, of course, how the OTT distribution side (the live streaming to the consumer) has been done from the outset, with tens of millions of videos streamed online every day. But distribution benefits from well-developed technologies for optimizing the streaming of lower-bitrate content already cached on the CDN. Real-time delivery of high-quality video streams over the public Internet on the contribution side, with its far less forgiving QoS requirements, is a greater challenge and has remained a stubborn holdout.

The reason is a set of limitations in the TCP layer built into the architecture of all TCP/IP networks, including the Internet. These limitations, which derive from the way TCP works, are generally not a problem when sending relatively small data sets over the Internet. But TCP’s reliability and congestion control mechanisms do not cope well with moving large data sets, such as high-quality live video streams, over long distances, where high latency and packet loss can cause transfers to slow to a crawl or fail entirely. Signiant, and others, have developed technologies to mitigate this problem for the asynchronous transfer of file-based video (or any other data type), whether to CDNs for VOD streaming, or for a multitude of other video production, contribution and distribution use cases. By eliminating the effect of packet loss and taking latency (primarily a function of distance) out of the equation, we have developed file transfer solutions that fully utilize the available bandwidth and achieve the speed and reliability required for moving large files efficiently across long distances over the Internet (or any TCP/IP network). Such solutions are in use by most major Media & Entertainment companies, and many smaller ones, all across the world.
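To see why distance and loss are so punishing, consider the widely cited Mathis et al. approximation, which bounds the steady-state throughput of a single standard TCP flow by segment size, round-trip time and packet loss rate. The sketch below uses assumed, illustrative path parameters rather than measurements:

    # Mathis et al. bound on a single TCP flow's steady-state throughput:
    #   throughput <= (MSS / RTT) * (C / sqrt(p)),  with C ~= 1
    # MSS = maximum segment size, RTT = round-trip time, p = loss rate.
    # The path parameters below are illustrative assumptions.
    from math import sqrt

    def tcp_throughput_mbps(mss_bytes=1460, rtt_s=0.150, loss_rate=0.001):
        bits_per_rtt = mss_bytes * 8 / rtt_s
        return bits_per_rtt / sqrt(loss_rate) / 1e6

    # Transcontinental path: 150 ms RTT, 0.1% packet loss
    print(f"~{tcp_throughput_mbps():.1f} Mbps")  # prints ~2.5 Mbps

On such a path, a single TCP flow tops out at roughly 2.5 Mbps no matter how much bandwidth has been provisioned, far short of even one HD mezzanine feed. This is the ceiling that acceleration technologies are designed to remove.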

More recently we have migrated this technology to the cloud – a movement Signiant pioneered – delivering the same accelerated file transfer capabilities as cloud-native SaaS solutions capable of moving huge data sets into and out of cloud storage quickly, reliably and securely without the need for expensive, dedicated connectivity.

At this year’s NAB, and at the most recent HPA Tech Retreats in LA and the UK, we demonstrated our next-generation transport architecture, which brings our acceleration technology to standard HTTP(S) transfers. Designed for a cloud-centric, standards-focused world, this recently patented scale-out architecture delivers multi-Gbps throughput to cloud storage over any distance. Available as a deployment option for Flight, our cloud-native SaaS utility for fast and secure transfer of large data sets into or out of the cloud, this standards-based approach allows customers to seamlessly optimize delivery of either file-based content or live video feeds over the same link, ensuring speed, reliability and security for all media transfers, as well as compatibility with emerging IP production environments.
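Signiant has not published the implementation details of this architecture, but the general scale-out idea can be sketched: split the payload into chunks and push them over many concurrent HTTP(S) connections, so that aggregate throughput is not capped by any single TCP flow’s loss-and-latency bound. The endpoint URL, chunk size and part-naming scheme below are hypothetical illustrations, not Flight’s actual protocol or API:

    # A minimal, hypothetical sketch of parallel chunked HTTP(S) upload.
    # UPLOAD_URL and CHUNK_BYTES are assumptions for illustration only.
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
    import requests

    CHUNK_BYTES = 8 * 1024 * 1024  # 8 MiB per part (assumption)
    UPLOAD_URL = "https://storage.example.com/upload"  # hypothetical endpoint

    def upload_chunk(index, payload):
        # Each part is PUT independently over its own connection;
        # the receiving side reassembles parts by index.
        resp = requests.put(f"{UPLOAD_URL}/part-{index}", data=payload, timeout=30)
        resp.raise_for_status()

    def parallel_upload(path, workers=16):
        with open(path, "rb") as f, ThreadPoolExecutor(max_workers=workers) as pool:
            in_flight, index = set(), 0
            while payload := f.read(CHUNK_BYTES):
                in_flight.add(pool.submit(upload_chunk, index, payload))
                index += 1
                # Bound in-flight parts so a huge file never sits fully in memory.
                if len(in_flight) >= 2 * workers:
                    done, in_flight = wait(in_flight, return_when=FIRST_COMPLETED)
                    for fut in done:
                        fut.result()  # surface any upload error
            for fut in in_flight:
                fut.result()

A production system would add retry, integrity checking and reassembly logic on top; the point of the sketch is simply that aggregate throughput scales with the number of parallel connections rather than being capped by one TCP flow.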

While supporting the contribution side of remote-event live streaming is only one use case for this new technology, we believe it promises significant benefits here. First, it can provide a far more cost-effective, agile and easily deployed alternative to dropping in a dedicated MPLS circuit for transporting contribution video over long distances. Second, by ingesting live video directly into the cloud, it opens up new possibilities for leveraging the ever-growing array of media-centric services, from live production switching to captioning and transcoding, that are now deployed in the cloud.

As more and more media businesses look to take advantage of the cloud, and standards-based IP technologies, to “do more with less”, we at Signiant believe that these are the kinds of solutions that broadcasters, and other suppliers of the video content consumers clearly want to watch, will be looking for.

A version of this article, by Ian Hamilton, Signiant CTO, originally appeared in TVB Europe, August 2017.
