Evolving from multi-cloud to multi-screen video delivery

Yoav Schreiber, Current Analysis

The question "where were you" when a significant event occurred (the 1969 moon landing; the fall of the Berlin Wall in 1989; the 2009 inauguration of U.S. President Barack Obama) once drew a common answer: "watching it on TV in the living room." Ask the same question of 2010 FIFA World Cup viewers and the answers are likely to range from watching on traditional TVs at home or in the pub, to watching on Internet-connected PCs at home or the office, to watching on the go on mobile devices. What a difference a year makes.

Okay, the evolution to meet consumer expectations for watching TV everywhere has been under way for longer than a year. But the drivers behind the convergence of video delivery to multiple screens are now aligning. Broadband speeds are increasing globally for both fixed and, especially, wireless access. Video-capable consumer devices are rapidly proliferating. Improved streaming technology enables video to be delivered optimally across varied bandwidth environments. And more premium and long-form content is being released for online and on-demand consumption.

Over the past few months, operators across multiple regions (North America, Europe, Asia) and networks (cable, telco, wireless) have made progress expanding their video services to additional screens. Swisscom, StarHub (Singapore), and Chunghwa (Taiwan) have announced mobile video services, while Portugal Telecom and Canadian cable operators Rogers, Shaw, and Videotron have announced online video services, to name just a few.

Despite the market's nascence, common operator requirements are already identifiable. Most importantly, operators are seeking to deliver seamless multi-screen experiences, preserving the picture quality and user experience that their subscribers already enjoy on traditional TV. Operators are also looking for solutions that enable them to launch converged video services quickly and support multiple use cases, such as live TV, start-over TV, catch-up TV, nPVR, and VOD services.

From a technological perspective, video headend processing solutions also require the flexibility to support the various protocols, devices, and networks involved in multi-screen video delivery. For instance, there are competing frameworks for adaptive streaming, including Adobe Flash Dynamic Streaming, Apple HTTP Live Streaming, and Microsoft Silverlight Smooth Streaming. Meanwhile, rendering content on different devices (e.g., legacy phones, smartphones, HDTVs) requires additional processing overhead to match formats (such as SD and HD) to each device's screen resolution. Finally, video encoding solutions need to accommodate the bandwidth constraints and topologies of various networks, including 3G, LTE, WiFi, WiMAX, and wired Internet.
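To illustrate how one of these frameworks handles varied bandwidth environments, a minimal Apple HTTP Live Streaming master playlist advertises the same content at several bitrates and resolutions, letting the client switch renditions as network conditions change. The bitrates, resolutions, and paths below are illustrative assumptions, not taken from any operator deployment:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=320x240
mobile/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=640x360
sd/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3500000,RESOLUTION=1280x720
hd/index.m3u8
```

Each variant playlist referenced here would itself point to a series of short media segments; the headend must encode and segment every rendition it advertises.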

Combining these technological requirements at deployment scale multiplies the complexity of multi-screen video headend processing. Consider just one use case--catch-up TV delivered to several device types--and one quickly confronts the complexity of processing and managing extensive content libraries in the variety of formats, codecs, and screen resolutions required to support delivery across multiple networks and devices.
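To make the combinatorics concrete, a short Python sketch (all counts here are assumptions for illustration, not operator data) enumerates the renditions implied by just three streaming frameworks, three resolutions, and three access networks:

```python
from itertools import product

# Assumed inputs for a single catch-up TV service (illustrative only).
frameworks = ["Flash Dynamic Streaming", "HTTP Live Streaming", "Smooth Streaming"]
resolutions = ["320x240", "640x360", "1280x720"]
networks = ["3G", "WiFi", "wired"]

# Each framework/resolution/network combination is a distinct rendition
# the headend must transcode, store, or generate on the fly.
renditions = list(product(frameworks, resolutions, networks))
print(len(renditions))  # 27 renditions for a single piece of content

# Add an assumed 4-step adaptive bitrate ladder per rendition:
ladder_steps = 4
print(len(renditions) * ladder_steps)  # 108 encoded streams per asset
```

Multiply that by an entire catch-up library and the case for real-time transcoding over pre-generating and storing every variant becomes clear.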

The technical evolution in video headend processing addresses this complexity by limiting the replication and storage of content in multiple formats, emphasizing real-time transcoding, and automating content workflows across ingest, transcode, and delivery. It is also driving a potential shift in headend architecture, presenting an opportunity to consolidate video processing resources in centralized headends. As with any technology evolution, the pendulum swings between centralized and distributed models: emerging multi-functional chassis with centralized video processing capabilities now compete with established approaches that optimize dedicated video processing functionality in distributed resources.

The outcome of this debate--as in so many others--will likely come down to total cost of ownership (TCO): the complexity and cost of managing and operating dedicated, distributed resources must be weighed against those of a more centralized architecture. As is so often the case, the answer will be "it depends"--on the specific operator's network and on the particular use case. For instance, on-demand use cases might differ from live content use cases, depending on the specific transcoding requirements for the content and whether it will be cached centrally or closer to the edge to mitigate bandwidth and latency obstacles.

As operators look to extend their existing video delivery environments to additional screens, most of the solutions implemented today consist of separate video "clouds" delivering separate content assets over separate networks and leveraging separate video infrastructure components. Ultimately, the promise of seamlessly delivering converged video services to multiple screens will depend on operator adoption of increasingly converged video headend processing solutions. Yet it remains to be seen whether the ideal architecture will be distributed, centralized, or--as is often the case--hybrid.

Yoav Schreiber is Senior Analyst for Digital Media Infrastructure at Current Analysis, and a FierceCable contributor.

Related:
Reducing the cost of multi-screen video delivery