Video

Can Open Standards Finally Fix Streaming’s Biggest Problem?

Open standards are reshaping how video services evolve, creating an environment where innovation and collaboration can thrive across platforms. Unlike closed ecosystems, open standardization delivers interoperability, cost efficiency, and scalability, key benefits in a global media streaming market that demands flexibility and reach.

The arrival of VVC/H.266 marks a major leap in video compression, offering sharper quality at lower bitrates and significantly improving the end-user experience. Paired with neural network-based post filtering, an AI-driven enhancement that adapts across codecs, these advancements open the door to more sustainable, efficient, and innovative streaming solutions. But adoption will require industry-wide collaboration, balancing opportunity with the challenges of integration and standard-setting.

Watch this in-depth interview with Ville-Veikko Mattila, Nokia’s Head of Multimedia Technologies.

Learn more.


Daniel Frankel:

Hello, I'm Daniel Frankel, regular contributor to StreamTV Insider and the Founder and Editor in Chief of Next TMT, which covers technology, media and telecom. Really pleased today to be talking to Ville-Veikko Mattila, the Head of Multimedia at Nokia. You've been there for quite a while, and you've seen a massive evolution of video in your time there. Nokia has been on the ground floor of developing video codecs, the compression standards that enable us to pass very high-density files around the internet. We wouldn't be able to stream video without them. So today we're going to talk about the latest iteration of video technology, VVC, Versatile Video Coding, Nokia's role in it, how the company sees it benefiting the video ecosystem, and how adoption will happen. So how are you doing, Ville-Veikko?

Ville-Veikko Mattila:

I'm doing fine, thank you. I hope you're also fine and thank you for having me on StreamTV, it's a pleasure to be here, and also to have the opportunity to talk about the exciting developments in video standardization.

Daniel Frankel:

So this has been a long road. VVC is also called H.266. You were part of the team that developed H.265, its predecessor, and H.264 before that. Just briefly, tell us what these technologies are and how vital they are to the video ecosystem. What do they do for us?

Ville-Veikko Mattila:

At Nokia, we have a long, 30 or even more years, background in multimedia standardization, and video codec standardization is one key domain for us. And you're right that we started from AVC, so Advanced Video Coding; we standardized that in 2003, and then later we moved to High Efficiency Video Coding, HEVC, so H.265, and that we finalized in 2013. So there's quite often this kind of 8 to 10 years required to develop a new codec, because we also have very strict requirements for the new technology when we standardize it. And then the latest standard comes from 2020, that is VVC, or Versatile Video Coding, H.266, and that is now the latest codec which is coming to the market. So a really long background, a lot of technology evolution between these codecs, and they get more and more advanced as we move to newer standards.

Daniel Frankel:

My understanding is H.265, High Efficiency Video Coding, was very well situated for the evolution and adoption of 4K. And now that we have that as kind of a mainstream commodity, the next focus is on 4K being even more ubiquitous and 8K distribution starting to take hold. And for that you need even higher compression capability. VVC is being touted as the next major leap forward in video compression. Maybe you could tell us what the benefits are compared to previous standards and how it will contribute to a better user experience.

Ville-Veikko Mattila:

Versatile Video Coding, so VVC, also known as H.266 from the ITU-T side, truly represents a significant advancement in video compression technology compared to its predecessors. Compared to HEVC, it offers approximately 50% better compression efficiency, meaning that it can deliver the same visual quality at half the bitrate. And if we think that about 80% of all traffic in IP networks is video, such compression efficiency can have a huge global impact. And this leap in efficiency then translates into several tangible benefits, benefits for both service providers but also for end users. Enhanced streaming quality is of course one. VVC enables smoother playback of high-resolution content, content like 4K or even up to 8K, but also content with HDR, so high dynamic range, providing deeper colors, or even virtual reality and 360-degree video, and it can do this smooth playback even on constrained networks.
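As a rough, back-of-the-envelope sketch of what the roughly 50% efficiency claim means in practice, the Python snippet below compares per-viewer delivery volumes; the 16 Mbps 4K HEVC bitrate and the viewing hours are illustrative assumptions, not figures from the interview.

```python
# Illustrative sketch: what halving the bitrate at equal quality means
# for delivered data. The specific bitrates below are assumptions.

def monthly_delivery_gb(bitrate_mbps: float, hours_watched: float) -> float:
    """Data delivered to one viewer at a constant bitrate, in gigabytes."""
    seconds = hours_watched * 3600
    bits = bitrate_mbps * 1_000_000 * seconds
    return bits / 8 / 1_000_000_000  # bits -> bytes -> GB

HEVC_4K_MBPS = 16.0               # assumed top rung of an HEVC 4K ladder
VVC_4K_MBPS = HEVC_4K_MBPS / 2    # same quality at roughly half the bitrate

hevc_gb = monthly_delivery_gb(HEVC_4K_MBPS, hours_watched=40)
vvc_gb = monthly_delivery_gb(VVC_4K_MBPS, hours_watched=40)
print(f"HEVC: {hevc_gb:.0f} GB, VVC: {vvc_gb:.0f} GB per viewer per month")
# -> HEVC: 288 GB, VVC: 144 GB per viewer per month
```

At streaming scale, that per-viewer halving compounds directly into the CDN, storage, and energy savings discussed later in the interview.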

So nowadays so many of us consume content on our mobiles, so overall the new codec reduces buffering and improves the overall viewing experience. It also brings broader device compatibility. It is very true to its name: VVC is versatile, so it supports a wide range of applications, from smartphones and tablets to smart TVs and VR headsets, and in doing so also ensures consistent quality across all these platforms. And if you think about content providers, of course VVC reduces CDN, content delivery, and storage expenses, which is important, while for users it means less data consumption, especially in mobile and bandwidth-limited environments. And perhaps the last thing is that VVC is also future-proof. It supports advanced features, features like adaptive streaming, so your streaming experience can adapt to your network conditions. It also has tools for screen content coding.

So we can do very efficient coding of, for example, synthetic content, content like games. Sharing and watching multiplayer games is very popular today. Multiview video is also one very exciting application feature, for example stereoscopic imaging, which creates a much more immersive viewing experience. And then also low-latency video, which is important, for example, in gaming, in cloud gaming, where low latency is a must. So considering all these advanced features, VVC is designed to meet the demands of the new emerging use cases, whether it is immersive media or cloud gaming.
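The adaptive-streaming feature mentioned above can be sketched in a few lines: a player measures throughput and picks the highest rendition that fits. The encoding ladder below is a hypothetical example, not one from the interview.

```python
# Minimal sketch of adaptive bitrate selection, assuming a hypothetical
# VVC encoding ladder of (height, Mbps) rungs.

LADDER = [
    (540, 1.5), (720, 3.0), (1080, 5.0), (2160, 8.0),
]

def pick_rendition(throughput_mbps: float, safety: float = 0.8):
    """Choose the best rendition whose bitrate fits within a safety
    margin of the measured network throughput; fall back to the lowest."""
    budget = throughput_mbps * safety
    fitting = [rung for rung in LADDER if rung[1] <= budget]
    return fitting[-1] if fitting else LADDER[0]

print(pick_rendition(12.0))  # ample bandwidth -> top 4K rung
print(pick_rendition(2.5))   # constrained mobile link -> lowest rung
```

Real players (DASH, HLS) add buffer-level and stability logic on top, but the core decision is this budget comparison, and VVC's halved bitrates effectively shift every rung of the ladder down.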

Daniel Frankel:

What are the biggest hurdles and opportunities driving the industry-wide adoption of the codec at this point?

Ville-Veikko Mattila:

So if you think about opportunities, firstly we have this bandwidth efficiency at scale. VVC's ability to halve bitrate requirements without sacrificing quality is a true game changer for streaming services which aim to deliver, let's say, 4K or even 8K content, and to do that efficiently. Also, organizations like DVB here in Europe and ATSC in the US, these are broadcasting standards bodies which have embraced VVC and are rolling it into use in broadcasting. Another great opportunity is that there exist great open source software tools for VVC, one very nice example from Fraunhofer, so software is available that you can take into use, also commercially. And then perhaps the last opportunity here is the cost benefits and environmental benefits. Reduced data transmission not only lowers operational costs but also contributes to sustainability goals by decreasing energy consumption.

So these are all very great opportunities, but there are also challenges, and hardware readiness today is one key challenge. While many [inaudible 00:09:13] and TVs are technically capable of supporting VVC, firmware and hardware support still remain inconsistent. And of course mobile hardware integration is [inaudible 00:09:27] lagging still today. But then it's also good to know that this adoption lag is normal for new fundamental technologies like VVC. Industry experts know that codec transitions typically take up to a decade, so many providers are still transitioning to HEVC, the previous standard, and the business case for moving beyond it is still evolving.

Daniel Frankel:

Let's talk about standardization a little bit. In an earlier conversation we talked about the differences between open standardization and closed ecosystems. What are the benefits of open standardization, and why is it important in a video service?

Ville-Veikko Mattila:

Open standardization refers to the collaborative development of technical specifications which are then publicly available and can be implemented by anyone. And in the context of video services, and even more broadly, it offers several key benefits, interoperability of course being a key one. Open standards basically ensure that products and services which come from different vendors work seamlessly together globally. And this is crucial in video services, where content must be delivered across diverse devices, platforms, and networks. So in a way, open standards enable technologies to scale globally, and content providers can reach global audiences without needing to tailor solutions for each ecosystem separately.

And this naturally reduces complexity and cost in a significant way. But then also this kind of innovation acceleration is an important aspect to notice. By providing a common foundation, open standards allow companies really to focus on the key differentiation and the key innovations rather than always needing to reinvent the wheel. And this fosters a much healthier competitive environment overall. Moreover, cost efficiency, that is an important aspect. Open standards reduce licensing fees, and you can also avoid vendor lock-in, making it easier, especially for smaller players, to enter the market and to compete. So in a way I would say that this democratizes access to technology and encourages diversity in offerings.

Daniel Frankel:

So innovation seems like the key here. In a world of competing standards, is open standardization a more powerful driver of innovation, in your view?

Ville-Veikko Mattila:

It is a very powerful driver of innovation, especially for global media streaming in a fragmented landscape of competing standards. I think open standardization stands out as a very powerful enabler of innovation, and for several reasons, one being collaborative development. Open standards are shaped by diverse stakeholders: we have academia, industry leaders, startups, and regulators all together, bringing a wide range of expertise and perspectives. And this leads to more robust and future-proof solutions, as so many views have been taken into account in the development of the standards. And as open standards are accessible to all, they also tend to gain traction much faster. And of course this can create a very vibrant ecosystem, an ecosystem of tools, services, and applications which often build on the standard, all of this then accelerating innovation. In contrast, if you think about closed ecosystems, they often tend to lead to fragmentation, where innovations are locked within proprietary platforms.

And what we want to do with open standardization is to break down these silos and allow ideas and technologies to flow freely across the domain. I think that is very important, and it's important because media streaming is inherently global. Open standards ensure that content can be delivered and consumed across borders without any technical barriers. And this can of course foster cultural exchange and market expansion overall. Very important things indeed. And one more thing that's good to know is the strategic shifts of individual companies which may happen: a company may support a technology and later drop it.

And I would say that open standards are much less vulnerable to these kinds of shifts, because they provide a very stable foundation that the industry can rely on, even as technologies evolve. In a way, as they also level the playing field, they encourage competition, competition based on quality, performance, and user experience rather than exclusivity or control. And this is a much better outcome for consumers and also means faster technological progress overall. In a sense, open standardization, I would say, transforms competition from a race to dominate into a race to innovate. And I think this is very important with open standardization.

Daniel Frankel:

Well, especially now it seems like the acceleration of new technologies has kind of gone to 11. And I was reading my notes, there are enhancements related to AI, neural network-based post-filtering being one of them. Let's talk about that a little bit, and its relationship to VVC, and what you're doing with it.

Ville-Veikko Mattila:

I'm happy you picked this up. It's a very recent, very exciting new topic, and this neural network-based post-filter is something we pretty recently standardized, something which is introduced into the video coding ecosystem through the Versatile Supplemental Enhancement Information standard, VSEI. And this is actually the first time, really the first time, AI has been formally integrated into video coding standards, standards like VVC. And if you think about how it works, this neural network-based post-filter operates as a kind of post-decoding enhancement layer or step. After a video is decoded using a standard codec like VVC, a neural network is applied to the decoded frames to, for example, enhance the visual quality of the video by reducing compression artifacts. And in our evaluations, this corresponds to roughly an 8% compression gain, which is a very nice result indeed.

But you can also apply the filter to, for example, upsample the resolution of your content, moving up from high definition to ultra-high definition, so 4K content. You can also increase the frame rate, coming up from 30 to 60 frames per second, for example, for much smoother playback. You can also expand the bit depth for richer colors, having deeper blue, red, and green colors. So there are many nice benefits you can achieve with this kind of post-filter. And another nice aspect of this standard and this technology is that it is adaptable across codecs. In a way, it is a codec-agnostic design, because since the filter is applied after decoding, it doesn't require changes to the codec itself. So you can keep your current existing system, your current codecs, and apply this as an additional technology to enhance the quality.
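The codec-agnostic idea above can be sketched as follows; the decoder stub and the "neural network" (here just a fixed smoothing kernel on a 1-D row of luma samples) are stand-ins for illustration, not the actual standardized filter.

```python
# Toy sketch of a post-decoding enhancement step. Because the filter
# runs on decoded frames, the same code serves any codec unchanged.

def decode(bitstream, codec: str):
    """Stand-in for any standard decoder (VVC, HEVC, ...). Returns
    frames as lists of luma samples; the filter never sees the codec."""
    return [[float(s) for s in frame] for frame in bitstream]

def post_filter(frame, weights=(0.25, 0.5, 0.25)):
    """Stand-in enhancement applied AFTER decoding: a small smoothing
    kernel that softens blocky, artifact-like jumps between samples."""
    padded = [frame[0]] + frame + [frame[-1]]  # replicate edge samples
    return [sum(w * padded[i + k] for k, w in enumerate(weights))
            for i in range(len(frame))]

bitstream = [[10, 200, 10, 200]]   # one frame with harsh transitions
for codec in ("vvc", "hevc"):      # identical filter, either codec
    frames = decode(bitstream, codec)
    enhanced = [post_filter(f) for f in frames]
    print(codec, enhanced[0])
```

The real standardized filter replaces the fixed kernel with a signaled neural network, but the architecture is the same: decode first, enhance second, codec untouched.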

And this makes it compatible with multiple standards like VVC, HEVC, and so on. And a very unique aspect of this technology is that you can also do content-wise fine-tuning on the go. The neural network can be fine-tuned per content segment at the sender side, and then the updated weights, the parameters of the neural network, can be transmitted to the receiver side to be used for the filter once the video has been decoded. And we've also been working on a quite unique new standard called neural network compression at MPEG, which can be used to compress these parameters, these neural network weights, so that we have a very efficient way to transmit these neural networks over communication networks. And this fine-tuning really ensures optimal performance across diverse content types. And then finally, backward compatibility should also be noticed here: of course there are always devices that may not support such a neural network post-filter, and that is okay, because you can still decode the base video stream without the filter. So in a way, we ensure graceful degradation.
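The per-segment weight-update idea can be sketched like this; the segment names are hypothetical, and zlib stands in for the actual MPEG neural network compression format, purely to show the sender-compresses, receiver-recovers round trip.

```python
# Hypothetical sketch of shipping fine-tuned filter parameters per
# content segment. zlib here is a stand-in for MPEG's neural network
# compression; the segment names and weights are invented examples.
import struct
import zlib

def pack_weights(weights):
    """Serialize filter parameters as float32 and compress them for
    transmission alongside the video bitstream."""
    raw = struct.pack(f"{len(weights)}f", *weights)
    return zlib.compress(raw)

def unpack_weights(blob, n):
    """Receiver side: recover the n parameters for the post-filter."""
    return list(struct.unpack(f"{n}f", zlib.decompress(blob)))

# Sender: weights fine-tuned to suit each segment's content.
segment_weights = {"talking_head": [0.25, 0.5, 0.25],
                   "sports": [0.125, 0.75, 0.125]}
wire = {name: pack_weights(w) for name, w in segment_weights.items()}

# Receiver: unpack per segment and hand the weights to the filter.
for name, blob in wire.items():
    print(name, unpack_weights(blob, 3))
```

A device that cannot run the filter simply ignores these parameter messages and plays the base decoded video, which is the graceful degradation described above.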

Daniel Frankel:

So where are you in terms of adoption of these new technologies? And you've seen cycles across a couple of different epochs. How long does it take? Is it a decade proposition, or less than that, before it's in every smart television, smartphone, every video device?

Ville-Veikko Mattila:

If you think about video codec standards, it typically takes about a decade, or let's say seven, eight years, to really adopt the new technology. And these are truly foundational technologies and standards; this has been our experience from the past. And therefore we always develop a new standard every 8 to 10 years to provide new technology for new use cases. For example, with AVC, the driving new use case was high-definition video at that time. And then, for example, with HEVC, we moved to 4K content, so ultra-high definition. And now the latest codec, it is versatile as the name says, so it can provide very efficient compression for, for example, synthetic content for game playing. So it is really driving new use cases, so each codec generation can target new emerging use cases.

Daniel Frankel:

So are you already working on H.267 at this point? Or what's the... You're done with this, you have to support it, but where are you with the next generation?

Ville-Veikko Mattila:

So we are exploring, again, a new codec standard, and exactly, it could be named H.267, that being the name coming from the ITU-T side. And again, we have new use cases in mind. The multimedia field is continuously developing, and companies are really racing to introduce new multimedia experiences and services. And this is also why we need to develop new technologies to support these emerging use cases, whether it's, for example, virtual reality video or, for example, cloud gaming, where low-latency video is crucial.

Daniel Frankel:

Well, it's a lot of very complicated but fascinating stuff. Again, you've been at the forefront of it, and it's been a pleasure to talk about it a little bit with you today and learn about it. So thank you for sitting down with me, Ville-Veikko, and I look forward to hearing more about VVC in the future.

Ville-Veikko Mattila:

Thanks again for the opportunity to join you today. It's been a great conversation and I really appreciate the chance to share our perspective on the future of video standards and their development. Thank you.

The editorial staff had no role in this post's creation.