Deepdub is lowering the barrier of entry for localized content by using AI to reduce costs and time needed to deliver authentic, high-quality language dubbing so that programs can travel to new audiences and markets.
Having launched four years ago, Deepdub marked significant growth in Q1, more than doubling its titles (a 120% year-over-year increase) since Q1 2023. Without giving a specific figure, Deepdub CRO Oz Krakowski said the company has dubbed thousands of hours in total, including “higher hundreds of titles” in Q1 alone.
It’s at a point of breaking “a range of hours per month…that scale wise is pretty dramatic,” he told StreamTV Insider. According to Krakowski, major traditional dubbing studios can dub hundreds of hours a month or more into multiple languages; capacity-wise, he likened Deepdub to a mid-size traditional dubbing studio. By the end of 2024 or the beginning of 2025, he expects its capacity to be comparable to that of a major dubbing studio.
In announcing the growth, the company cited soaring industry demand alongside expansion into areas like free ad-supported streaming TV (FAST) channels, new languages and reality TV content, building on its dubbing experience with feature films and premium content.
“Our Q1 increases in content demonstrate fast-rising demand throughout the global media and entertainment industry. In their drive to create efficiency, innovation and audience growth, producers are now turning to advanced high-end, quality AI dubbing across all content genres,” said Deepdub CEO Ofir Krakowski in a statement. “Concurrently, through our recent strategic partnerships with major tech and media service providers, we have significantly enhanced our AI dubbing and voice-over solutions worldwide.”
Deepdub, which supports 130 languages and dialects, already counts work with Hulu, FilmRise, and others. It’s also a localization partner for Amazon Web Services (AWS) and showcased a demo this April at the tech giant’s booth during NAB in Las Vegas.
What’s in a good dub?
When it comes to what makes a quality dub, Oz Krakowski acknowledged that it’s a very subjective question, one that depends partly on the type of content being watched and the expectations of the viewer.
But for dubbing in general, Deepdub’s view is that it’s about how much a dub “reflects the original experience into the target language” and how immersive it is. The company’s expertise lies partly in its ability to deliver and adjust dubs that consistently reflect the regional nuances of a target language, such as idioms and accents, while preserving the authenticity of the original storytelling.
And to still do it quickly, effectively, and at scale with the help of AI.
According to Krakowski, the same parameters still apply to AI dubbing, but the question is about retaining authenticity. Technology has advanced, he said, to the point where consumers can’t tell the difference between an AI voice and a real voice in terms of sounding natural. Where things get dicey is when translations are incorrect or when the emotional tone of a dubbed voice doesn’t match the action or storyline on screen, disrupting the immersive nature of the content and impacting viewers’ ability to enjoy what they’re watching. It’s in these nuances and emotive aspects that Deepdub counts its skills, alongside support for a broad range of languages.
Deepdub originally started with voice-to-voice dubs; about a year and a half ago, it launched its eTTS (emotion-based text-to-speech) technology, which creates human-sounding voices from text at large scale, with support for a range of 26 emotions. The technology is also used for unscripted voice-overs (voice-overs differ from full dubs; the latter are more involved, come in a variety of levels and typically include features like lip-syncing).
The two main motivators for AI-powered dubs, according to Krakowski, are cost and time reductions – where clients are asking “can we achieve something that is much faster and cheaper?”
That was seen in work with FilmRise on its Forensic Files true-crime series IP. FilmRise tapped Deepdub to dub 100 episodes from English into Italian in less than five weeks. A case study of the project showed the company delivered a 75% reduction in turnaround time and a 72% cost reduction.
Its proprietary dubbing technology and product set using AI is all about “enabling a localized version that wouldn’t be possible otherwise,” Krakowski said.
Using artificial intelligence means it can deliver dubs at scale, in multiple languages, on tight timelines (he cited an unnamed client that required a three-week turnaround) that would otherwise not be doable or would require “tens or even hundreds of people to do the same level of content, or it will just take a tremendous amount of time.”
Rise of FAST
Expansion into new content genres and languages is helping drive Deepdub’s growth, with the revenue chief attributing the majority of that growth to FAST.
In 2022 Deepdub’s genre breakdown was largely focused on scripted drama, which accounted for 94% of its dubbing portfolio. By Q1 2024 the mix had broadened, with documentaries growing to account for 34%, telenovelas increasing to 8% and eLearning representing 6.3%. It also marked gains in genres such as animation (growing to 6.5% of the portfolio) and game shows and news (3.5%). And reality TV gained traction in April and May 2024, growing to nearly 12% of the portfolio over those two months.
Krakowski noted that content owners could have 20 seasons of quality programming that has sat in their library for over a decade but could be revived and monetized via FAST once language barriers to new and local markets are removed.
And the barrier to entry is low in the FAST space, he said, as content owners don’t need to make massive investments in infrastructure. FAST channels also represent a way to test the waters with new languages and markets.
In the U.S., he said, Deepdub has had customers with thousands of hours of content and existing FAST channels, but in order to grow, those content owners need to extend beyond the original English-language versions.
While it has dubbed English content into other languages, the company is also bringing foreign-language content to the U.S. market, which Krakowski noted is by far the largest for FAST. (Deepdub’s initial growth came from dubbing other languages into English, though it now translates from and into a variety of languages.)
“We see broadcasters and content owners from outside the US that are interested to take a piece of the pie in the U.S. market, in U.S. FAST channels,” he noted.
Dubbing from and into various languages is “also one of the things where AI lowers the barrier to entry…I can take content and transfer it or move it between regions much easier.”
Deepdub’s AI tech means it doesn’t need to worry about the origin language or how many voices need to be dubbed, which historically have been key factors with traditional methods.
Deepdub is also seeing interest in the Latin American Spanish market, meaning not necessarily just Latin America but dubbing for Spanish speakers globally, including in the U.S. and elsewhere. From there, FAST channels are moving into Europe in early stages, with Germany as an emerging market, he said.
SaaS platform, voice artists
Deepdub started as a white-glove managed service, handling all aspects of dubbing from start to finish. But about a year ago it introduced Deepdub Go, a software-as-a-service platform that provides access to AI tools in a do-it-yourself setup where users collaborate with the AI elements.
Multiple dubbing studio customers have signed up for the tool, according to Krakowski, such as Babelto, an agency specializing in dubbing content for short-form content creators.
SaaS represents an emerging model for Deepdub, while managed services still account for the majority of its business; Krakowski said the split is currently 80-20. He expects that mix to stay roughly the same, as the company anticipates simultaneous growth in both SaaS and managed services.
Recent additions to its product roadmap include voice-to-voice cloning technology as well as an accent control tool that can instantly add, change or remove nuanced accents in voices across more than 130 languages.
The use of AI in TV and film was a point of negotiations last year when SAG-AFTRA-represented Hollywood actors went on strike in the U.S. before finalizing a new contract. Asked about implications for AI dubbing and talent, Krakowski acknowledged valid concerns and said the company is “very much involved in the process” in multiple ways.
He said Deepdub wants to ensure everything is done legally, morally and with the right privacy protections in place. To that end it has put infrastructure in place, including documentation and its own bank of voices, as well as professional voice artists and actors who are part of a voice artist royalty program Deepdub launched following the strike. Clients can opt to use the original voice in their content, pick from a bank of synthetic voices or use a professional voice artist who is compensated through Deepdub’s program.
Looking ahead, the company’s focused on scaling and technologies at the forefront like its accent control product, which Krakowski said extends “the original vision of being able to take content into places where it wouldn’t go before.”