As CTV platforms and streaming apps aim to improve content search and discovery for users, some have introduced or are considering AI-powered functionality that leverages large language models (LLMs) for conversational voice or text queries. But LLMs have limitations. Nielsen’s metadata unit Gracenote today debuted a new offering that promises to bolster the accuracy and relevance of entertainment information, helping to ground LLM-generated responses so streamers can serve up better results for even complex or highly specific content search prompts and discovery use cases.
Called the Gracenote Video Model Context Protocol (MCP) Server, it connects LLMs to Gracenote’s continuously updated knowledge base, validating, correcting and enriching responses to entertainment questions in real time.
By using the Gracenote Video MCP Server, the vendor said TV platforms can answer queries, recommend programming or drive tune-in based on a wide range of parameters, such as these three examples:
- Show me the episodes of Brooklyn Nine-Nine in which Jake references Die Hard.
- The Academy Awards are on this week. List the twenty highest-grossing Oscar-winning films from the last ten years.
- Where can I watch the Dodgers game tonight?
While we’re focused on streamers here, the new product isn’t just for CTV platforms and streaming apps. Gracenote SVP of Product Tyler Bell told StreamTV Insider it was developed to meet the needs of “a range of video providers interested in using LLMs for content search and discovery use cases,” which could also include MVPDs, which own their tech stacks and consumer-facing TV apps, as well as middleware or recommendation engine providers and tech platforms.
Bell explained how these different types of customers still have problems in common that the MCP Server can help address. Namely, “normalizing and harmonizing data on the back end, recommending and promoting content to users, and serving as the underlying cleverness for voice and free text.”
Connecting to AI agents, providing a toolbox
Alongside the rise of AI and LLMs have come so-called AI agents, or LLM-based applications that connect to tools and logic.
Per Bell, the MCP Server is essentially a component of an AI Agent that a Gracenote customer would build that could recommend shows, perform data manipulation or promote content based on watch histories, for example.
“An MCP Server customer can adjust the Model Context of the agent to serve their specific use case,” Bell said. “This is a toolbox rather than a very large hammer.”
The product can connect to any LLM that a Gracenote customer chooses for content search and discovery, including third-party LLMs such as ChatGPT, DeepSeek, Claude or Gemini, as well as an LLM run locally on a customer’s infrastructure.
Using the MCP Server, TV platform LLMs can answer questions based on both their own training data and Gracenote data. Answers can then be validated, normalized and enriched with additional context (like Gracenote IDs).
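To make the validate-and-enrich step concrete, here is a minimal sketch in Python. The knowledge base contents, ID formats and field names are illustrative inventions for this example, not Gracenote’s actual schema or API:

```python
# Hypothetical sketch: validate an LLM-proposed title against a knowledge base
# and enrich it with a stable identifier. All data and field names are invented.

KNOWLEDGE_BASE = {
    "brooklyn nine-nine": {"gracenote_id": "SH000000001", "title": "Brooklyn Nine-Nine"},
    "die hard": {"gracenote_id": "MV000000002", "title": "Die Hard"},
}

def ground_answer(llm_title: str) -> dict:
    """Check an LLM answer against the knowledge base; attach an ID if it matches."""
    entry = KNOWLEDGE_BASE.get(llm_title.strip().lower())
    if entry is None:
        # No match: flag the answer as unverified rather than passing it through.
        return {"title": llm_title, "verified": False}
    # Match found: return the canonical title plus the stable identifier.
    return {"title": entry["title"], "gracenote_id": entry["gracenote_id"], "verified": True}

print(ground_answer("brooklyn nine-nine"))
print(ground_answer("Nonexistent Show"))
```

The key design point is that an unmatched answer is flagged rather than silently returned, which is what distinguishes a grounded response from a plausible-sounding guess.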
Notably, the MCP Server is not exclusive, Bell said, meaning customers can introduce their own information at the same time to the LLM – either through a prompt or their own MCP server. The LLM will digest, analyze and synthesize that data together with Gracenote data, he explained.
Platforms benefit from the ability to search across entertainment data, but they can also restrict an AI agent to return only information contained in a specific catalog, either by using Gracenote Availability data or by incorporating their own content catalog.
“The end result is that our customers can create a service that searches the breadth of global entertainment data and subsequently filters by availability or entitlement, or it can restrict its search to content for a specific platform or service,” Bell said.
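The search-then-filter flow Bell describes can be sketched as a simple restriction step. The record shapes, IDs and function names below are hypothetical stand-ins, not Gracenote’s real data model:

```python
# Hypothetical sketch of filtering broad search results down to a platform's
# own catalog (availability/entitlement). IDs and fields are invented.

def filter_by_catalog(results: list, catalog_ids: set) -> list:
    """Keep only titles whose ID appears in the platform's own catalog."""
    return [r for r in results if r["gracenote_id"] in catalog_ids]

# Agent results spanning global entertainment data...
results = [
    {"title": "Die Hard", "gracenote_id": "MV000000002"},
    {"title": "Brooklyn Nine-Nine", "gracenote_id": "SH000000001"},
]
# ...restricted to what this particular service actually carries.
catalog = {"SH000000001"}
print(filter_by_catalog(results, catalog))
```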
Why does LLM-based content discovery need Gracenote data?
Gracenote’s announcement called out one problem with LLM-generated responses: they’ve been found to hallucinate – or provide an answer that sounds like it could be right but hasn’t been validated against real-world data.
According to Bell, “the heart of the problem with LLM hallucinations is that they are non-deterministic,” so “they have been trained on data but are not databases themselves.”
Given this, he likened LLM-produced answers to those generated “by a sophisticated form of autocomplete that effectively produces something that resembles a correct answer” but isn’t necessarily grounded in fact.
“MCP Servers mitigate this issue by providing LLMs factual information that allows them to correct, validate, and enrich their responses, like you might create an answer in your head and then verify it against an encyclopedia,” Bell explained.
The MCP Server also addresses another LLM shortcoming: models don’t have access to current or real-time data, meaning responses can be inaccurate or outdated by the time they’re prompted. The MCP Server, however, will be updated with the most recent human-verified Gracenote entertainment data as it’s generated.
“The MCP Server allows LLMs to get recent and up-to-date information, such as availability schedules and recent releases, that post-date their training data,” Bell noted.
Supporting AI agents, not training LLMs
So how would an AI agent answer a prompt if a streaming service or CTV platform decides to use the Gracenote MCP Server within its AI-powered content search and discovery?
According to the SVP, to answer a search query, an AI agent built with the MCP Server would either utilize its own training data to find the right answer and then ground that response in Gracenote’s knowledge base, or search Gracenote first and then leverage its own data.
An AI agent might also do both and then harmonize the answers, he added.
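The two orderings described above can be sketched as follows. The `llm_guess` and `kb_lookup` functions are stand-in stubs for a model call and a Gracenote MCP tool call respectively, invented purely for illustration:

```python
# Hypothetical sketch of the two answer strategies: ground an LLM guess in the
# knowledge base, or search the knowledge base first. All functions are stubs.

def llm_guess(query: str) -> str:
    """Stand-in for the model answering from its own training data."""
    return "Die Hard"

def kb_lookup(query: str) -> dict:
    """Stand-in for a knowledge-base (MCP tool) lookup."""
    return {"title": "Die Hard", "gracenote_id": "MV000000002"}

def answer_llm_first(query: str) -> dict:
    guess = llm_guess(query)
    record = kb_lookup(guess)  # ground the guess in the knowledge base
    if record["title"] == guess:
        return record
    return {"title": guess, "verified": False}

def answer_kb_first(query: str) -> dict:
    record = kb_lookup(query)  # search the knowledge base first...
    # ...then let the LLM phrase the result (stubbed here as simple formatting).
    record["summary"] = f"{record['title']} ({record['gracenote_id']})"
    return record
```

A real agent could run both paths and harmonize the results, as the article notes.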
“One of the great things about MCP and LLMs is that there are a number of ways to achieve outcomes that have been traditionally viewed as fairly intractable,” Bell commented.
To be clear, Gracenote’s MCP Server isn’t training LLMs on video and entertainment data.
Bell described the difference: LLM training affects the so-called neurons in a model, permanently changing the LLM itself.
In contrast, the MCP Server is one of several “LLM-adjacent technologies” meant to inform and enrich the model and output, without directly impacting the LLM neurons.
“Specifically, MCP tells the LLM what tools it has access to, and when and how to invoke those tools,” he explained.
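That tool-advertising role can be sketched with a toy manifest and dispatcher. The tool name, schema and handler below are hypothetical, not Gracenote’s actual tools or the MCP wire format:

```python
# Hypothetical sketch of an MCP-style tool manifest: the server advertises
# tools (name, description, input schema) and dispatches model-chosen calls.
# Tool names and schemas here are invented for illustration.

TOOLS = {
    "search_titles": {
        "description": "Search entertainment titles by keyword.",
        "input_schema": {"query": "string"},
        "handler": lambda args: [f"Result for {args['query']}"],
    },
}

def list_tools() -> dict:
    """What the server tells the model it can use: everything but the handler."""
    return {name: {k: v for k, v in spec.items() if k != "handler"}
            for name, spec in TOOLS.items()}

def invoke(name: str, args: dict):
    """Dispatch a tool call the model chose to make."""
    return TOOLS[name]["handler"](args)

print(list_tools())
print(invoke("search_titles", {"query": "Die Hard"}))
```

Because the model only ever sees the manifest, the server stays in control of what each tool actually does when invoked.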
Another benefit is that it makes Gracenote data more accessible and powerful through a fairly simple process.
“Customers can create an Agent with code or a third-party dashboard, and point the LLM to the MCP Server with a few clicks,” Bell said. “Onboarding takes just a few minutes, while integration will depend on the complexity of the customer’s infrastructure.”
Gracenote’s launch of the product comes just ahead of the IBC show in Amsterdam this month and at a time when Bell said consumers are getting more comfortable interacting with AI tools through voice prompts.
“As the quality of results improves, customer satisfaction increases,” he noted – which is a key factor for all streamers playing in a competitive space that has a known churn issue.
And “while we’re not sure that consumers will want to have a conversational back-and-forth dialogue with their TVs anytime soon, LLMs are great at using the unparalleled breadth of their training data to arrive at satisfying answers in response to increasingly complex user queries,” Bell added.