On March 15th, during a nationally televised Oklahoma City Thunder game, ESPN put an AI-generated graphic of Chet Holmgren on screen. The image was distorted, unrecognizable, and immediately went viral for all the wrong reasons. Fans roasted it. Holmgren himself changed his profile picture to a meme about it, posting: "This gotta be Ai dawg I'm tired of seeing this."
The internet called it an AI fail, and the sports media world moved on in a day.
I want to stay on it a little longer. Not to pile on. But because the conversation everyone had about this incident missed something important.
The Incident Isn't the Story
The real story isn't that an AI graphic looked bad on TV. It's that the sports media industry - one of the most sophisticated content operations on the planet - is still working out where AI fits into its workflow and what guardrails need to exist around it.
I've spent 25 years working in sports and entertainment marketing. I know what it means to put a professional athlete's image on a national broadcast or in a national campaign. There are layers of approval, contractual obligations, and brand protections that exist precisely because an athlete's face and likeness are not content assets. They are the athlete's business. Their identity. Their livelihood.
This isn't a knock on any single organization. Every major sports media operation is figuring this out right now. The question is whether the industry gets ahead of it or waits for the next incident to force the conversation.
The Distinction the Industry Needs to Make
Here is the distinction that nobody in sports media seems to be making clearly:
AI is an ideation tool. Not an output tool.
When I think about how AI should work inside a sports media organization, a creative agency, or a broadcast team, it looks like this: AI helps you get to 10 concepts faster. It accelerates the brainstorm, generates the reference images, roughs in the layouts, drafts the copy options. Then humans - editors, creative directors, legal, brand managers, and yes, the athlete's representation - take what the AI produced and do the actual work.
What AI is not: a replacement for the approval chain. A workaround for the rights conversation. A production shortcut that lets you skip the part where you ask if this is appropriate.
The moment AI-generated content involving real people's likenesses moves from ideation into production and broadcast, you've left the ideation zone and entered territory that requires the same human oversight, legal rigor, and creative standards as any other piece of content.
The Bigger Misunderstanding
The reaction to the Chet Holmgren situation reflects something I see constantly in conversations about AI: the assumption that AI is what produces bad content.
It doesn't. People produce bad content. AI just makes it faster.
The organizations getting AI right in sports and entertainment aren't using it to generate athlete faces for broadcast graphics. They're using it to analyze audience engagement data. To generate 50 headline variations for the social team to review and edit. To accelerate the pre-production phase of campaigns so the human creatives have more time to execute properly. To build internal knowledge systems that make their teams smarter and faster.
That's where the value is. That's where AI actually helps. And it's invisible. Nobody posts a story about it because it doesn't produce a bizarrely distorted image of a star NBA player.
What the Sports Industry Needs to Figure Out
The leagues, the broadcast networks, the agencies - everyone working at the intersection of sports, media, and AI needs to establish a clear line: what decisions require human approval before AI output goes public, and what happens when that line gets crossed?
Athlete likeness is an obvious one. But so is historical footage. Crowd audio. Announcer voice synthesis. Statistical visualization. All of these touch identity, rights, and trust - and when you add AI to the process, mistakes in those areas happen faster and at greater scale.
The ESPN situation is going to happen again. And again. Until the industry decides this is worth having a real policy conversation about instead of just dunking on whoever got caught.
I'd rather be part of building that conversation than watching from the sideline.
The sports organizations that figure this out first - that establish the internal frameworks for when AI assists and when humans own the call - are going to have a real competitive advantage. Not just in avoiding bad headlines. In producing better creative, faster, with more confidence.
That's the work worth doing. And it starts with being willing to name the question clearly: what is AI actually for here?
Michael Molnar is the founder and editor of The Useful Daily and Managing Partner of Glow, an independent creative agency with 25 years of experience in sports and entertainment marketing. He works with professional sports leagues, streaming platforms, and major brands at the intersection of culture, creativity, and emerging technology.