Traditional performers own their image and likeness—rights protected by publicity law. AI actress Tilly Norwood scrambles this framework entirely. Professor Stacey Lee examines how AI fundamentally disrupts professional contractual relationships, the implications for other industries, and three necessary next steps.
The Tilly Norwood problem: When AI innovation turns into abdication
When talent agents in Hollywood began circling Tilly Norwood in late September, the backlash was immediate. Actresses threatened boycotts. SAG-AFTRA condemned the move. The Gersh Agency publicly refused representation. The reason? Tilly Norwood doesn't exist—she's an AI-generated character created by Dutch producer Eline Van der Velden's company Particle6. Her near-signing with a Hollywood agency exposed fundamental gaps in how we govern artificial intelligence in creative industries.
While union concerns about job displacement are well-founded, the most legally complex issues cut deeper: Who owns an AI performer? Who represents their interests? What contractual frameworks apply? And who governs this new category of synthetic talent? These aren't hypothetical puzzles. They're urgent governance challenges that entertainment, health care, and every other industry using AI to simulate human expertise will soon confront.
The ownership paradox
Traditional performers own their image and likeness—rights protected by publicity law in most states. Tilly Norwood scrambles this framework entirely. She was created using generative AI trained on "thousands of copyrighted films and performances." Multiple actresses, including Scottish performer Briony Monroe, have alleged that their likenesses and mannerisms appear in Norwood's synthetic performance.
California's AB 2602 (2024) and New York's Senate Bill 7676B (2024) now require "reasonably specific descriptions" of how digital replicas will be used in contracts. But these laws assume a human performer is consenting to their own replication. What happens when the "performer" is assembled from fragments of hundreds of unconsented performances?
The ownership question becomes murkier when we consider authorship. Under current U.S. and EU copyright law, AI-generated works aren't automatically protected unless a human author can be identified. Particle6 owns Tilly Norwood as intellectual property—but on what legal basis? As a compilation work? A derivative creation? The answer matters enormously for liability, licensing, and future use.
Who represents an algorithm?
What does it mean for a talent agency to "represent" an AI entity?
Traditional talent representation involves a fiduciary relationship. Agents negotiate on their clients' behalf, advancing those clients' interests and careers. But Tilly Norwood isn't a performer. She's a product. Representing her isn't advocacy—it's marketing. The agency wouldn't be protecting her interests; it would be monetizing Particle6's asset.
This reveals how AI fundamentally disrupts professional relationships we've long taken for granted. Health care institutions are increasingly deploying AI "doctors" that provide diagnostic recommendations. Legal firms use AI assistants for contract review. When these systems make errors or cause harm, who bears responsibility? The confusion around representing Tilly Norwood's "interests" foreshadows similar confusion in every profession where AI might claim expertise.
The contract problem
Here's where negotiation meets governance. What contract terms apply when hiring an AI performer?
Traditional actor contracts address compensation, working conditions, usage rights, and creative approvals. They assume the presence of a human with agency who can refuse unreasonable demands. SAG-AFTRA won these protections through decades of collective bargaining, including the 2023 strike that secured consent requirements for digital replicas.
But an AI performer never refuses. It doesn't get tired, demand breaks, or push back on unsafe conditions. This creates a competitive race to the bottom. Why negotiate with human performers who have rights and needs when you can contract with synthetic ones who don't?
From a negotiation perspective, this represents a profound shift in leverage. When one party to an agreement has no interests of its own and exists solely to generate profit for its owner, traditional contract principles—good faith, fair dealing, mutuality of obligation—start to break down.
The governance gap
The Tilly Norwood controversy reveals something more troubling: we lack coherent governance frameworks for AI that can perform human roles.
The EU's AI Act requires transparency when synthetic content is produced and mandates documentation of training data sources. But U.S. federal AI regulation remains piecemeal. California's new laws weren't designed to address synthetic performers created from scratch rather than replicated from existing actors.
This governance vacuum creates three dangerous consequences. First, it allows companies to deploy AI performers without clear accountability when they cause harm—whether through biased outputs, misrepresentation, or economic displacement. Second, it creates regulatory arbitrage opportunities where companies might jurisdiction-shop to avoid oversight. Third, it undermines public trust in both AI systems and the institutions that deploy them.
Equally troubling is the industry response. Van der Velden insists Tilly is "not a replacement for a human being, but a creative work—a piece of art." Yet her company actively sought talent agency representation for this "art." This kind of semantic gamesmanship—calling something art when convenient, but marketing it as talent when profitable—makes governance nearly impossible.
What health care can learn
For those of us in health care policy, the Tilly Norwood problem should sound familiar. We're seeing similar governance gaps with AI diagnostic tools, algorithmic triage systems, and synthetic clinical assistants. The same questions apply: Who owns the AI's output? Who ensures its accuracy? What happens when it causes harm?
The entertainment industry's struggle with Tilly Norwood is a preview of health care's coming reckoning. We need robust contractual frameworks that specify AI use, liability allocation, and human oversight requirements. We need governance structures that can actually enforce accountability. And we need representation models that preserve human agency even when AI assists decisions.
Three principles for action
The Tilly Norwood controversy won't be resolved by simply refusing to sign AI performers to talent agencies. The technology exists. More AI performers are coming. The question is whether we'll develop governance frameworks that preserve human dignity, creative labor value, and professional standards—or whether we'll allow synthetic entities to erode protections that took generations to establish.
Three governance principles should guide our response:
First, transparency requirements. Any entity using AI to perform human roles should disclose training data sources, obtain proper consent from identifiable individuals whose work contributed to the AI, and clearly label AI-generated content.
Second, liability frameworks. Companies deploying AI performers must bear clear responsibility for the harm caused by those systems, including copyright infringement, misrepresentation, and economic damages.
Third, professional standards preservation. Industries should not allow AI to circumvent labor protections, safety requirements, or ethical standards that apply to human practitioners.
The question isn't whether AI will transform creative and professional work—it already has. The question is whether we'll govern that transformation in ways that protect human workers, preserve accountability, and maintain trust in our institutions.
Right now, we're failing that test. Tilly Norwood's near-signing with a Hollywood agency should serve as a wake-up call. When we don't proactively build governance frameworks for emerging technologies, we end up with synthetic "talent" competing against human professionals in markets designed for people.
That's not innovation. That's abdication.
Stacey B. Lee, JD, is a professor at Johns Hopkins Carey Business School and Bloomberg School of Public Health, where she teaches health care law, policy, and negotiation. She is the author of "Transforming Healthcare Through Negotiation" and "A Practical Field Guide to Transforming Healthcare Negotiation." Her research focuses on the intersection of law, negotiation, and health care systems—including emerging governance challenges in AI-enabled health care.