What’s yours isn’t mine: AI and intellectual property

Why it matters:

Kathleen Day, lecturer in finance, business communication, and law and ethics at the Johns Hopkins Carey Business School, explains how artificial intelligence may violate intellectual property law.


Computers that think and feel as humans do still belong to the future. But astounding advances have given AI the ability to write papers on everything from Hamlet to quantum physics, to drive our cars, and to aid in medical research and diagnosis.

Developers of generative AI feed machines thousands of examples—pictures of human faces, works of visual art, literature—and ask computers to detect patterns they can use to produce something similar. Through this trial and error, machines learn to generate images and documents that are increasingly hard to distinguish from work created by humans. The question is: When computers detect patterns in copyrighted material, does what they generate amount to plagiarism, and thus a violation of intellectual property law?

What is at stake?

We all benefit from new drugs, improved procedures, and myriad other advances AI has helped deliver. But what if the people whose work led to these breakthroughs lose the incentive to keep producing groundbreaking research? If AI uses their intellectual property without permission or compensation, they may have less reason to produce more. That halts progress for everyone, and, paradoxically, for AI itself, as the well of examples computers analyze could dry up.

Failure to protect intellectual property can discourage writers, artists, and other creators from producing new work, choking off the wellspring of human creativity—the very thing computers draw on to learn to mimic us.

Diminishing returns

Because this genie can’t be put back in the bottle, generative AI will become ever more pervasive. So how can we protect intellectual property?

Ongoing intellectual property lawsuits will help settle the legal questions, but even if the AI programmers win, their headache could persist. They already worry they are running out of new material with which to train AI systems, which leads them to reuse material—specifically, to train AI on AI-generated products. That creates a synthetic base of examples produced without human review or intervention, a feedback loop with the potential to yield increasingly poor learning outcomes.

Optimizing AI for ethical use

To iron out these problems and use AI to its fullest potential, we must address intellectual property rights so that creators—writers, artists, and researchers alike—will continue to share their work for all to learn from and enjoy.

In the meantime, how can you use generative AI ethically? Here are some tips:

  • Always disclose when you use AI, whether you’re writing an email, completing an assignment, or submitting a project at work.
  • Triple-check any AI output for plagiarism: verify the sources it drew on, and cite any of that information you use in your own work.
  • Be cautious about false information and “AI hallucinations”—a phenomenon in which AI generates outputs based on insufficient or biased data, incorrect modeling assumptions, or even material intentionally manipulated by bad actors. Don’t believe everything AI generates, and confirm that the information comes from multiple reliable sources.

We’re living in a time that will be remembered as the new age of artificial intelligence. Young professionals are pioneering new territory as more uses for the technology are uncovered. But as with any new product that hits the market, it is best to be cautious.

Authored by Kathleen Day, MBA, MS

A business author and journalist, Kathleen Day, MBA, MS, is a full-time lecturer at the Johns Hopkins Carey Business School specializing in financial crises and how they spread, corporate governance, and business communication, particularly during crises. Her related interests include the history of the corporate form; government regulation and oversight; lobbying and campaign finance; ethics; crisis communication; antitrust; and the application of artificial intelligence, including in finance.
