What’s the best way to protect privacy in a tech-loving market?

Why it matters:

Breaches of personal information and the rapid rise of AI raise new questions about the tension between privacy and innovation.

As everything from Bumble to Bitcoin traffics in personal data, there is no single law that governs privacy and security nationwide. So what is the best way to protect privacy without stifling commerce?

The recent Panel on the Regulation of Data Privacy in the U.S., convened by Johns Hopkins Carey Business School Associate Professor Itay Fainmesser at the Johns Hopkins University Bloomberg Center in Washington, D.C., brought together an interdisciplinary group of experts to explore the intricacies and implications. 

Five rules in the U.S.

According to Federal Trade Commission attorney Alejandro Rosenberg, who did not speak for the commission or its entities, there are five main ways the FTC can try to protect individual privacy.

The FTC Act allows for action against any non-federal entity whose privacy protections are considered “unreasonable or inappropriate”: in essence, any representation, omission, or practice that leads consumers to act against their best interests. The Gramm-Leach-Bliley Act is aimed at financial institutions; the Fair Credit Reporting Act covers credit bureaus and other consumer reporting agencies; the Health Breach Notification Rule applies to entities not covered by HIPAA; and the Children’s Online Privacy Protection Act governs sites that engage with, or knowingly collect data from, children under age 13.

“Companies are very aware of this policy,” Rosenberg said. “My daughter just turned 13 the other day, and the day of her birthday, Gmail sent a message that I could relinquish control of her account.”

Matters of the heart

Some dating apps collect highly sensitive information. Panelist Daniel Sokol, a professor at the University of Southern California’s Gould School of Law and Marshall School of Business, pointed to Grindr, on which users may disclose their HIV status. In April, a global class action law firm filed suit in the United Kingdom, alleging that the app had improperly sold that information to third parties. But did users consent to sharing that information, and did they understand what they were doing?

Similarly, if a hospital checks your heart rate, that information is protected by HIPAA; but when your smartwatch stores the exact same information, it is not subject to those rules.

“We consent to all kinds of things we don’t think about,” Sokol said. 

What are we agreeing to?

Often, an app or website will ask for consent to collect and share “data,” but that word is subject to interpretation. 

“We hardly give consent for sharing of data. We often give consent for sharing our behavior,” said economic theorist and MIT Sloan Professor Alessandro Bonatti. This means agreeing to share your browsing history and other information that allows marketers to target their audiences and then direct advertising to them via third-party optimizers. It’s what lets an ad follow us around the internet once we’ve clicked on a product on one site, or searched for it on Google. 

Bonatti’s recent research found that if a platform has an informational advantage over consumers, no matter how small, the platform can completely control consumers’ shopping behavior, for example by steering them away from sellers who don’t run advertising campaigns on the platform. A second recent study found that platforms should match companies and consumers effectively and even adopt “best-value pricing” policies to ensure that the product served up to the consumer provides greater value than competing, non-sponsored products.

Bad for competition?

In this sense, targeted marketing may seem good for consumers, at least for those who place little value on behavioral privacy. But privacy is not all that's at stake.

According to Bonatti’s models, because platform services are free for consumers, a platform with market power will drive up the cost of advertising: it employs mechanisms, such as best-value pricing, that diminish competition among advertisers, and charges advertisers for the service. As a result, ad prices could increase and be passed on to consumers. In contrast, if targeted marketing were banned, advertisers would be less able to find new customers and match them well with products.

One potential solution, Bonatti said, may be to use “coarser” consumer information—data with enough detail that platforms can identify the most efficient matches, but not so much as to put consumers at a disadvantage.

But is there such a thing? Could training a personalized model bypass privacy regulations to infer consumer traits at the individual level? Does limiting personal data use slow down the learning process that could help?

Now we’re in AI territory. And that brings up a whole new question about privacy.

Artificial intelligence

In most cases, consumers can decide whether they want to share data by opting in or out, or selecting what is collected and used. 

“For AI, that’s inherently impossible,” said panelist and University of Arizona Eller College of Management Associate Professor Laura Brandimarte.

Brandimarte’s research includes a survey-based experiment in the U.S. and Europe, where there are very different cultural perspectives on privacy and data-sharing. The research measured people’s willingness to adopt an AI-enhanced email app that would access the user’s email exchanges, learn the user’s style, and write emails on the user’s behalf. The experiment varied three factors: opt-out vs. opt-in consent, whether users understood there was a privacy issue at stake, and the location of the storage server. The research also considered individual factors: risk preference, trust, explicit concerns, and attitudes toward AI and algorithmic decision-making.

“In the U.S., people didn’t care about opt-in or opt-out, or whether the risk was salient,” Brandimarte said. “What they do care a lot about is where the servers are located. The location of the servers where data are stored, and therefore the legislation that governs that data, is very important to users.”

The impact of regulation

The European Union’s General Data Protection Regulation, which took effect in 2018, is perhaps the world’s most restrictive privacy regulation. Yet early evidence suggests the GDPR harms not only competition but also consumers: the short-term gain in privacy protection may be outweighed by a long-term loss of venture capital investment.

“We have one-third fewer apps in Europe post-GDPR,” Sokol said.

Domestically, the restrictive California Consumer Privacy Act gives residents not only the right to know about and opt out of personal data collection, use, and sharing, but also the right to delete, correct, and limit the use of some information. It also protects them against discrimination for exercising those rights. But the law took effect in January 2020, with some provisions added later; it’s too early to know its impact on consumers or businesses.

“When dealing with an innovative sector, it’s really important to be able to weigh the tradeoffs both short-term and long-term, which we haven’t always seen around the world,” Sokol said.

So what would work better? 

Asked that question, the panel fell silent until Rosenberg ventured a response.

“It’s like the adage about democracy: ‘It’s the worst form of government except for everything else.’ It’s what we’ve got right now. We hope something better comes along.”
