We’ve gotten familiar with some forms of AI already. Roomba can map your house and vacuum your floors, DALL-E will create images from descriptive text, and Siri or Alexa can complete a multitude of tasks with a simple voice command. What makes ChatGPT so captivating is its seamless use of human language. While its essays are impressive, Dai said that the system is not foolproof.
“It’s also very deceptive in the sense that it is incapable of telling whether what it writes is accurate,” he explained. “In fact, just based on my own extensive testing, I found that it makes tons of factual mistakes, but it does so in a confident, authoritative kind of a way.”
Dai also noted that ChatGPT’s answers seem to become increasingly repetitive and even “defensive” when it is asked the same question over and over again. ChatGPT’s answers also vary depending on the language used to ask the question, because its answers reflect the language of the source material ChatGPT draws from to formulate its response.
Risks or rewards
Like many tools at our disposal, ChatGPT holds both great promise and frightening potential. Will it replace jobs, or make it even harder for consumers to distinguish fact from fiction?
Dai explained, “This tool could pose a severe challenge to democracy, because it means that the cost of creating misinformation would become insanely low, such that it’s going to be nearly impossible for people to detect AI-created content. You can even make AI content seem more authentic by inserting typos and other errors and biases that make it seem even more authentic and personable. I think that’s the scariest part.”
At the same time, Dai said, there will likely be a premium on authentic writing and real thinking, which only humans can provide for now.
“Writing is not just writing; good writing reflects good thinking,” said Dai. “By taking a shortcut, I worry that people may lose that really valuable thinking skill.”