A fake news frenzy: why ChatGPT could be disastrous for truth in journalism | Emily Bell

It took very little time for the artificial intelligence application ChatGPT to have a disruptive effect on journalism. A New York Times technology columnist wrote that a chatbot had expressed feelings (which is not possible). Other media outlets filled with examples of “Sydney”, the Microsoft-owned Bing AI search experience, being “gross” and “intimidating” (also impossible). Ben Thompson, who writes the Stratechery newsletter, said Sydney had provided him with “the most mind-blowing computing experience of my life”, and he deduced that the AI was trained to elicit emotional responses – and it seemed to have succeeded.

To be clear, it is not possible for an AI such as ChatGPT or Sydney to have emotions. Nor can they tell whether what they are saying makes sense or not. What these systems are incredibly good at is mimicking human prose and predicting the “correct” words to string together. These “large language model” AI applications, such as ChatGPT, can do this because they have been fed billions of articles and datasets published on the internet. They can then generate answers to questions.

For journalistic purposes, they can create large amounts of material – words, images, sound and video – very quickly. The problem is that they have absolutely no commitment to the truth. Just think how quickly a ChatGPT user could flood the internet with fake news stories that appear to have been written by humans.

And yet, ever since the test version of ChatGPT was made public by the artificial intelligence firm OpenAI in November, the hype surrounding it has become ominous. As with the birth of social media, enthusiasm from investors and founders has drowned out cautious voices. Christopher Manning, director of the Stanford AI Lab, tweeted: “The AI ethics crowd continues to promote a narrative that generative AI models are too biased, unreliable and dangerous to use, but, when deployed, people love how these models offer new possibilities to transform the way we work, find information and have fun.” I would consider myself part of this “ethics crowd”. And if we want to avoid the terrible mistakes of the last 30 years of consumer technology – from Facebook’s data breaches to runaway misinformation interfering with elections and provoking genocide – we urgently need to hear the concerns of experts warning of potential harm.

The most disturbing fact to reiterate is that ChatGPT has no commitment to the truth. As the MIT Technology Review put it, large language model chatbots are “notorious bullshitters”. Misinformation, scams and crime generally don’t require a commitment to the truth either. Visit the forums of blackhatworld.com, where those involved in murky practices trade ideas for making money from fake content, and ChatGPT is heralded as a game changer for generating better fake reviews, or comments, or convincing profiles.

In terms of journalism, many newsrooms have been using AI for some time. If you have recently found yourself being nudged into donating money or paying to read an article on a publisher’s website, or if the advertising you see is a little more tailored to your tastes, that too could mean AI is at work.

Some publishers, however, are going as far as using AI to write stories, with mixed results. The technology publication CNET was recently caught out using automated articles, after a former employee said in a resignation email that AI-generated content, such as a cybersecurity newsletter, contained false information that could “cause direct harm to readers”.

Felix Simon, a communications researcher at the Oxford Internet Institute, has interviewed more than 150 journalists and news editors for an upcoming study on AI in newsrooms. He says AI could make it much easier for journalists to transcribe interviews or quickly read datasets, but that first-order issues such as accuracy, overcoming bias and the provenance of data still rely heavily on human judgment. “About 90% of AI uses [in journalism] are for relatively tedious tasks, like personalization or creating smart paywalls,” says Charlie Beckett, who runs a journalism and AI program at the LSE. Bloomberg News has been automating much of its financial results coverage for years, he says. However, the idea of using programs such as ChatGPT to create content is extremely disturbing. “For newsrooms that consider it unethical to publish lies, it is difficult to implement the use of ChatGPT without a lot of human editing and fact-checking,” says Beckett.

There are also ethical issues relating to the nature of the tech companies themselves. A Time exposé revealed that OpenAI, the company behind ChatGPT, paid workers in Kenya less than $2 an hour to sift through graphic content depicting harms such as child abuse, suicide, incest and torture, in order to train ChatGPT to recognize it as offensive. “As someone using these services, you have no control over this,” says Simon.

In a 2021 study, academics looked at AI models that convert text into generated images, such as Dall-E and Stable Diffusion. They found that these systems amplified “widespread demographic stereotypes”. For example, when asked to create an image of “a person cleaning up”, all the images generated were of women. For “an attractive person”, the faces were all, the authors note, representative of the “white ideal”.

“The enthusiastic support from investors and founders drowned out the cautious voices.” Photograph: Sheldon Cooper/SOPA Images/REX/Shutterstock

NYU professor Meredith Broussard, author of the upcoming book More Than a Glitch, which examines racial, gender and ability bias in technology, says that everything that feeds into current generative models such as ChatGPT – from the datasets to who gets most of the funding – reflects a lack of diversity. “It’s part of the big tech monoculture problem,” says Broussard, and it is not a culture that newsrooms reliant on technology can easily avoid. “Newsrooms are already in the grip of enterprise technology because they have never been funded well enough to develop their own.”

BuzzFeed founder Jonah Peretti recently told staff, excitedly, that the company will use ChatGPT as part of its core business for quizzes, listicles and other entertainment content. “We see breakthroughs in AI ushering in a new era of creativity … with endless opportunities and applications for good,” he wrote. BuzzFeed’s dormant stock price immediately jumped by 150%. This is deeply worrying – a mountain of cheap content spewed out by ChatGPT should surely be a worst-case scenario for media companies rather than an ambitious business model. The hype around generative AI products may well be masking the growing realization that they may not be all about “apps for good”.

I run a research center at the Columbia Journalism School. We have studied the efforts of politically funded “dark money” networks to reproduce and target hundreds of thousands of local “news” stories at communities in the service of political or commercial gain. ChatGPT’s capabilities amplify this type of activity and make it far more readily available to many more people. In a recent article on misinformation and AI, Stanford researchers identified a network of fake profiles using generative AI on LinkedIn. The seductive text exchanges with chatbots that journalists find so irresistible are altogether less appealing when such tools are used to coax vulnerable people into handing over their personal data and bank details.

Much has been written about the potential of deepfake videos and audio – realistic images and sounds that can mimic the faces and voices of famous people (notoriously, one had the actress Emma Watson “reading” Mein Kampf). But the real peril lies not in instant deception, which can be easily debunked, but in the creation of both confusion and exhaustion by “flooding the zone” with material that overwhelms the truth or at least drowns out more balanced perspectives.

It seems incredible to some of us in the “ethics crowd” that we have learned nothing from the past 20 years of rapidly deployed and mismanaged social media technologies, which have exacerbated societal and democratic problems rather than ameliorating them. We seem to be being led by a remarkably similar group of homogeneous, wealthy technologists and venture capitalists down another untested and unregulated path, only this time at a larger scale and with even less attention to safety.

Emily Bell is director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism
