
Relying on generative AI tools for factuality can cause faux pas. Source: Depositphotos

ChatGPT: Truth or dare?


The habit of using generative AI models as search engines is contributing to the spread of misinformation

In today’s digital age, the rise of artificial intelligence has brought about numerous advancements in technology. One such innovation is ChatGPT, a language model known for its ability to generate human-like text.

The benefits of this tool have been widely discussed: saving time and resources, generating business ideas, creating innovative content, helping users find information quickly, and so on. While ChatGPT has proven to be a valuable aid on various occasions, there are concerns about its potential to spread fake news, as its answers are not always accurate.

Consider these cases

There are numerous examples of how ChatGPT can dangerously put us on the wrong track. Business Reporter writes about the chatbot not actually knowing an answer yet still confidently giving one, even when it is wrong. ChatGPT may even defend its answers and explain why they are correct. Unlike search engines, which present information directly from indexed sources, ChatGPT generates responses by making educated guesses based on its training data, which can sometimes lead to the unintentional spread of misinformation or fake news.

Moreover, the risk of generating fake news extends beyond ChatGPT's answer-creation process. As an AI chatbot, ChatGPT lacks the human ability to discern between opinion pieces and factual news articles, leading it to accept all information at face value. Given that 62% of internet users in 2021 reported having encountered misleading content, there is a significant chance that ChatGPT has also absorbed false information and may present it as a definitive answer. A tool like that can thus pose a major danger, especially in the field of journalism.

According to MIT Technology Review, “AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.” 

In 2023, The Guardian discovered that ChatGPT had made up fake articles under the newspaper's name. Its editorial team received an intriguing email: a researcher had come across a mention of a Guardian article, supposedly written by one of its journalists on a specific subject a few years before. But the piece could not be found on the newspaper's website or in search engines. The alleged author could not remember writing it, although the headline certainly sounded like something they would have written; it was a subject they were identified with and had a record of covering. In the end, it was concluded that the entire article had been invented by ChatGPT.

ChatGPT can make or break a business

ChatGPT is increasingly used in various business departments for tasks like creating marketing campaigns and developing products. However, there is a significant risk of ChatGPT providing incorrect information that teams may base entire projects on.

This could lead to reputational damage and financial loss if businesses invest in solutions based on inaccurate suggestions. The technology could leave a mark on other careers as well. In 2023, Steven A. Schwartz, a New York lawyer with thirty years of experience, used ChatGPT to write a legal brief, which he then submitted to the court. He even asked the chatbot whether the citations were real; ChatGPT assured him they were, and he believed that answer without verifying it. He submitted the brief with quotes and citations supporting his case, only to learn that they were fabricated, and he subsequently faced legal sanctions, as reported by CNN.

That is why, to reduce the risk of ChatGPT spreading fake news, it is essential for users to critically evaluate the information they encounter online. Fact-checking and verifying sources are crucial steps in combating the spread of misinformation.

While ChatGPT offers many benefits in terms of language generation and communication, there are inherent risks in its potential to spread fake news. By raising awareness about these risks and taking proactive measures to address them, we can work towards a more informed and responsible use of AI technologies in the digital age.

This article is part of Read Twice, an EU-funded project coordinated by Euro Advance Association. It targets young people and aims to counter disinformation and fake news by enhancing their ability to critically assess information, identify malicious and harmful media content and distinguish between facts and opinions, thus improving their media literacy competences.

The contents of this publication are the sole responsibility of its author and do not necessarily reflect the opinion of the European Union nor of TheMayor.EU.
