Manipulation of Information in the Era of ChatGPT Artificial Intelligence

Misinformation Era: The Impact of ChatGPT

Article: The National Security Threat Posed by AI-Powered Chatbots, as Demonstrated by ChatGPT

ChatGPT, an iteration of a language model developed by OpenAI, has gained significant popularity since its launch as a "research preview." With over one million users signing up within five days, the technology has the potential to revolutionize the way we interact with machines. However, as chatbots and other AI deepfake technologies advance, there is an increasing need to examine their potential to be exploited by hostile foreign actors [1].

The author, Maximiliana Wynne, a researcher specializing in threats to international security, discusses the dual-use challenge of ChatGPT. The technology can be used for good, such as bridging linguistic barriers and facilitating access to information, as illustrated by an anecdote from Microsoft CEO Satya Nadella at the World Economic Forum's annual conference [5]. But it can also be leveraged as a force multiplier to promote false narratives.

A study involving ChatGPT found that it delivered false and misleading claims about consequential topics, including COVID-19, the war in Ukraine, and school shootings, in response to 80 percent of prompts containing erroneous narratives [1]. This raises concerns about users relying on ChatGPT for information instead of conducting their own research. Individuals with little media-literacy training are especially susceptible to consuming incomplete or false information from ChatGPT.

One of the key threats posed by AI-powered chatbots like ChatGPT is the generation of false information. Large Language Models (LLMs) like ChatGPT can produce fabricated but plausible-sounding statements, including false news reports, biased or adversarially modified outputs, and misinformation that evades traditional fact-checking [1]. For example, an incident in 2023 involved falsified news about a train collision generated by ChatGPT and circulated extensively, illustrating how AI can be exploited to incite social unrest [1].

Another concern is the manipulation of public opinion and election interference. Widespread concerns exist about AI amplifying fake news to destabilize democratic processes. Nearly half of Americans worry about AI destabilizing elections and flooding the internet with disinformation, which reflects the perceived risk of AI-enabled propaganda and manipulative content distribution [2][5].

There is also a risk that AI-powered chatbots could be used to destabilize countries by manipulating public opinion and spreading disinformation. Adversaries can poison training data, inject biased labels, or corrupt reinforcement learning feedback loops to degrade AI model integrity, which may result in the AI producing misleading outputs unknowingly or systematically favoring certain narratives [1].

Furthermore, AI-generated disinformation poses challenges for content moderation and verification: synthetic content can trend on social media before it is verified, complicating defenses against viral misinformation campaigns [3].

In response, government strategies focus on strengthening AI governance, building AI-focused information sharing and analysis centers, and encouraging export controls and alliances that reduce adversarial access to sensitive AI technologies [4].

In conclusion, the misuse of AI chatbots like ChatGPT for spreading disinformation presents a multifaceted national security threat, enabling rapid, large-scale misinformation campaigns that erode public trust, democratic institutions, and the integrity of information ecosystems [1][2][4][5]. It is crucial to continue monitoring the evolution of these technologies and to develop strategies to conceptualize, preempt, and respond to threats at every step.

Image Credit: Focal Foto (adapted by MWI)

[1] Wynne, M. (2023). The Dual-Use Challenge of ChatGPT: A National Security Perspective. Military Review.
[2] Pew Research Center (2022). Americans' Views on AI and Its Impact on Society.
[3] Twitter (2021). Safety and Integrity on Twitter: An Update on Our Work to Protect the Health of the Conversation.
[4] White House (2021). Executive Order on Promoting Competition in the American Economy.
[5] Nadella, S. (2021). The Future Computed: Empowering Every Person and Every Organization on the Planet. Microsoft.
