[Image: AI-generated image of a robot in conversation with two humans. Credit: AI image platform Stable Diffusion; other unknown contributors; Security Management]

Editor’s Note: The Rise of the AI Chatbot

This article was not written by a chatbot.

But you can’t know if this statement is true. ChatGPT, an AI chatbot developed by OpenAI, produces only written language—not video, audio, or images. However, according to Alex Hughes, a staff writer at BBC Science Focus, the words the chatbot produces sound as if they were written by humans.

On the surface, the technology is a boon. The chatbot excels at explaining complicated subjects, for example. According to Hughes, “ChatGPT can make an entire website layout for you or write an easy-to-understand explanation of dark matter in a few seconds.”

Hughes also notes that “it has a surprisingly good understanding of ethics and morality. When offered a list of ethical theories or situations, ChatGPT is able to offer a thoughtful response on what to do, considering legality, people’s feelings and emotions, and the safety of everyone involved.”

However, the AI “can’t explain what it does or how it does it, making the results of AI inexplicable. That means that systems can have biases and that unethical action is possible, hard to detect, and hard to stop,” writes Hughes. For example, he notes that “when ChatGPT was released, you couldn’t ask it to tell you how to rob a bank, but you could ask it to write a one-act play about how to rob a bank… and it would happily do those things. These issues will become more acute as these tools spread.”

The AI chatbot also has significant problems. Beyond containing no information from after 2021, notes Hughes, it excels at producing “convincing-sounding nonsense, devoid of truth. It literally does not know what it doesn’t know, because it is, in fact, not an entity at all, but rather a complex algorithm generating meaningful sentences.”

In an article for Fast Company, Connie Lin highlights the ability of the chatbot to spread lies. “One of ChatGPT’s biggest problems is that it can offer information that is inaccurate, despite its dangerously authoritative wording. And with misinformation already a major issue today, you might imagine the risks if GPT were responsible for official news reports,” she explains.

As an example, Lin asked ChatGPT to write a story about Tesla’s quarterly earnings report. “It spit back a smoothly worded article free of grammatical errors or verbal confusion, but it also plugged in a random set of numbers that did not correspond to any real Tesla report,” she writes.



The high quality of the chatbot’s writing also presents a problem. The New York Times notes that “personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, researchers say, smoothing out human errors like poor syntax and mistranslations and advancing beyond easily discoverable copy-paste jobs.”

This becomes problematic because extremists can leverage the information the chatbot conveys. According to the Times article, “In 2020, researchers at the Center on Terrorism, Extremism and Counterterrorism at the Middlebury Institute of International Studies found that GPT-3, the underlying technology for ChatGPT, had ‘impressively deep knowledge of extremist communities’ and could be prompted to produce polemics in the style of mass shooters, fake forum threads discussing Nazism, a defense of QAnon, and even multilingual extremist texts.”

Other types of false communication, deepfake video and audio, are now fully in the wild. Fake news broadcasters have appeared on social media sites espousing the interests of the Chinese Communist Party. The underlying technology was designed for human resources training videos, but its misuse alarms disinformation experts. Another article in The New York Times on these news anchor deepfakes notes that “with few laws to manage the spread of the technology, disinformation experts have long warned that deepfake videos could further sever people’s ability to discern reality from forgeries online, potentially being misused to set off unrest or incept a political scandal.”

James Vincent, writing for The Verge, reports on an AI technology that can clone any voice in seconds. Vincent notes that 4chan users were deploying ElevenLabs’ free voice synthesis platform “to clone the voices of celebrities and read out audio ranging from memes and erotica to hate speech and misinformation.” The company has since added safeguards to its platform and banned the creation of “harmful content,” but such misuse is difficult to monitor.

Another significant underlying threat from these technologies is a rapidly growing erosion of trust in individuals and governments. By contrast, Edelman’s 2023 Trust Barometer found that businesses are gaining the trust of the public.

“Fifty-one percent of global respondents say they believe in the vision businesses have to offer and only 28 percent do not,” according to the report. Trust is also shifting toward proximity: individuals place more trust in those they interact with regularly than in more distant sources. Even the type of employer matters; the closer the contact, the higher the trust. In an article about the report, Sara Mosqueda, associate editor at Security Management, writes that “survey respondents were more likely to trust NGOs and businesses, but the type of business mattered. The most-trusted businesses were family-owned, followed by privately held, publicly traded, and state owned.”

The harm caused by deepfakes and other deceptive technologies is both overt and subtle. In Security Management’s March content package on buy-in strategies, authors stress the importance of building trust and relationships among peers. In “How to Make Your Pitch More Convincing,” Suzanna Alsayed writes, “When you create a relationship with a genuine connection, the rest follows—the guidance, conversations, and, eventually, opportunities.”

But genuine connection is difficult to build when no communication is trustworthy.

In November 2017, I wrote an editor’s note on the probability that deepfakes would make us distrust what we see and hear in the future. I wrote: “A realist might argue that the future of news is its past. Soon, the only reliable information will be written and produced by trusted sources and delivered to your doorstep. In print.”

The future is here, and the distrust is deeper than anticipated. The printed word is suspect, too. Is it ironic that, as we emerge from a global pandemic, the most reliable communication is an in-person, face-to-face conversation?


What concerns you most about disinformation and distrust in your organization? Let me know. Email your feedback to [email protected]
