
Illustration by iStock; Security Technology

AI and Elections: Addressing New Tools to Spread Inaccurate Information

Voters will head to the polls in more than 50 countries during 2024. Not all of these elections will be free, fair, and open, but they will have lasting consequences and likely be targets for malicious actors.

“I am deeply concerned that democracy, including in the United States, is under greater threat than ever,” said U.S. Senator Mark Warner (D-VA), chair of the Senate Select Committee on Intelligence, at an intelligence briefing in March 2024. “Bad actors like Russia are particularly incentivized to interfere, given what is at stake in Ukraine. Poll after poll demonstrates that Americans are increasingly distrustful of traditional sources of information, while [artificial intelligence] provides the tools to spread sophisticated misinformation at unprecedented speed and scale.”

Information operations (IO) will likely target electorates before, during, and after the voting process to influence popular opinion, according to CrowdStrike’s Global Threat Report 2024. Adversaries are also likely to use artificial intelligence (AI) tools to generate “deceptive but convincing narratives,” while politically active partisans may use the same tools to create “disinformation to disseminate within their own circles,” CrowdStrike assessed.


“These issues were already observed within the first few weeks of 2024, as Chinese actors used AI-generated content in social media influence campaigns to disseminate content critical of Taiwan presidential election candidates,” the report said.

CrowdStrike said it anticipates that Russia and Iran will use IO against the European Union and the United States; China is likely to use IO against Indonesia, South Korea, and Taiwan.

“The overall polarization of the political spectrum in many countries amid continuing economic and social issues will likely increase the susceptibility of those countries’ citizenries to IO—particularly IO campaigns targeted at reinforcing those individuals’ negative opinions of political opponents,” according to the report.

At the Munich Security Conference in February 2024, technology executives announced their agreement to a new framework for responding to AI-generated deepfakes designed to fool voters. Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok all pledged their support at the conference, and 12 other companies—including X, formerly known as Twitter—have said they will sign the agreement, The Guardian reports.

The framework—the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”—asks signatories to work collaboratively on tools to detect harmful AI-generated content and to address its online distribution.

“Digital content addressed by the accord consists of AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote,” according to a release from the conference.

“Disinformation campaigns are not new, but in this exceptional year of elections—with more than 4 billion people heading to the polls worldwide—concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content,” said Christina Montgomery, vice president and chief privacy and trust officer at IBM, in a statement.


Nick Clegg, president of global affairs at Meta—the parent company of Facebook, Instagram, and WhatsApp—said it is vital to do everything possible to prevent people from being deceived by AI-generated content in this major election year.

“This work is bigger than any one company and will require a huge effort across industry, government, and civil society,” Clegg said in a statement. “Hopefully this accord can serve as a meaningful step from industry in meeting the challenge.”

Anna Makanju, vice president of global affairs at OpenAI, said in a statement that the company is committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content.

OpenAI is the parent company of ChatGPT and DALL-E—two widely used generative AI platforms. In a January 2024 blog post, the company shared some of the commitments it is making to address potential abuse of its products to threaten elections. Measures include limits on using the platforms to create deceptive content.

DALL-E, for instance, is instructed to decline user requests to generate images of real people, including political candidates. Builders are also prohibited from creating chatbots that mimic real people or institutions.

Moreover, OpenAI is developing the ability to implement the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials for images generated by DALL-E, and it plans to include attribution and links to the sources of information shared by ChatGPT.

“Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust,” OpenAI wrote.
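That provenance approach is concrete enough to illustrate. In JPEG files, C2PA credentials travel as JUMBF boxes inside APP11 marker segments; official tools such as c2patool read and cryptographically verify them. The Python sketch below is a minimal, unofficial illustration that assumes only that container layout: it scans a file's marker segments for the manifest container and does not parse or verify anything.

```python
"""Crude check for a C2PA manifest in a JPEG file.

A minimal, unofficial sketch. In JPEG, C2PA credentials are carried as
JUMBF boxes in APP11 (0xFFEB) marker segments, and the manifest store
box is labeled "c2pa". This script only looks for that container;
unlike the official C2PA tools, it does not parse the manifest or
verify its cryptographic signatures.
"""
import struct
import sys


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":  # SOI marker: not a JPEG at all
        return False

    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:    # lost sync with the marker stream
            break
        marker = data[pos + 1]
        if marker == 0xDA:       # SOS: compressed image data begins
            break
        # A two-byte big-endian length follows the marker (it counts itself).
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        segment = data[pos + 4:pos + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 JUMBF payload
            return True
        pos += 2 + length
    return False


if __name__ == "__main__":
    for name in sys.argv[1:]:
        found = has_c2pa_manifest(name)
        print(f"{name}: {'C2PA manifest present' if found else 'no manifest found'}")
```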

Just days before the Munich announcement, Meta shared that it will include visible markers on photorealistic images created using the Meta AI feature, as well as invisible watermarks and metadata embedded within the image files. This is in line with the Partnership on AI (PAI) best practices on managing AI-generated content, but it is not a fail-safe solution, Clegg wrote in a blog post on 6 February 2024, and Meta is working to develop classifiers so it can automatically detect AI-generated content.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” Clegg wrote. “People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.”
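The metadata half of that approach is machine-readable by design, and it is easy to see why it is not fail-safe: a label that lives in the file can be stripped from it. Below is a crude spot check, a sketch under one assumption: that the embedded label uses the IPTC DigitalSourceType value "trainedAlgorithmicMedia", the standard vocabulary term for media produced by a generative model. It cannot see invisible watermarks, which need dedicated detectors, part of why Meta is building classifiers.

```python
"""Crude scan for an IPTC "AI generated" label in an image file.

A minimal sketch assuming the embedded metadata uses the IPTC
DigitalSourceType value "trainedAlgorithmicMedia" to mark media
produced by a generative model. This is a raw byte search, not a
metadata parser, and it cannot detect invisible watermarks.
"""
import sys

# IPTC NewsCodes term identifying purely AI-generated media.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"


def looks_ai_labeled(path: str) -> bool:
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE in f.read()


if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = "labeled AI-generated" if looks_ai_labeled(name) else "no label found"
        print(f"{name}: {verdict}")
```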

Clegg’s post came after he announced in November 2023 that Meta will require advertisers to disclose when they use AI or other digital techniques to create or alter a political or social issue ad.

“This applies if the ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” Clegg explained. “It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Meanwhile, TikTok released a tool in 2023 that creators can use to label their own AI-generated content, and it began testing ways to label AI-created content automatically. The changes are meant to align with TikTok’s existing policy, which requires people to label AI-generated content that contains realistic images, audio, or video.

After signing on to the Munich agreement, Theo Bertram, vice president of global public policy at TikTok, said it was critical that the industry work together to safeguard communities from deceptive AI.


“This builds on our continued investment in protecting election integrity and advancing responsible and transparent AI-generated content practices through robust rules, new technologies, and media literacy partnerships with experts,” Bertram said.

The commitments in the Munich accord address some of the issues that most concern election observers, but they are ultimately not enforceable and do not include measures showing how the companies will accomplish them, wrote Lawrence Norden, senior director of elections and government at the Brennan Center for Justice.

“In this watershed year for democracy, companies can be much more forthcoming about how they’re guarding against the threats they helped unleash,” Norden explained in a blog post.

Norden outlined several measures he would like to see from the signatories, along with regular updates, on points that affect election integrity: new policies introduced on AI and elections, investments in identifying AI-generated material, results of risk assessments of AI models related to deceptive AI content, and AI systems, hardware, and software used by state actors and digital mercenaries.

“Collectively, [the signatories] have unleashed upon the world a set of tools and technologies that threaten, in their own words, to ‘jeopardize’ our democratic systems—and done so to enormous profits,” Norden wrote. “At this point, the democracies of the world who may pay the biggest price need more than promises. We need accountability.”

 

Megan Gates is editor-in-chief of Security Technology and senior editor of Security Management. Connect with her at [email protected] or on LinkedIn. Follow her on X or Threads: @mgngates.
