

FCC Bans Robocalls Using AI-Generated Voices

In a unanimous decision issued on 8 February, the U.S. Federal Communications Commission (FCC) banned robocalls that use artificial intelligence-generated voices to target consumers without their consent.

The FCC declaratory ruling explains that the Telephone Consumer Protection Act (TCPA) restrictions on artificial or prerecorded voices cover the use of artificial intelligence (AI) technology to generate human voices. This means that people must give “prior express consent” before a party may initiate a call using an AI-generated voice, unless the call qualifies as an emergency or falls under another covered exemption, according to the ruling.

“Although voice cloning and other uses of AI on calls are still evolving, we have already seen their use in ways that can uniquely harm consumers and those whose voice is cloned,” the ruling said. “Voice cloning can convince a called party that a trusted person, or someone they care about such as a family member, wants or needs them to take some action that they would not otherwise take. Requiring consent for such calls arms consumers with the right not to receive such calls, or, if they do, the knowledge that they should be cautious about them.”

The FCC also said in a press release that U.S. state attorneys general will have new powers to go after bad actors who use cloning technology in robocall scams in the United States.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” said FCC Chairwoman Jessica Rosenworcel in a statement. “We’re putting the fraudsters behind these robocalls on notice. State attorneys general will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”

The FCC did not immediately respond to Security Management’s request for comment on what these new tools entail.

Quang Trinh, PSP, vice chair of the ASIS International Emerging Technology Community Steering Committee and a member of the Security Industry Association’s AI Advisory Board, says creating safeguards for consumer protection is a good thing and that the FCC’s action appears to target unsolicited robocalls where generative AI is being misused.

Trinh adds, however, that the penalties for these use cases appear uncertain and may not be enough to deter illegal activity that originates outside of the United States.

“One thing that is lacking is guidelines for legitimate businesses that are looking to use generative AI to enhance the customer experience,” Trinh says. “It would be great if the ruling was a more comprehensive one that addresses fair use of generative AI for businesses while still addressing the malicious intent or abuse of use around generative AI in these types of robocall scams.”

Jon Polly, PSP, chair of the ASIS International Emerging Technology Community Steering Committee, agrees that the outcome of the FCC’s decision may be limited.

“While the TCPA is supposed to be the law that already enforces this, many who have been the victim of robocalls that did not provide prior written consent will find the enforcement of AI-generated voices difficult,” he says.

The FCC had been working since November 2023 to assess how it might combat illegal robocalls and how AI might factor into them. Its efforts ramped up in January 2024 after a robocall targeted New Hampshire voters in the lead-up to the state’s Democratic Presidential Primary. The robocall used AI to mimic U.S. President Joe Biden’s voice and discourage people from voting in the primary, telling recipients that “your vote makes a difference in November, not this Tuesday,” according to the New Hampshire Attorney General’s Office.

“The message appears to have been ‘spoofed’ to falsely show that it had been sent by the treasurer of a political committee that has been supporting the New Hampshire Democratic Presidential Primary write-in efforts for President Biden,” the attorney general’s office explained in a press release. “The message’s content directed recipients who wished to be removed from a calling list to call the number belonging to this person.”

The attorney general’s office called the robocall an “unlawful attempt to disrupt” the primary and suppress New Hampshire voters, and subsequently launched an investigation into its origins. On 6 February, New Hampshire Attorney General John M. Formella said his office—with the help of the Anti-Robocall Litigation Task Force—had identified the origin of the 5,000 to 25,000 robocalls: Texas-based Life Corporation and an individual named Walter Monk.

The Election Law Unit in Formella’s office traced the launch point of the calls with the help of the Industry Traceback Group, created in 2015 by USTelecom (The Broadband Association) to combat illegal robocalls. Using tracebacks, the unit identified the originating voice service provider for many of the calls: Texas-based Lingo Telecom, which terminated its services to Life Corporation after becoming aware of the robocalls.

Formella’s office has since issued a cease-and-desist order to Life Corporation for violating New Hampshire laws prohibiting voter suppression, as well as document preservation notices and subpoenas to gather information from Life Corporation as part of an ongoing investigation into the robocalls. The FCC also issued a cease-and-desist letter to Lingo Telecom, demanding it stop supporting illegal robocall traffic on its networks.

“Ensuring public confidence in the electoral process is vital,” Formella said in a statement. “AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process.”

In a 6 February press conference, Formella called the robocalls a clear, and potentially first, attempt to use AI to interfere with a U.S. election.

“Calls using AI with something as deceptive as trying to clone the voice of the president of the United States, we haven’t seen something like that before so close to an election with such a blatant attempt to mislead voters,” Formella said as reported by the Associated Press. “We don’t want it to be the first of many. We want this to be an example for us to point to, but also an enforcement example for anyone out there who would consider doing this.”

Additionally, 51 U.S. state attorneys general sent a warning letter to Life Corporation demanding that it cease any unlawful call traffic immediately or face action for violating the Telephone Consumer Protection Act, the Truth in Caller ID Act, and other state consumer protection laws.

The warning was part of the work of the Anti-Robocall Multistate Litigation Task Force, which is made up of 51 attorneys general dedicated to investigating and taking legal action against those who route significant volumes of illegal robocalls into and across the United States.

“The Task Force has immediate concerns that this attempt to disrupt New Hampshire’s Presidential Primary Election is something that Life Corp, its subsidiaries, affiliates, customers, and/or other individuals or entities in the robocall ecosystem may seek to replicate in each of our respective states in the upcoming primary elections and caucuses during this year’s Presidential election cycle,” the letter said.

Life Corporation did not return a request for comment on this story.

 
