

Case in China Highlights the Need for Legal Framework to Deal with Deepfakes

This dangers-of-deepfakes update comes by way of Inner Mongolia, where a man was defrauded of 4.3 million yuan ($622,000) via a deepfake video phone call.

A fraudster used artificial intelligence (AI) voice manipulation and face-swapping technology to impersonate a friend of the victim. The perpetrator then made a video call to the victim, convincing him to transfer money as a deposit during a bidding process, Reuters reports.

The individuals involved were not named in media reports about the incident, nor were details disclosed about what the bidding process was for or whether the fraudster has been caught. The victim only realized he had been defrauded when he reached out to his friend to ask how the bid worked out. Police were able to recover most of the stolen funds after the crime was reported, according to The Business Times.

The incident highlights China's laws restricting the use of AI to create false information, the first such restriction enacted by any country. The laws came into force in January 2023 and require companies that offer AI tools to follow these guidelines:

  • Only process personal information that is legally obtained

  • Periodically review, evaluate, and verify algorithms

  • Establish management systems and technical safeguards

  • Authenticate users with real identity information

  • Establish mechanisms for complaints and reporting

The first known use of the laws to charge someone with a crime came in May 2023. The incident involved a person who allegedly used ChatGPT, an AI chatbot, to fabricate a story about a train crash and disseminate it through social sharing-based news sites.

China is not alone in working to establish laws related to deepfakes. A patchwork of legal developments has continued to evolve since Today in Security last covered the deepfake legal landscape in early 2021. As detailed in that 2021 report, the U.S. Department of Homeland Security released an annual update on the topic in 2022. U.S. states and localities continue to address deepfakes related to pornography and elections. The European Union has also updated some of its codes dealing with disinformation.

But, as The New York Times wrote in its article on China’s laws, “China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly, and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.”

Deepfake content is also increasingly easy to create. As the ASIS Crisis Management and Business Continuity Community recently demonstrated in a video on security concerns with ChatGPT (ASIS members only), creating decent-looking and decent-sounding fake content is a relatively easy endeavor. An NPR article also addressed just how easy it is to create a fake video.

And earlier this week, fake images of explosions at the Pentagon and White House spread quickly on social media and caused temporary drops in the stock market.

Many innocuous fakes have also made the news: the Pope-in-a-puffer-coat image, the Tom Cruise TikTok impersonations, and the now older deepfakes of Barack Obama and Donald Trump are some examples.

The U.S. Republican National Committee faced criticism when it used AI to create fake apocalyptic scenes in a political ad targeting U.S. President Joe Biden, despite small disclaimers on the ad that said it was "built entirely with AI imagery."

As NPR reported, technical and legal solutions to deepfakes are not keeping up with the capabilities and use of the technology.

"Artificial intelligence is quickly getting better at mimicking reality, raising big questions over how to regulate it, according to NPR. "And as tech companies unleash the ability for anyone to create fake images, synthetic audio and video, and text that sounds convincingly human, even experts admit they're stumped."
