U.S. Laws Address Deepfakes
For the second year in a row, the U.S. National Defense Authorization Act (NDAA) includes provisions that address the growing problem of deepfakes.
Deepfake technology enables users to create fake videos, images, or recordings of people that appear authentic. Some of the earliest and most prolific examples involve pornography—everything from face-swapping a celebrity into a pornographic video to an AI algorithm that generates a realistic nude image from an ordinary photo of a person.
Use of deepfakes continues to explode, and their reach goes well beyond pornography. For example, a politician in India used deepfake technology to create a video in which he appears to give a speech in the Hindi dialect Haryanvi. In the original video, he speaks in English.
The 2021 NDAA, which became law when Congress recently voted to override U.S. President Donald Trump’s veto, requires the Department of Homeland Security (DHS) to issue an annual report on deepfakes for the next five years. The report must cover the full range of potential harms from the technology, from foreign influence campaigns to fraud to harm against specific populations. This essentially expands the scope of the deepfake report that the previous year’s NDAA called for.
In addition, the law instructs DHS to study deepfake creation technology and possible detection and mitigation solutions. Finally, the law requires the U.S. Department of Defense to study the possibility of adversaries creating deepfake content depicting U.S. military personnel or their families and recommend policy changes.
Another law, the Identifying Outputs of Generative Adversarial Networks Act, was signed into law by President Trump in late December 2020. This law requires the National Science Foundation to research deepfake technology and authenticity measures, requires the National Institute of Standards and Technology to support the development of standards related to deepfakes, and instructs both agencies to develop ways to work with the private sector on deepfake identification capabilities.
According to a University of Illinois Law Review article, a few states have also enacted protections and prohibitions related to deepfakes. Texas was the first, banning deepfakes designed to influence an election in 2019; that same year, Virginia banned deepfake pornography. California’s law prohibits the creation of “videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election.” The article addresses the difficulties of deepfake laws: “In the U.S., injunctions against deepfakes are likely to face First Amendment challenges. Even if such injunctions survive a First Amendment challenge, lack of jurisdiction over extraterritorial creators of deepfakes would inhibit their effectiveness. Therefore, injunctions against deepfakes may only be granted under few specific circumstances, including obscenity and copyright infringement.”
A paper from global law firm Hogan Lovells reports that Europe has not addressed the legal landscape of deepfakes directly: “There are currently no European laws or national laws in the UK, France, or Germany specifically dedicated to tackling deepfakes. The EU Commission aims to tackle online disinformation in Europe, including the use of deepfakes, by way of a series of measures, including a self‑regulatory Code of Practice on Disinformation for online platforms.”
Europol’s Malicious Uses and Abuses of Artificial Intelligence report, released in November 2020, includes a case study, “A Deep Dive into Deepfakes.” In its conclusions from the deepfake examination, the report recommends that policies addressing deepfakes “should be technology-agnostic in order to be effective in the long run and to avoid having to review and replace these on a regular basis as the technology behind the creation and abuse of deepfakes evolves. Nevertheless, such measures should also avoid obstructing the positive applications of generative adversarial networks. Consideration should also be given to include the abuse of deepfakes in current and future initiatives at the international level to tackle illicit content online by focusing on public-private partnerships, notice-and-takedown procedures, proactive use of detection technology, and closer cooperation with competent authorities.”