White House Announces AI Executive Order

The White House announced a new executive order on Monday that tackles the risks and potential of artificial intelligence (AI).

Many of the actions in the executive order focus on matters of national security, privacy, and supporting the workforce.

National Security & Safety

Citing the Defense Production Act, the order requires that AI systems developers—especially those working on models that present a significant risk to national security, the economy, or public health—notify the federal government when training such models. The developers must also share the results of all safety tests.

The U.S. Department of Homeland Security will be responsible for applying NIST-developed standards to critical infrastructure sectors.

Other elements of the order include the development of new standards to prevent the engineering of any dangerous biological materials, establishing a cybersecurity program dedicated to developing AI tools that can find and fix critical software vulnerabilities, and tasking the Department of Commerce with developing guidance for watermarking AI-generated content to deter fraud or deception.

“The aim is noble and the need is certain, but the implementation will be challenging considering that Generative AI technology is already being used extensively by hackers and enemy states to attack U.S. companies with phishing emails that are nearly impossible to detect,” noted John Gunn, CEO for Token, in a statement emailed to members of the press. “Most AI technologies that deliver benefits can also be used for harm, so almost every company developing AI solutions needs to make the required disclosure today.”


Privacy
The executive order asks Congress to support and pass privacy legislation to protect citizens’ privacy, including data being used to train AI systems. It also calls for strengthening privacy-preserving research and technologies, like cryptographic tools, and evaluating the collection and use of commercially available information that has personally identifiable information.

The Surveillance Technology Oversight Project (STOP) said in a press release that in this regard the executive order is too soft. Instead, the privacy and civil rights group said there should be an immediate moratorium on AI’s most harmful uses. “A lot of the AI tools on the market are already illegal,” according to Albert Fox Cahn, executive director for STOP. “The worst forms of AI, like facial recognition, don’t need guidelines, they need a complete ban. …Many of these proposals are simply regulatory theater, allowing abusive AI to stay on the market.”

Supporting Workers

The order aims to mitigate risks that AI may pose to jobs and workplaces by supporting collective bargaining and calling for the development of principles and best practices on job displacement, labor standards, and hiring considerations.

Other aspects of the order encourage research and innovation by leveraging AI through a pilot program, the National AI Research Resource. The program will provide AI researchers and students in “vital areas” like healthcare and climate change with access to resources, data, and grant opportunities.

Not everyone in the industry is hopeful about the developments the order calls for.

“It’s a little early in the game to regulate a technology that is still in its infancy,” said Tony Pietrocola, president of autonomous security operations center company AgileBlue and president of the InfraGard Northern Ohio Alliance, in a statement emailed to members of the press. “Is China putting curbs on AI? No. Have other nations put curbs on AI? No. Let AI prove its use cases before we move into talks of executive orders and stringent regulation.”