Skip to content
Menu
menu

Illustration by iStock, Security Management

How the Biden Administration’s New National Security Memo on AI Affects the Private Sector

U.S. President Joe Biden issued a national security memorandum on Thursday, outlining his administration’s approach to harnessing the power of artificial intelligence (AI) to advance U.S. national security.

Jake Sullivan, national security advisor to President Biden, unveiled the memorandum during a speech at the National Defense University in Washington, D.C., in remarks that referenced the Eisenhower doctrine during the Cold War.

“In this age, in this world, the application of artificial intelligence will define the future, and our country must once again develop new capabilities, new tools, and, as General Eisenhower said, new doctrine, if we want to ensure that AI works for us, for our partners, for our interests, and for our values, and not against us,” Sullivan said.

The United States is already the world’s leader in AI in many ways, Sullivan said, but the country needs to continue to invest in this work to maintain its advantage—especially over the People’s Republic of China (PRC).

“Our lead is not guaranteed. It is not pre-ordained. And it is not enough to just guard the progress we’ve made, as historic as it’s been,” Sullivan said. “We have to be faster in deploying AI in our national security enterprise than America’s rivals are in theirs. They are in a persistent quest to leapfrog our military and intelligence capabilities. And the challenge is even more acute because they are unlikely to be bound by the same principles and responsibilities and values that we are.

“The stakes are high,” Sullivan continued. “If we don’t act more intentionally to seize our advantages, if we don’t deploy AI more quickly and more comprehensively to strengthen our national security, we risk squandering our hard-earned lead.”

The memo calls upon the U.S. government to implement steps to ensure that the United States leads development of safe, secure, and trustworthy AI; harness AI technologies to advance the U.S. government’s national security mission; and advance international consensus and governance around AI.

How the Memo Affects the Private Sector

Key to these efforts will be engagements and investment in the private sector on several fronts, including talent recruitment, intellectual property protection, and risk assessments.

Talent. The memo instructs the U.S. Departments of State, Defense, and Homeland Security to use their legal authorities to attract and bring to the United States individuals with technical expertise to help lead U.S. competitiveness in AI and related fields, such as semiconductor design and production.

The assistant to the president for national security affairs is also instructed to work with U.S. agencies to explore actions that can be used to prioritize and streamline the process for visa applicants working with sensitive technologies.

“Doing so shall assist with streamlined processing of highly skilled applicants in AI and other critical and emerging technologies,” according to the memo. “This effort shall explore options for ensuring the adequate resourcing of such operations and narrowing the criteria that trigger secure advisory opinion (SAO) requests for such applicants, as consistent with national security objectives.”

The SAO is a process that was created following the 9/11 terror attacks to detect espionage, terrorism, or illegal transport of technology, as well as for extra screening for refugees.

“First, we have to ensure the United States continues to lead the world in developing AI,” Sullivan said. “Our competitors also know how important AI leadership is in today’s age of geopolitical competition, and they are investing huge resources to seize it for themselves. So we have to start upping our game, and that starts with people.”

Intellectual property. The memo also calls for the protection of U.S. AI from foreign intelligence threats, which adversaries have traditionally attempted to obtain through research collaborations, investment schemes, insider threats, and advanced espionage.

The memo instructs the National Security Council and the Office of the Director of National Intelligence (ODNI) to review previous national security frameworks and memos before making recommendations to improve identification and assessment of foreign intelligence threats to the U.S. AI ecosystem and its related sectors.

ODNI, with other U.S. government partners, is then instructed to identify critical nodes in the AI supply chain, create a list of the most “plausible avenues” that supply chain could be disrupted or compromised, and take steps to reduce those risks, the memo explained.

The Committee on Foreign Investment in the United States (CFIUS) is also instructed to consider if covered transactions involve foreign actors’ access to proprietary information on AI training techniques, algorithmic improvements, hardware advances, and other insights that could “shed light on how to create and effectively use powerful AI systems,” the memo said.

“One playbook we’ve seen [competitors] deploy again and again is theft and espionage,” Sullivan said. “So, the National Security Memorandum takes this head on. It establishes addressing adversary threats against our AI sector as a top-tier intelligence priority, a move that means more resources and more personnel will be devoted to combating this threat.”

In turn, the memo instructs U.S. government personnel to work with private sector AI developers to provide timely cybersecurity and counter-intelligence information to better protect AI technology.

Risk assessments. Acknowledging the role the U.S. private sector plays in advancing AI, the memo instructs the National Institute of Standards and Technology (NIST) AI Safety Institute to serve as the point of contact with the private sector to facilitate pre- and post- public deployment voluntary testing for safety, security, and trustworthiness of AI models.

This testing will assess risks related to cybersecurity, biosecurity, chemical weapons, system autonomy, and other risks as appropriate. It excludes nuclear risks, however, because that assessment will be conducted by the U.S. Department of Energy.

The AI Safety Institute (AISI) is also instructed to provide guidance for AI developers on testing, evaluating, and managing risks to safety, security, and trustworthiness that arise from dual-use (technology used for both military and commercial purposes) foundation models, including on:

  1. Measuring capabilities relevant to the risk AI models could enable to develop biological and chemical weapons or automate offensive cyber operations.

  2. Addressing societal risks, such as misuse of models to harass or impersonate individuals.

  3. Developing mitigation measures to prevent malicious or improper use of models.

  4. Testing efficacy of safety and security mitigations.

  5. Applying risk management practices throughout the development and deployment lifecycle.


“In the event that AISI or another agency determines that a dual-use foundation model’s capabilities could be used to harm public safety significantly, AISI shall serve as the primary point of contact through which the United States government communicates such findings and any associated recommendations regarding risk mitigation to the developer of the model,” according to the memo.

Other Actions of Note

The Biden administration published a Framework to Advance AI Governance and Risk Management in National Security alongside the memo to detail how it should be implemented, including requiring mechanisms for risk management, evaluations, accountability, and transparency.

The framework requirements mandate that agencies “monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses,” according to a fact sheet. “This framework can be updated regularly in order to keep pace with technical advances and ensure future AI applications are responsible and rights-respecting.”

In a statement shared with Security Management, U.S. Senator Mark Warner (D-VA), chair of the Senate Select Committee on Intelligence, said that AI technology is rapidly evolving in a way that poses massive consequences for the economy, national security, and democracy.

“I am heartened to see the administration recognize this very fact and take a leadership role to advance AI capabilities while simultaneously promoting responsible research, strong governance that ensures trust and safety, and the protection of human and civil rights,” Warner said.

While Warner added that he was gratified to see some of the legislative proposals he’s advanced included in the memo, he said that more work needs to be done—especially with the private sector.

“I encourage the administration to work in the coming months with Congress to advance a clearer strategy to engage the private sector on national security risks directed at AI systems across the AI supply chain,” Warner explained.

 

arrow_upward