
Commission Urges American Investment in AI for Security, Competition, and Regulation

The world is at a turning point when it comes to artificial intelligence (AI), and the United States is at risk of falling behind, according to a new report from the National Security Commission on Artificial Intelligence (NSCAI).

The commission, which includes technology company CEOs, national security professionals, technologists, and academic leaders, filed its 756-page final report this week, stating that “For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change. Simultaneously, AI is deepening the threat posed by cyberattacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy.”

The report declares that the U.S. government is not organizing or investing to win the AI race, nor is it prepared to defend against AI-enabled threats or rapidly adopt AI applications for national security. AI-based threats include deepfakes, the use of lethal drone swarms, and cyber or disinformation attacks conducted at lightning speed.

NSCAI recommends that the U.S. military integrate AI solutions into its workforce and systems and manage the risks associated with AI-enabled and autonomous weapons.

“AI will enable new levels of performance and autonomy for weapons systems,” the report said. “But it also raises important legal, ethical, and strategic questions surrounding the use of lethal force. Provided their use is authorized by a human commander and operator, properly designed and tested AI-enabled and autonomous weapon systems can be used in ways that are consistent with international humanitarian law.”

When it comes to AI-enabled warfare, however, experts caution that AI enables escalation, not just automation. In late 2019, RAND Corporation wargames testing how AI-driven forces could be used in geopolitical conflicts found that the speed of AI-guided unmanned systems occasionally led to inadvertent crisis escalation, including loss of life, Security Management reported.

“Decisions made at machine rather than human speeds also have the potential to escalate crises at machine speeds,” according to the authors of a RAND report on the wargames. “During protracted crises and conflicts, there could be strong incentives for each side to use autonomous capabilities early and extensively to gain military advantage. This raises the possibility of first-strike instability.”

The NSCAI report also recommended that government use of AI be democratic, transparent, and protective of civil liberties, and that agencies work to earn public trust. “The government must earn that trust and ensure that its use of AI tools is effective, legitimate, and lawful. This imperative calls for developing AI tools to enhance oversight and auditing, increasing public transparency about AI use, and building AI systems that advance the goals of privacy preservation and fairness.”

The United States is not the only government increasingly concerned about balancing civil liberties, innovation, and competition. Officials in both the United States and the European Union have cited concerns about China’s growing influence in AI, and elected officials began laying the groundwork for a parallel AI strategy in an EU parliamentary hearing earlier this week, Politico reported.

The EU’s executive arm, the European Commission, will release its AI regulation proposal next month, and it is expected to set strict new rules for AI applications considered “high risk.”
