
Illustration by Patric Sandri

How AI Could Escalate Global Conflicts

In the global arena, national security is often practiced through the concept of deterrence, with Country A trying to dissuade Country B from taking a violent course of action that would harm Country A.

Deterrence is usually accomplished through the threat of force rather than its actual use, so that the peaceful status quo can be maintained. In the words of Peter Sellers’ Dr. Strangelove, the nuclear expert in the Cold War-era movie classic of the same name, it is “the art of producing in the mind of the enemy the fear to attack.”

But what if the enemy is represented by a robot driven by artificial intelligence (AI), with no real mind? How would the dynamics of nation-against-nation conflict change when actions can be taken at computerized hyperspeed, but without the delays and nuances of human judgment?

Seeking answers to these questions, the RAND Corporation, a global policy think tank, recently conducted a wargame exercise involving AI-driven autonomous forces that simulated a conflict among China, Japan, North Korea, South Korea, and the United States. In the scenario, set in an unspecified future year, China played the role of the dominant global power, while the United States (by then a lesser power than China), Japan, and South Korea were allies opposing it.

The goal of the exercise was to explore ways AI could affect deterrence and escalation. The results were detailed in an e-book report, Deterrence in the Age of Thinking Machines, issued in January 2020.

The wargame began with an attempt by China to exert greater control over its region, according to the RAND report. The United States and Japan resisted this effort, and the conflict escalated. The two allies engaged in joint exercises to show their solidarity; this provoked China, which escalated further by sinking an unmanned Japanese cargo ship while trying to enforce a port blockade.

The United States and Japan retaliated by sinking a manned Chinese submarine—the first human casualties of the wargame. The two countries failed to de-escalate the conflict at that point, so China responded with a missile attack that caused further casualties. The wargame ended with the conflict still escalating, and RAND experts came away with a few key findings.

First, manned warfare systems seemed more effective than unmanned systems when it came to deterrence. While the Japanese and American systems were unmanned in the wargame, the Chinese had some manned platforms. The presence of humans in China’s systems made the United States and Japan more hesitant to use force, and it seemed to compel them to look for alternative actions to avoid escalation, according to the report.

Indeed, the wargame revealed how interested the United States and Japan were in “trying to manage the escalatory dynamic” and in finding alternatives to actions that could worsen the conflict, says Yuna Wong, a RAND policy researcher and lead author of the report, in an interview with Security Management.


“One of the things that surprised me personally was how the United States and Japan kept trying to give China off-ramps in the game,” Wong says.

In addition, the speed of unmanned systems guided by AI occasionally led to inadvertent escalation in the wargame. Sometimes an autonomous warfare system, confronted with an unanticipated situation in which officials had no intention of using force, reacted with force anyway.

This finding raised a question for the experts at RAND: “Will machines likely be worse at reading human signals than humans are?” Wong asks. While humans always run the risk of miscalculation in a conflict, machines run by AI could be even more error-prone, and military planners should take this into account.

The finding should also be considered at the systems design phase, well before the use phase, Wong adds. “With AI systems, we should start thinking about the escalatory dynamic as we build and test them,” she says.

Overall, one of the major implications of these findings, according to the report, is that “widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.”

“Decisions made at machine rather than human speeds also have the potential to escalate crises at machine speeds,” the authors explained. “During protracted crises and conflicts, there could be strong incentives for each side to use autonomous capabilities early and extensively to gain military advantage. This raises the possibility of first-strike instability.”

Although the nations in the wargame exercise have not yet reached full AI capability in warfighting, many are investing enough in development to go down that road.

“An arms race in autonomous systems between the United States and China already appears imminent and is likely to increase instability,” the authors wrote.

Given this potential arms race, experts at the Congressional Research Service (CRS) examined the development of possible AI-based rivalries among major global security players like China, Russia, and the United States.

“Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications,” CRS said in a November 2019 report, Artificial Intelligence and National Security. CRS cited China and Russia as the main rivals to the United States in the AI-driven warfare systems sphere.

China, the report said, has released a plan outlining how it hopes to lead the world in AI development by 2030.

“Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles,” the CRS authors wrote.

However, a cultural challenge could eventually hinder China’s adoption of AI, the authors added. “Chinese military culture, which is dominated by centralized command authority and mistrust of subordinates, may prove resistant to the adoption of autonomous systems or the integration of AI-generated decision-making tools,” the CRS authors explained.

Russia is also active in military AI development, with a particular focus on robotics. The country has released a national strategy for AI, which calls for robotizing 30 percent of its military equipment by 2025.

In support of this goal, Russia recently created the Foundation for Advanced Studies, a defense research organization specializing in autonomy and robotics. Russia has also established an annual military robotics conference, Robotization of the Armed Forces of the Russian Federation.

All these developments, if successful, have the potential to dramatically accelerate the overall pace of combat, because an AI-driven system may be able to sense, decide, and react far faster than any human operator.

“Some analysts contend that a drastic increase in the pace of combat could be destabilizing—particularly if it exceeds human ability to understand and control events—and could increase a system’s destructive potential in the event of a loss of system control,” according to the CRS.

Despite this risk, warfare system speed may confer an advantage on the country that uses it. This could create pressure for the global adoption of military AI applications, with many countries attempting to keep up with their competitors, the CRS added.

However these developments ultimately unfold, AI-driven systems will likely have a major impact on the future of warfare. “Most believe that AI will have at least an evolutionary—if not revolutionary—effect,” according to the CRS analysis.
