It’s now a given that countries worldwide are competing for leadership in artificial intelligence. To date, most of the public discussion surrounding this competition has focused on the commercial gains flowing from the technology. But an AI arms race for military applications is accelerating as well, and concerned scientists, academics, and AI industry leaders have been sounding the alarm.

Compared to existing military capabilities, AI-enabled technology can make decisions on the battlefield with mathematical speed and accuracy and never gets tired. However, the countries and organizations developing this tech have been slow to articulate how ethics will influence the wars of the near future. Clearly, the development of AI-enabled autonomous weapons systems will raise risks of instability and conflict escalation. However, efforts to ban these weapons outright are unlikely to succeed.
In an era of uncertainty and risk, leading militaries worldwide are moving ahead with AI-enabled weapons and decision support, seeking leading-edge battlefield and security applications. The potential of these weapons is substantial, but ethical concerns are largely being brushed aside. Already they are being used to defend against small boat attacks, search for targets, and destroy enemy defenses.

For now, the AI arms race is a three-way competition, mostly between the U.S., China, and Russia, but there are worries it will become more than that. Driven by fear of other countries gaining an edge, the world’s military powers have been leveraging AI for years — dating back at least to 1983 — to achieve a battlefield advantage. This continues today. In 2017, Russian President Vladimir Putin said the nation that leads in AI will be the “ruler of the world.”

How policy lines up behind military AI use

According to an article in Salon, diverse and ideologically distinct research organizations, including the Center for a New American Security (CNAS), have argued that America must ratchet up spending on AI research and development. A Foreign Affairs article argues that nations that fail to embrace leading technologies for the battlefield will lose their competitive advantage. Speaking about AI, former U.S. Defense Secretary Mark Esper said last year, “History informs us that those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come.” Indeed, leading militaries are investing heavily, motivated by a desire to secure operational advantage on the future battlefield.

Civilian oversight committees, as well as militaries, have adopted this view. Recently, a U.S. commission called on the Defense Department to get more serious about accelerating AI and autonomous capabilities. Created by Congress, the National Security Commission on AI (NSCAI) recommended an increase in AI R&D funding over the next few years to ensure the U.S. is able to maintain its tactical edge over its adversaries and achieve “military AI readiness” by 2025.

In the future, warfare will pit “algorithm against algorithm,” claims the new NSCAI report. Although militaries have continued to operate weapon systems similar to those of the 1980s, the NSCAI report claims: “the sources of battlefield advantage will shift from traditional factors like force size and levels of armaments to factors like superior data collection and assimilation, connectivity, computing power, algorithms, and system security.” It is possible that new AI-enabled weapons will render conventional forces nearly obsolete, with rows of decaying Abrams tanks gathering dust in the desert in much the same way as ships lie off the coast of San Francisco. Speaking to reporters recently, Robert O. Work, vice chair of the NSCAI, said of the international AI competition: “We have got … to take this competition seriously, and we need to win it.”

The accelerating AI arms race

Work to incorporate AI into the military is already far advanced. For example, militaries in the U.S., Russia, China, South Korea, the United Kingdom, Australia, Israel, Brazil, and Iran are developing cybersecurity applications, combat simulations, drones, and other autonomous weapons.

A recently completed “global information dominance exercise” by U.S. Northern Command pointed to the tremendous advantages the Defense Department can achieve by applying machine learning and artificial intelligence to all-domain information. The exercise integrated information from all domains including space, cyberspace, air, land, sea, and undersea, according to Air Force Gen. Glen D. VanHerck.

Gilman Louie, an NSCAI commissioner, is quoted in a news report saying: “I think it’s a mistake to think of this as an arms race” — though he added, “We don’t want to be second.”

A dangerous pursuit

West Point has started training cadets for battlefields where humans lose some control to smart machines. Along with the ethical and political issues of an AI arms race come the increased risks of triggering an accidental conflict. How might this happen? Any number of ways, from a misinterpreted drone strike to autonomous systems acting unpredictably on new algorithms.

AI systems are trained on data and reflect the quality of that data, along with any inherent biases and assumptions of those developing the algorithms. Gartner predicts that through 2023, up to 10% of AI training data will be poisoned by benign or malicious actors. That is significant, especially considering the vulnerability of critical systems.
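To see why even modest poisoning matters, consider a deliberately simplified, fully hypothetical sketch (the data and classifier here are invented for illustration, not drawn from the Gartner research): a nearest-centroid classifier trained on ten labeled points, where relabeling just one point — 10% of the training data — is enough to flip the model's prediction for the same input.

```python
# Toy, deterministic illustration of training-data poisoning (assumed
# data, not from any cited study): flipping 10% of training labels
# shifts a class centroid enough to change a prediction.

def centroid(points):
    """Mean of a list of 1-D points."""
    return sum(points) / len(points)

def predict(x, c0, c1):
    """Assign x to whichever class centroid is closer (ties go to class 0)."""
    return 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training data: ten 1-D points, five per class.
class0 = [0.0, 0.5, 1.0, 1.5, 2.0]   # centroid 1.0
class1 = [5.0, 5.5, 6.0, 6.5, 7.0]   # centroid 6.0

clean = predict(3.8, centroid(class0), centroid(class1))

# Poison one of the ten labels (10%): the point 5.0 is relabeled class 0,
# dragging the class-0 centroid toward class 1.
poisoned0 = class0 + [5.0]           # centroid ~1.67
poisoned1 = [5.5, 6.0, 6.5, 7.0]     # centroid 6.25

poisoned = predict(3.8, centroid(poisoned0), centroid(poisoned1))

print(clean, poisoned)  # prints "1 0" — the same input is now classified differently
```

The effect is largest for inputs near the decision boundary, which is exactly where high-stakes systems can least afford silent errors.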

When it comes to bias, military applications of AI are presumably no different, except that the stakes are much higher than whether an applicant gets a loan. Writing in War on the Rocks, Rafael Loss and Joseph Johnson argue that military deterrence is an “extremely complex” problem — one that any AI hampered by a lack of good data is unlikely to solve in the immediate future.

How about assumptions? In 1983, the world’s superpowers drew near to accidental nuclear war, largely because the Soviet Union relied on software to make predictions that were based on false assumptions. Seemingly, this could happen again, especially as AI increases the likelihood that humans will be taken out of the loop. It is an open question whether the risks of such a mistake are higher or lower with greater use of AI, but Star Trek offered a vision in 1967 for how this could play out. In “A Taste of Armageddon,” the risks of conflict had escalated to such a degree that war was outsourced to a computer that decided who would perish.

There is no putting the genie back in the bottle. The AI arms race is well underway, and leading militaries worldwide do not want to be in second place or worse. Where this will lead is subject to conjecture. Clearly, however, the wars of the future will be fought and determined by AI more than by traditional “military might.” The ethical use of AI in these applications remains an open-ended issue. It was within the mandate of the NSCAI report to recommend restrictions on how the technology should be used, but this was unfortunately deferred to a later date.

Gary Grossman is Senior Vice President of Technology and Global Lead of the AI Center of Excellence.