The U.S. Military Is Calling In AI for Support

For all our suspicions about Terminator-style killer robots, the likely role of artificial intelligence (AI) in the U.S. military is to augment humans, not replace them.

Why it matters: AI has been called the “third revolution” in warfare, after gunpowder and nuclear weapons. But each revolution carries risks, and even an AI strategy that emphasizes assisting human warfighters will bring enormous operational and ethical challenges.

Driving the news: Armenia recently agreed to a cease-fire with its neighbor Azerbaijan, effectively ending their brief war over the disputed territory of Nagorno-Karabakh.

  • Azerbaijan dominated the conflict in part thanks to the ability of its fleets of cheap, armed drones to destroy Armenia’s tanks, in what military analyst Malcolm Davis called a “revolutionary game-changer for land warfare.”

An even bigger game-changer would be armed drones that are fully autonomous. Still, for the foreseeable future, fears of “slaughterbots” that could kill on their own appear overstated, says Michael Horowitz, a political scientist at the University of Pennsylvania.

  • “The overwhelming majority of military investments in AI will not be about lethal autonomous weapons, and arguably none of them will be,” says Horowitz.
  • A report published last month by Georgetown’s Center for Security and Emerging Technology found that defense research into AI is focused “not on displacing humans but helping them in ways that adapt to how humans think and process information,” said Margarita Konaev, the report’s co-author, at an event earlier this week.

Details: A version of that future was on display at an event held in September by the Air Force to demonstrate its Advanced Battle Management System (ABMS), which can rapidly process battlefield data and use it to guide warfighters in the field.

  • Even with expensive hardware at their fingertips, servicemen and -women in a firefight usually relay information manually, often via chains of radio transmissions. ABMS aims to use machine learning and cloud computing to speed up that process, augmenting the abilities of every warfighter.
  • At the September demonstration, Anduril — a young, defense-focused Silicon Valley startup backed by Peter Thiel and co-founded by Palmer Luckey — presented its Lattice software system, which uses machine-learning algorithms to process sensor data and automatically identify and track targets such as an incoming cruise missile.
  • Using the company’s virtual-reality interface, an airman in the demo only had to designate the target as hostile and pair it with a weapons system to destroy it, closing what the military calls a “kill chain” (sketched below).
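
To make that workflow concrete, here is a minimal, hypothetical sketch of a human-in-the-loop kill chain in Python. Every name in it (Track, designate, pair_weapon, the classifier stub) is invented for illustration and does not reflect Anduril’s actual Lattice software; it captures only the division of labor described above: algorithms detect and track candidates, and a human must make the hostile call before any weapon can be paired.

```python
# Hypothetical, simplified sketch of a human-in-the-loop "kill chain".
# All names are invented for illustration; this is not Lattice's API.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Designation(Enum):
    UNKNOWN = auto()
    HOSTILE = auto()
    FRIENDLY = auto()


@dataclass
class Track:
    """A fused sensor track, e.g. a radar contact classified by an ML model."""
    track_id: int
    kind: str             # model-predicted class, e.g. "cruise_missile"
    confidence: float     # model confidence in [0, 1]
    designation: Designation = Designation.UNKNOWN
    paired_weapon: Optional[str] = None


def classify_sensor_data(raw_contacts: list[dict]) -> list[Track]:
    """Stand-in for the ML step: turn raw sensor contacts into tracks.

    A real system would run detection and tracking models here; this
    sketch simply copies the fields through.
    """
    return [
        Track(track_id=i, kind=c["kind"], confidence=c["confidence"])
        for i, c in enumerate(raw_contacts)
    ]


def designate(track: Track, operator_call: Designation) -> None:
    """The human-in-the-loop step: only an operator sets the designation."""
    track.designation = operator_call


def pair_weapon(track: Track, weapon: str) -> None:
    """Close the kill chain: pairing is allowed only after a HOSTILE call."""
    if track.designation is not Designation.HOSTILE:
        raise ValueError("Cannot pair a weapon with an undesignated track")
    track.paired_weapon = weapon


if __name__ == "__main__":
    # Machines detect and rank; the human decides; only then is a weapon paired.
    tracks = classify_sensor_data(
        [{"kind": "cruise_missile", "confidence": 0.97}]
    )
    candidate = max(tracks, key=lambda t: t.confidence)
    designate(candidate, Designation.HOSTILE)  # the airman's call in the demo
    pair_weapon(candidate, "interceptor_battery")
    print(candidate)
```

The key design point the demo illustrated is the gate in pair_weapon: the automation accelerates detection and tracking, but the lethal decision remains a human step that the software refuses to skip.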

What they’re saying: “At the heart, our view is that the military has struggled with the question of, how do I know what’s happening in the world and how do we process it,” says Brian Schimpf, Anduril’s CEO.

  • What Anduril and other firms in the sector plan to do is make AI work for defense in much the same way it already works for other industries: accelerating information processing and producing what amounts to a more effective human-machine hybrid workforce.

Yes, but: Even though humans still decide whether or not to pull the trigger, experts worry about the accuracy of the algorithms guiding that decision.

  • “If, like Clausewitz, you consider the fog of war, how could you ever have all the information that would allow you to predict what the battlefield environment looks like in a way that would give you the confidence to use the algorithm?” says Horowitz.
  • Just as it’s not entirely clear who would be responsible for an accident involving a mostly self-driving car — the human inside or the technology — “who owns the outcomes if something goes wrong on the battlefield?” asks P.W. Singer, a senior fellow at New America.

Be smart: AI’s great strength is also its weakness: speed.

  • It’s bad enough when faulty trading algorithms cause a stock-market flash crash. But if faulty AI systems push the military to move too quickly on the battlefield, the result could be civilian casualties, an international incident, or even a war.
  • At the same time, the Armenia-Azerbaijan war highlights the fact that warfare never stands still, and rivals like Russia and China are moving ahead with their AI-enabled defense systems.

The bottom line: Two questions should be asked whenever AI expands into a new industry: Does it work, and should it work? In war, the stakes of those questions could not be higher.
