As AI joins battlefield, Pentagon seeks ethicist

Charles Dharapak/AP/File
The Pentagon in Washington announced it is seeking to hire an AI ethicist. As artificial intelligence and machine learning permeate military affairs, these technologies are beginning to play a more direct role in taking lives.

The Pentagon is increasingly incorporating artificial intelligence, including for what the military calls “maneuver and fires,” the part of fighting wars that involves targeting and shooting people.

To help ensure that these machines behave ethically, the Pentagon is looking for an AI ethicist to join its new Joint Artificial Intelligence Center.

Why We Wrote This

Artificial intelligence is making inroads in the U.S. military, transforming everything from helicopter maintenance to logistics to recruiting. But what happens when AI gets involved in war's grimmest task: taking lives?

“We’re thinking deeply about the safe and lawful use of AI,” says Lt. Gen. Jack Shanahan at the briefing where he announced the position.

Navigating AI ethics for the Pentagon will require safeguarding human and civil rights while keeping pace with AI development in China and Russia, countries whose militaries appear less preoccupied with such rights.

Even if the Pentagon develops computer algorithms that are always able to distinguish between enemy targets and noncombatants, there is a risk in developing capabilities that are too effective, says Patrick Lin, a philosophy professor at California Polytechnic State University. 

“If you fight your enemy with honor and provide some possibility for mercy, it ensures the possibility for reconciliation,” says Professor Lin. “We have ethics of war in order to lay the groundwork for a lasting peace.”

When the chief of the Pentagon’s new Joint Artificial Intelligence Center briefed reporters recently, he made a point of emphasizing the imminently practical – even potentially boring – applications of machine learning to the business of war.  

There’s the “predictive maintenance” that AI can bring to Black Hawk helicopters, for example, and “intelligent business automation” likely to lead to exciting boosts in “efficiencies for back office functions,” Lt. Gen. Jack Shanahan said. There are humanitarian pluses, too: AI will help the Defense Department better manage disaster relief.

But for 2020, the JAIC’s “biggest project,” General Shanahan announced, will be what the center has dubbed “AI for maneuver and fires.” In lulling U.S. military parlance, that includes targeting America’s enemies with “accelerated sensor-to-shooter timelines” and “autonomous and swarming systems” of drones – reminders that war does, after all, often involve killing people.

When he was asked halfway through the press conference whether there should be “some sort of limitation” on the application of AI for military purposes, General Shanahan perhaps recognized that this was a fitting occasion to mention that the JAIC will also be hiring an AI ethicist to join its team. “We’re thinking deeply about the safe and lawful use of AI,” he said.

As artificial intelligence and machine learning permeate military affairs, these technologies are beginning to play a more direct role in taking lives. The Pentagon’s decision to hire an AI ethicist reflects an acknowledgment that bringing intelligent machines onto the battlefield will raise some very hard questions.

“In every single engagement that I personally participate in with the public,” said General Shanahan, “people want to talk about ethics – which is appropriate.” 

A shifting landscape

Hiring an ethicist was not his first impulse, General Shanahan acknowledged. “We wouldn’t have thought about this a year ago, I’ll be honest with you. But it’s at the forefront of my thinking now.” 

He wasn’t developing killer robots, after all. “There’s a tendency, a proclivity to jump to a killer robot discussion when you talk AI,” he said. But the landscape has changed. At the time, “these questions [of ethics] really did not rise to the surface every day, because it was really still humans looking at object detection, classification, and tracking. There were no weapons involved in that.” 

Given the killing potentially involved in the “AI for maneuver and fires” project, however, “I have never spent the amount of time I’m spending now thinking about things like the ethical employment of artificial intelligence. We do take it very seriously,” he said. “It’s core to what we do in the DOD in any weapon system.”

Pentagon leaders repeatedly emphasize they are committed to keeping “humans in the loop” in any AI mission that involves shooting America’s enemies. Even so, AI technology “is different enough that people are nervous about how far it can go,” General Shanahan said. 

While the Pentagon is already bound by international laws of warfare, a JAIC ethicist will confront the thorny issues around “How do we use AI in a way that ensures we continue to act ethically?” says Paul Scharre, director of the technology and national security program at the Center for a New American Security.

It will be the job of the ethicist to ask the tough questions of a military figuring out, as General Shanahan puts it, “what it takes to weave AI into the very fabric of DOD.” 

Overseas competition 

Doing so will involve balancing some seemingly disparate goals: While most U.S. officials agree that it is important to develop the military’s AI capabilities with an eye toward safeguarding human and civil rights, these same leaders also tend to be fiercely competitive when it comes to protecting U.S. national security from high-tech adversaries who may not abide by the same ethical standards.

General Shanahan alluded to this tension as a bit of a sore spot: “At its core, we are in a contest for the character of the international order in the digital age.” This character should reflect the values of “free and democratic” societies, he said. “I don’t see China or Russia placing the same kind of emphasis in these areas.”

This gives China “an advantage over the U.S. in speed of adoption [of AI technology],” General Shanahan argued, “because they don’t have the same restrictions – at least nothing that I’ve seen shows that they have those restrictions – that we put on every company, the DOD included, in terms of privacy and civil liberties,” he added. “And what I don’t want to see is a future where our potential adversaries have a fully AI-enabled force – and we do not.”

Having an ethicist might help mediate some of these tensions, depending on how much power they have, says Patrick Lin, a philosophy professor specializing in AI and ethics at California Polytechnic State University in San Luis Obispo. “Say the DOD is super-interested in rolling out facial recognition or targeting ID, but the ethicist raises a red flag and says, ‘No way.’ What happens? Is this person a DOD insider or an outsider? Is this an employee who has to worry about keeping a job, or a contractor who would serve a two-year term then go back to a university?”  

In other words, “Will it be an advisory role, or will this person have a veto?” The latter seems unlikely, Professor Lin says. “It’s a lot of power for one person, and ignores the political realities. Even if the JAIC agrees with the AI ethicist that we shouldn’t roll out this [particular AI technology], we’re still governed by temporary political leaders who may have their own agenda. It could be that the president says, ‘Well, do it anyway.’”

An ethics of war

Ethicists will grapple with “Is it OK to create and deploy weapons that can be used in ethically acceptable ways by well-trained and lawyered-up U.S. forces, even if they are likely to be used unethically by many parties around the world?” says Stuart Russell, professor of computer science and a specialist in AI and its relation to humanity at the University of California, Berkeley. 

To date, and “to its credit, DOD has imposed very strong internal constraints against the principal ethical pitfalls it faces: developing and deploying lethal autonomous weapons,” Professor Russell adds. Indeed, Pentagon officials argue that beyond the fact that it does not plan to develop “killer robots” that act without human input, AI can decrease the chances of civilian casualties by making the killing of dangerous enemies more precise. 

Yet even that accuracy, which some could argue is an unmitigated good in warfare, has the potential to raise some troubling ethical questions, too, Professor Lin says. “You could argue that it’s not clear how a robot would be different from, say, a really accurate gun,” and that a 90% lethality rate is a “big improvement” on human sharpshooters.

The U.S. military experienced a similar precision of fire during the first Gulf War, on what became known as the “highway of death,” which ran from Kuwait to Iraq. Routed and hemmed in by U.S. forces, the retreating Iraqi vehicles – and the people inside them – were being hammered by American gunships, the proverbial “shooting fish in a barrel,” Professor Lin says. “You could say, ‘No problem. They’re enemy combatants; it’s fair game.’” But it was “so easy that the optics of it looked super bad and the operation stopped.” 

“This starts us down the road to the idea of fair play – it’s not just a hangover from chivalry days. If you fight your enemy with honor and provide some possibility for mercy, it ensures the possibility for reconciliation.” In other words, “we have ethics of war,” Professor Lin says, “in order to lay the groundwork for a lasting peace.”
