The Consortium for Socially Relevant Philosophy of/in Science and Engineering

Don Howard on Robot Ethics


By: Dave Saldana

Don Howard is not interested in setting out a parade of horribles and scary what-ifs. We don’t have to ponder, as the classic sci-fi film “RoboCop” did in 1987, whether a fully automated law enforcement machine might fail and kill an innocent person. In a world where unmanned aircraft wage war and driverless cars roam the highways, what’s real now is already enough for the director of Notre Dame’s Reilly Center for Science, Technology, and Values.

“We don’t need to consider whether autonomous systems fail. Of course they will; systems fail,” he says.

“What we need to consider is whether these systems fail more often, less often, or as frequently as humans do.”

What he finds is that robots, automated systems, and artificial intelligence have a much higher success rate than their human counterparts. Despite this, people trust the successful machines less than they trust error-prone humans. As Howard points out, “Passengers take great comfort in knowing that there are two pilots in the cockpit of their airliner, but they don’t realize that autopilot runs most systems on commercial aircraft.”

In fact, Howard says, most European airplanes have gate-to-gate autopilot technology, meaning human interaction is negligible throughout the entire trip. “When you quantify casualties, the case for human pilots is looking worse and worse all the time,” he says.

He points to the 2009 Air France Flight 447 tragedy, in which a momentary technical glitch led to a cascade of pilot error, culminating in the deaths of 228 people. “This case illustrates at least two crucial facts about human fallibility: (1) Humans often do not perform well under stress; (2) The pilots were incapable of handling all of the data that they were getting. Neither of those are problems for an autopilot, which doesn’t experience emotional stress and which can handle data streams an order of magnitude greater than humans,” Howard says.

It’s the ability to perform better than humans under stress and to process enormous amounts of data quickly without error that, Howard says, can make autonomous systems not just as moral as humans, but actually more moral. The linchpin is the ability to be passionless and decisive in extraordinary circumstances that cloud judgment. One such scenario is warfare, where the “fog of war” can lead to lethal mistakes.

Howard credits Ronald Arkin’s Governing Lethal Behavior in Autonomous Robots with changing his thinking on the morality of autonomous weapons systems. “They never experience fatigue. They never experience grief over the death of a friend. Humans do not perform very well in those circumstances,” Howard says.

But even as more autonomous, mechanized means of warfare enter the world’s arsenals, international law struggles to address the moral implications of their impact on combatants and civilians alike.

Of course, it is axiomatic in public policy that law is always playing catch-up with technology. The issue is coming to a head with driverless vehicles, which use lasers, GPS, and high-speed computers to navigate city streets without driver control. Though Google’s fleet of autonomous cars has logged more than 300,000 miles without an accident while under computer control, questions nevertheless exist about what to do in the event that one of those cars fails, or is confronted with a problem similar to the infamous Runaway Trolley dilemma.

“What happens when an autonomous car is tasked with deciding whether to run down a pedestrian or crash into a bus full of schoolchildren? Those are decisions drivers have to face all the time,” Howard says.

“We’ve solved the problem of self-driving cars: we know how to make them,” Howard says. “And as we become more alert to foreseeable problems, we can engage in anticipatory governance. With these cars, we know what some of those issues will be, we know there will be accidents.”

An important question is how the law will apportion responsibility when a driverless car is involved in an accident. Who is at fault? The person sitting in the driver’s seat? The manufacturer that built the car? The programmer who wrote the code?

To facilitate development of laws that address these problems before they happen, Howard says he’s working with the National Highway Traffic Safety Administration and other agencies. While he is loath to make predictions, he ponders a time when technology advances so far that jurisprudence will be forced to leap in entirely novel directions.

“Will we soon find ourselves in a situation of legal and moral culpability of technology? Could artificial intelligence reach the point where we could impute some responsibility to the A.I. itself?”

It’s a question we’ll likely see answered, possibly sooner than we expect.