Scholars Delve Deeper Into The Ethics Of Artificial Intelligence

Nov 21, 2016
Originally published on November 21, 2016 7:58 pm

In 1941, science-fiction writer Isaac Asimov stated "The Three Laws of Robotics" in his short story "Runaround."

Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Law Two: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Law Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws come from the world of science fiction, but the real world is catching up. This month, a law firm gave Pittsburgh's Carnegie Mellon University $10 million to explore the ethics of artificial intelligence — or AI. This comes after industry leaders recently joined together to form the group called the Partnership on Artificial Intelligence to Benefit People and Society.

Peter Kalis is chairman of the law firm K&L Gates. He says technology is dashing ahead of the law, leading to questions that were never taken seriously before, such as: What happens when you make robots that are smart, independent thinkers, and then try to limit their autonomy?

"One expert said we'll be at a fulcrum point when you give an instruction to your robot to go to work in the morning and it turns around and says, 'I'd rather go to the beach.' Or, more perilously, if we were to launch a robot on the battlefield and all of a sudden it took a more partial liking to the enemy than it did to its human sponsor," Kalis says.

He says that one day we'll want laws to keep our free-thinking robots from running wild — but we'll also have to weigh such laws against the U.S. Constitution.

"It says that every person should benefit from equal protection under the law. Well, I don't think anyone contemplated that person would include an artificially intelligent robot," Kalis says. "Yet I hear people seriously maintaining that artificially intelligent robots ought to replace judges. When we get to that point, it's a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law."

With the law firm's gift, Carnegie Mellon President Subra Suresh says the university will be able to dig into issues now emerging within automated industries.

"Take driverless cars," he says. "If there's an accident involving a driverless car, what policies do we have in place? What kind of insurance coverage do they have? And who needs to take insurance?"

As it is, people can already take a ride in a driverless car in Pittsburgh, where Uber uses the city as a testing ground. Suresh says he's familiar with the program but still has questions as a passenger.

"The mayor of Pittsburgh and I took the inaugural ride about a couple of months ago," Suresh tells NPR's Audie Cornish. "We were talking about this, you know, if somebody came and hit us now, are we liable or is somebody else liable? The clarification is not there yet."

The issues go beyond self-driving cars and renegade robots. Inside the next generation of smartphones, in the chips embedded in home appliances, and in the ever-expanding collection of personal data stored in the "cloud," questions about what's right and wrong are open to study.

So are Asimov's Three Laws of Robotics all there is to govern AI right now — and is it necessary to have a moral guideline that everyone can understand?

"I think putting all three laws into one: Do no harm, could be the very first one," Suresh says.

He says people today are at "an interesting point in the intersection of humans and technology" — one they don't have any prior experience with.

Copyright 2016 NPR. To see more, visit http://www.npr.org/.

AUDIE CORNISH, HOST:

In 1941, science fiction writer Isaac Asimov stated the Three Laws of Robotics in a short story "Runaround." And those laws are the starting point for this week's All Tech Considered.

(SOUNDBITE OF MUSIC)

CORNISH: Asimov's first law...

COMPUTER-GENERATED VOICE: A robot may not injure a human being or through inaction, allow a human being to come to harm.

CORNISH: Law two...

COMPUTER-GENERATED VOICE: A robot must obey the orders given by human beings, except where such orders would conflict with the first law.

CORNISH: And law three...

COMPUTER-GENERATED VOICE: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

CORNISH: Now, these laws come from the world of science fiction, but the real world is catching up. This month, a law firm gave Pittsburgh's Carnegie Mellon University $10 million to explore the ethics of artificial intelligence or AI. Peter Kalis is chairman of the law firm K&L Gates.

He says technology is dashing ahead of the law leading to questions that were never taken seriously before such as what happens when you make robots that are smart, independent thinkers and then try to limit their autonomy?

PETER KALIS: One expert said we'll be at a fulcrum point when you give an instruction to your robot to go to work in the morning and it turns around and says, I'd rather go to the beach or more perilously, if we were to launch a robot on the battlefield and all of a sudden it took a more partial liking to the enemy than it did to its human sponsor.

CORNISH: He says that one day, we'll want laws to keep our freethinking robots from running wild. But we'll also have to weigh such laws against the U.S. Constitution.

KALIS: It says that every person should benefit from equal protection under the law. Well, I don't think anyone contemplated that person would include an artificially intelligent robot, yet I hear people seriously maintaining that artificially intelligent robots ought to replace judges. When we get to that point, it's a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law.

CORNISH: With the law firm's gift, Carnegie Mellon president Subra Suresh says the University will be able to dig into issues now emerging.

SUBRA SURESH: Take driverless cars. If there's an accident involving a driverless car, what policies do we have in place? What kind of insurance coverage do they have? And who needs to take insurance?

CORNISH: You're in Pittsburgh, and that's where Uber is testing self-driving taxis. Have you actually taken one?

SURESH: Yeah, I took - the mayor of Pittsburgh and I took the inaugural ride about a couple of months ago.

CORNISH: So it sounds like while you were on this ride, you had far more questions than the average person.

SURESH: We were talking about this, you know, if somebody came and hit us now, are we liable or is somebody else liable? The clarification is not there yet.

CORNISH: And those issues go beyond self-driving cars and renegade robots — inside the next generation of smartphones, in those chips embedded in home appliances and the ever-expanding collection of personal data being stored in the cloud, questions about what's right and wrong are open to study.

I asked Carnegie Mellon's Subra Suresh if Isaac Asimov's Three Laws of Robotics were all we had to govern AI right now and if he wanted the university's ethics experts to come up with some sort of moral guideline that everyone understands.

SURESH: I think putting all three laws into one, do no harm could be the very first one.

CORNISH: He says we're at, quote, "an interesting point in the intersection of humans and technology, one we don't have any prior experience with." Transcript provided by NPR, Copyright NPR.