Robot program’s purpose is morality


Derek Leben gives a presentation about his book, “Ethics for Robots: How to Design a Moral Algorithm,” Oct. 18 in the John P. Murtha Center.

Rachel Logan, Copy Editor

Philosophy professor Derek Leben presented a book he wrote that examines moral dilemmas and ethical theories as applied to robots Oct. 18 at the John P. Murtha Center to a crowd of about 70 people.

Leben’s book, “Ethics for Robots: How to Design a Moral Algorithm,” is to be released on Amazon in May. He covered the first three chapters in his talk.

“If you want to know more,” Leben said, “you’ll have to buy the book.”

Pitt-Johnstown President Jem Spectar said that he was thoroughly engrossed the first time he talked to Leben about his book, and that he couldn’t wait to purchase it in May.

Leben said that, while he personally is not convinced that robots, such as self-driving cars, should make moral decisions, the need for such a capability is inevitable. Thus, it is necessary to develop algorithms now that ensure robots act in humanity’s best interests down the road.

Leben mentioned 1940s science fiction writer Isaac Asimov’s three laws of robotics, which include, “A robot may not injure a human being, or, through inaction, allow a human being to come to harm.”

Such dilemmas might come up often for self-driving cars, Leben said. Would it be better to run into a crowd of people and harm many, or drive into a wall and harm only the passenger?

“It’s true that such dilemmas are rare, but we should still prepare for them,” Leben said.

Leben said that he has worked for three or four years on an algorithm that might work in every party’s favor: cooperation through contractarianism and the maximin principle.

Leben defined contractarianism as the theory that cooperation produces the best results in any situation. The maximin principle turns this theory into an algorithm: among the available options, pick the one that leaves the worst-off party best off.

In the case of the self-driving car, each party is assigned a probability of survival. For each possible action, the algorithm identifies the individual who would come to the worst harm in a collision, and the car then picks the action under which that worst-off individual fares best.
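A minimal sketch of what such a maximin selection might look like in code follows. The scenario names, parties and survival probabilities are hypothetical illustrations, not taken from Leben’s actual algorithm:

```python
# Hypothetical collision scenarios for a self-driving car.
# Each scenario maps every affected party to an estimated
# probability of survival (all values here are made up).
scenarios = {
    "swerve_into_wall": {"passenger": 0.40, "pedestrian_group": 1.00},
    "continue_straight": {"passenger": 0.95, "pedestrian_group": 0.20},
}

def maximin_choice(scenarios):
    """Pick the scenario whose worst-off party has the highest
    survival probability (the maximin principle)."""
    return max(scenarios, key=lambda name: min(scenarios[name].values()))

print(maximin_choice(scenarios))  # -> "swerve_into_wall"
```

Here the worst-off party under "continue_straight" has a 0.20 chance of survival, while the worst-off party under "swerve_into_wall" has 0.40, so the maximin rule selects the swerve.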

“I am a philosopher, a humble philosopher. I am not a computer scientist or an engineer. This project is at its core an interdisciplinary project,” he said.

Leben said that he is an amateur, and unfunded, so he is willing to work with other amateur computer scientists and engineers on the project. He said students can contact him if they are interested in collaborating.