Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, despite one being regarded as highly structured and the other as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.
“You can’t tell a person to factor certain considerations out, but you can do that to a computer,” he said. “There are a lot of advantages to these various uses and they’re clearly going to grow.”
He also discussed an application that he and a team of professors, graduate students and undergraduate students are currently developing, which will build human morality into artificial intelligence. By presenting users with various scenarios involving moral judgment, the application would observe how people determine which features of cases are morally relevant and then test the interaction of morally relevant features in complex cases.
These inputs would then serve as the foundation for an artificial intelligence with humans’ moral considerations programmed in, he explained.
“Our goal is to create artificial intelligence that mimics human morality to avoid doomsdays and to improve our understanding of human moral thinking,” Sinnott-Armstrong said.
- Walter Sinnott-Armstrong discusses artificial intelligence and morality