Tuesday, December 23, 2008

W. Wallach and C. Allen's "Moral Machines"

Colin Allen is a Professor of History & Philosophy of Science and of Cognitive Science at Indiana University. Wendell Wallach is a consultant and writer and is affiliated with Yale University's Interdisciplinary Center for Bioethics.

They applied the “Page 99 Test” to their new book, Moral Machines: Teaching Robots Right from Wrong, and reported the following:
Page 99 of Moral Machines coincides with the beginning of Chapter 7: "Bottom-up and Developmental Approaches." It's at the heart of our discussion of how, practically, one might go about engineering "artificial moral agents": machines that have some facsimile of moral decision-making capabilities. In Chapter 6, top-down approaches (decision procedures based on traditional moral theory) are shown to be unworkable for computers (and, we think, for people too). Human beings become moral agents through a process of learning and development in nurturing environments, and they have the capacity to do so because of an evolutionary endowment. Bottom-up approaches to artificial moral agents attempt to use the principles of learning and evolution. But to understand the practicality of building machines on such principles, it is necessary to survey what is known about their operation in human beings. Hence Chapter 7.

On page 99, we mention DNA, moral evolution, moral development, and psychopathy. We dismiss the simplistic nature vs. nurture opposition, and we refer to the fearsome complexity of gene, environment, and culture interactions. We quote Alan Turing's classic 1950 article that set the agenda for artificial intelligence: "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?" Can it be done?

Moral Machines is not about science fiction (although Asimov's Three Laws of Robotics, Commander Data, C-3PO, and RoboCop all appear). The book is about the very real insertion of autonomous machines into many aspects of home, commerce, healthcare, warfare, and more. We think that the task of making these autonomous systems act ethically requires the deepest reflection on how it is that humans sometimes manage to acquire "a modicum of decency," as we put it in the opening paragraph on page 99. If there's nothing of interest on that page, Moral Machines is not the book for you. But for readers who want a guided tour of the relevant parts of computer science, neuroscience, psychology, biology, and philosophy, we think that page 99 provides a good example of what you're in for.

Read "6 Ways to Build Robots that Will Not Harm Humans" and other book-related posts at the Moral Machines blog.

Learn more about Moral Machines at the Oxford University Press website.

--Marshal Zeringue