From the first introduction of the robot concept in Čapek’s 1921 play Rossum’s Universal Robots to the many novels and movies that have followed, the idea of robots has existed mostly as fiction. Technological advances of the past two decades, progressing exponentially as described by Moore’s Law, have now made robots a reality. Autonomous machines have become integral to our society, assisting in surgeries, serving as astronauts and as child and elder care providers, and much more. Robot Ethics outlines the contemporary use of robots, delves into the problems that rapid robotic innovation might create, and examines how these potential problems can be prevented. In essence, the book is about roboethics and machine ethics, and it is the first of its kind to draw together thinking from the relevant disciplines.

The first part of Robot Ethics establishes today’s widespread use of robots and forecasts potential problems, using the examples of the “policy vacuum” surrounding the Human Genome Project and the use of landmines, which went unbanned for hundreds of years until an international treaty ban in 1999. The first chapter’s emphasis on the importance of foresight sparks interest and sets the tone for the rest of the book. Chapter 2 gives the working definition of the term robot as a machine that is “able to process information from sensors and other sources, such as internal set of rules, either programmed or learned, and to make some decisions autonomously.” Chapter 3, “Robotics, Ethical Theory, and Metaethics: A Guide for the Perplexed,” is the ethical “core” of the book and the chapter in which Abney answers “four crucial” questions for doing ethics.
He starts from Asimov’s Three Laws of Robotics, which are based on consequentialist and deontological theories, and concludes that virtue ethics is “a more helpful approach for robots” because it attempts to answer the question “What should I be?” rather than “What should I do?” He proposes that rule-utilitarianism is important, along with fostering moral education through a professional code for roboticists. When applied to robot decision making, ethical theories are limited because of the inherent “frame problem” robots have. Last, Keith Abney ends with a discussion of moral personhood and agency in robots.

The next section, on the “Design and Programming” of robots, is a fascinating sampling of ideas on how to engineer an ethical robot. The chapter by Colin Allen and Wendell Wallach, the authors of Moral Machines: Teaching Robots Right from Wrong, proposes “a comprehensive framework” for the new field of machine ethics. In their view, these increasingly autonomous systems must be programmed with a sort of moral sensitivity to guide their decisions. They believe the study of artificial morality will facilitate “a richer understanding of human moral decision,” a core concept of Robot Ethics.

Unsurprisingly, the next chapter examines the process of programming Buddhist robots. Starting with a quote by the Dalai Lama on the potential for robots to “become sentient beings,” this chapter by James Hughes presents the “core of Buddhist metaphysics,” the five skandhas, and the importance of sensory input in the development of consciousness. Programming the senses of “aversion or attraction,” suffering, self-awareness, and compassion are key ingredients in creating Buddhist robots. Although very novel in its concepts, this chapter might read as implausible, because it concludes that machines, like children, could develop degrees of capacity for growth, morality, and self-understanding, which gives us an ethical obligation to endow them with these skills.
The following chapter presents a “top-down” approach, with Selmer Bringsjord and Joshua Taylor taking a Divine-Command stand on programming ethics to regulate “a real-world war fighting robot.” Unlike the Buddhist “bottom-up” approach to robot morality, here a potentially lethal robot is programmed to follow perceived divine commands while having some knowledge of the world. Despite the difficulty of the terminology, these assertions are an important illustration of how ethical and moral codes can be translated into computational code.

The section “Military Robotics” quite appropriately builds on the previous Divine-Command chapter in addressing how robots are changing the nature of warfare. It discusses examples of the use of planes, drones, and unmanned combat vehicles in the air and on the ground. Noel Sharkey presents a counterargument to Arkin’s view that “robots