Great in theory, terrible in practice
Isaac Asimov first came up with his Three Laws of Robotics while writing his science-fiction short story "Runaround," the story of a robot gone missing (1, 2). These laws state that:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given [to] it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In theory, these laws sound fantastic. They represent the ideal of how most people would like robots to behave, and to the average person, they seem simple and easy to implement. However, in reality, they are completely impractical. As Daniel Wilson, a roboticist, put it, “Asimov’s rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?” (3).
Right now, robots can’t understand commands the way humans do, which makes broad laws like Asimov’s extremely hard to implement. It’s essentially a language barrier: humans understand natural language like English, while robots only understand code. Because of that, a robot needs very specific instructions telling it what to do in every situation (4). People might intuitively grasp what counts as “harm to a human” or “protecting its own existence,” but these are broad concepts that would have to be broken down into specific situations before a robot could act on them. For example, a robot could be taught that hitting a person would harm them, yet have no idea that acid would burn them, simply because no one programmed that case in. Since it’s impossible to anticipate every situation that qualifies, it becomes virtually impossible to implement Asimov’s Laws of Robotics at all (3, 4).
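To make that point concrete, here is a minimal, purely hypothetical Python sketch of what the First Law turns into once someone actually has to program it: a hand-maintained list of specific harmful situations. None of this comes from a real robotics framework; the names `KNOWN_HARMS` and `violates_first_law` are invented for illustration.

```python
# Hypothetical sketch: "a robot may not injure a human being" expressed as code.
# Every kind of "harm" the programmer thought of must be spelled out as a
# concrete, checkable condition on a proposed action.
KNOWN_HARMS = {
    "strike_person": lambda action: action.get("applies_force_to_human", False),
    "drop_heavy_object": lambda action: (
        action.get("releases_object", False)
        and action.get("object_mass_kg", 0) > 5
        and action.get("human_below", False)
    ),
    # ...every other harmful situation must be added by hand. Nothing here
    # covers, say, spilling acid, so the rule silently permits it: the robot
    # was simply never told that acid burns.
}

def violates_first_law(action: dict) -> bool:
    """Return True only if the action matches a harm someone enumerated."""
    return any(check(action) for check in KNOWN_HARMS.values())

# Pouring acid near a person is not flagged, because no rule describes it.
print(violates_first_law({"pours_liquid": True, "liquid": "acid", "human_nearby": True}))
# -> False
```

The "law" here is only as complete as its list of cases, which is exactly the gap described above.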
Additionally, Asimov’s Laws of Robotics imply that a robot “protecting its own existence” is always the morally correct thing to do. However, there are many cases where we want a robot to risk or sacrifice itself, such as when it is sent to explore a hazardous area (3). Asimov’s Laws aim for simplicity, but that very simplicity makes them too broad to reliably lead a robot to do what is best for humanity.
The fundamental problem behind Asimov’s Laws of Robotics is that they attempt to give robots a moral code that they are unable to understand. Instead, it’s much more effective to implement a moral code for the people making and programming these robots, who are actually able to understand the intent behind the rules and the implications of their code. This approach acknowledges that a moral code is beneficial, but also recognizes the inherent limitations of robotics in implementing one (3).
Sources:
1. The Editors of Encyclopaedia Britannica. (2024, April 5). Three laws of robotics. Encyclopedia Britannica. https://www.britannica.com/topic/Three-Laws-of-Robotics
2. Runaround. (n.d.). Writing Atlas. Retrieved April 15, 2024, from https://writingatlas.com/story/isaac-asimov-runaround
3. Singer, P. W. (2009, May). Isaac Asimov’s Laws of Robotics Are Wrong. Brookings. Retrieved April 13, 2024, from https://www.brookings.edu/articles/isaac-asimovs-laws-of-robotics-are-wrong/
4. Salge, C., & The Conversation US. (2017, July 11). Asimov’s Laws Won’t Stop Robots from Harming Humans, So We’ve Developed a Better Solution. Scientific American. https://www.scientificamerican.com/article/asimovs-laws-wont-stop-robots-from-harming-humans-so-weve-developed-a-better-solution/