Monday, May 4, 2020
Science fiction fans will be familiar with Isaac Asimov's Three Laws of Robotics, designed to keep artificial intelligences from harming humans (think of HAL from 2001: A Space Odyssey as exactly the kind of AI such laws are meant to prevent):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws
But as AI has become more sophisticated, and the ethical challenges expand beyond harm to humans, scientists are looking elsewhere for ethical guides for AI. One suggestion is that AI be designed to conform to the Universal Declaration of Human Rights. In fact, Utopia Analytics, based in Finland, has published an ethical AI manifesto based on the UDHR.
UDHR skeptics, on the other hand, believe the declaration simply doesn't provide an adequate framework for resolving complex conflicts between human rights. As an alternative, they offer more nuanced ethical analyses that can resolve such conflicts. The downside is that these analyses lack the near-universal buy-in the UDHR enjoys. Further, ethical standards such as fairness are arguably vaguer than the human rights principles that embody those concepts and that have benefited from decades of application and interpretation.
At the very least, the UDHR and other human rights agreements can provide a starting point for the development of AI ethics. As Mark Latonero of Data & Society puts it: "In order for AI to benefit the common good, at the very least its design and deployment should avoid harms to fundamental human values. International human rights provide a robust and global formulation of those values." Where it goes from there remains the subject of lively debate.