Robots and ethics
Should laws only apply to humans? Or should they also apply to robots? We asked Alan Winfield, Professor of Robot Ethics, about ethical robots, how intelligent they are and whether any laws apply to them.
Robots are interacting more and more with humans, and the question is whether laws should govern how robots act around them. For example, in 2016, an artificial intelligence (AI) chatbot called Tay was released by Microsoft on Twitter and shut down only 16 hours after launch for posting offensive language. The incident caused controversy, but according to Microsoft, quoted in an article by The Washington Post, it was instigated by a small group of trolls who exploited a vulnerability in the bot. This experience shaped Microsoft’s work on its second social chatbot, Zo.
Alan Winfield, Professor of Robot Ethics at the University of the West of England in Bristol, writes actively on his blog about ethical robots. An advocate of robot ethics and co-founder of the Bristol Robotics Laboratory, he focuses on cognitive robotics and investigates how robots can serve as working models of life, evolution and intelligence. ‘When I started giving lots of public talks and debates I realised that there are many fears about robots and AI. That sensitised me to the need for robot and AI ethics,’ Professor Winfield says.
Laws of robotics
Professor Winfield thinks that Asimov’s three laws of robotics are important. Isaac Asimov, a writer known for his works of science fiction, devised three laws to govern robots in his 1950 short-story collection I, Robot. ‘They [Asimov’s laws] established that robots should be governed by principles.
However, no one seriously expects real-life robots to operate according to Asimov’s three laws. After all, they were written in part to explore the problems that arise when a robot can’t resolve the conflicting demands of two of the laws,’ he says.
The three laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection doesn’t conflict with the first or second law.
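Purely as an illustration, and not something drawn from Professor Winfield’s own work, the three laws can be read as a strict priority ordering over a robot’s candidate actions: any violation of the first law outweighs any violation of the second, which in turn outweighs any violation of the third. The short Python sketch below encodes that reading; the actions and their violation flags are invented for the example.

```python
# Illustrative sketch only: Asimov's three laws as a lexicographic
# preference over candidate actions.  The table of violation flags is a
# toy stand-in for what a real robot would have to infer from sensing
# and prediction.

ACTIONS = {
    # action: (harms a human, disobeys an order, endangers the robot)
    "push_human":   (True,  False, False),
    "ignore_order": (False, True,  False),
    "enter_fire":   (False, False, True),
    "stand_by":     (False, False, False),
}

def law_violations(action):
    """Violation flags ordered by law priority; lower tuples are better."""
    return ACTIONS[action]

def choose_action(candidates):
    # Python compares tuples element by element, so minimising the
    # violation tuple enforces the strict priority: the first law
    # outranks the second, which outranks the third.
    return min(candidates, key=law_violations)

print(choose_action(["push_human", "ignore_order", "stand_by"]))  # stand_by
print(choose_action(["push_human", "enter_fire"]))                # enter_fire
```

The second call shows the ordering at work: with no harmless option available, the robot prefers endangering itself (a third-law violation) to harming a human (a first-law violation).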
Professor Winfield says these laws assume that robots are autonomous, self-aware and intelligent, and that the principles are relevant to all types of AI and autonomous systems. ‘Robots, or to be more precise AIs, have been better than humans at playing chess for decades. Since 2016, they have been better at playing Go [a board game for two players] than all humans,’ he comments. Professor Winfield predicts that driverless cars will drive better than humans within 30 years, and that robots could be as smart as humans hundreds of years from now.
However, there are a few complex challenges in designing a robot. Professor Winfield says the three biggest are materials, energy and AI. He wants to make robots out of soft, smart materials so that they are safer around humans. ‘A lot of work’s going into artificial muscles to replace heavy electric motors,’ he notes.
‘Energy is a big issue,’ says Professor Winfield. ‘Ideally, we’d like our soft robots to consume no more energy than an animal with the same weight and strength,’ he adds. The third challenge is the artificial intelligence itself, the software that gives a robot its intelligence. ‘My favourite definition of a robot is that it’s an embodied AI – an AI in a physical body,’ he says.
‘Right now, robots can only be programmed or taught very limited skills, although autonomous car driving is still pretty sophisticated. We’ve no idea how to build robots with human skills like creativity or intuition, nor can we build robots with free will. We can’t make a robot that can have feelings or self-awareness,’ he says.
First steps towards ethical robots
The question really isn’t about feelings but about how robots can be taught ethics. ‘There are a few people working on the hard problem of how to build ethical robots. But the work is right at the very beginning, so it’ll be a long time, if ever, before we have real-world ethical robots,’ he says.
In a step towards building an ethical robot, Professor Winfield has worked on how robots could adhere to Asimov’s laws. ‘We’re not claiming that a robot, which apparently implements part of Asimov’s famous laws, is ethical in any formal sense,’ he says in one of his blog posts about his trial tests with robots.
In his first trial of a robot–human interaction, Professor Winfield’s A-robot, named after Asimov, successfully prevents the H-robot, a proxy human, from falling into a hole. In the second trial, the A-robot faces an ethical dilemma when a second proxy human, the H2-robot, heads for the hole at the same time.
He thinks major breakthroughs in AI aren’t needed to build an ethical robot, but he also warns that building ethical robots might not be such a good idea. In another demonstration, in which the ethical robot helped another robot acting as a human, a small change to its logic transformed it into an unethical robot. ‘The ease of transformation from ethical to unethical robot is hardly surprising. It’s a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated,’ he explains.
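His point can be made concrete with a toy sketch. Assuming a hypothetical consequence engine that predicts, for each candidate action, how much harm would come to the proxy human (the names and numbers below are invented for illustration, not taken from Winfield’s implementation), the only difference between the ethical and the unethical robot is the sign of a single weight:

```python
# Toy consequence-engine sketch (hypothetical, not Winfield's actual code).
# The robot "imagines" each candidate action and scores it by the harm it
# predicts for the human; one weight decides whether the same machinery
# protects the human or endangers them.

# Predicted harm to the proxy human for each action the A-robot could take,
# e.g. 1.0 means the human falls into the hole.
PREDICTED_HARM = {
    "intercept_human": 0.0,  # block the human's path to the hole
    "stay_put":        1.0,  # do nothing; the human falls in
}

def choose_action(actions, ethical_weight=-1.0):
    # ethical_weight = -1.0: higher predicted harm gives a lower score,
    #                        so the robot acts to protect (ethical).
    # ethical_weight = +1.0: the *same* machinery now seeks harm (unethical).
    return max(actions, key=lambda a: ethical_weight * PREDICTED_HARM[a])

print(choose_action(["intercept_human", "stay_put"]))                      # intercept_human
print(choose_action(["intercept_human", "stay_put"], ethical_weight=1.0))  # stay_put
```

Everything else in the two robots is identical, which is exactly why Winfield finds the ease of the transformation ‘hardly surprising’.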
Professor Winfield summarises several categories of ethical agent that a robot can be. Two of these are implicit and explicit ethical agents: implicitly ethical robots are designed so that their behaviour avoids negative ethical effects, while explicitly ethical robots can reason about ethics for themselves. He argues that we should, and must, be designing implicitly ethical robots rather than focusing on explicitly ethical agents. His own trial robot was an explicitly ethical agent.
Though there are no formal laws to govern robots yet, there are standards for engineering them. Professor Winfield helped draft the British standard BS 8611, published in 2016 as the world’s first standard on robot ethics: a guide to the ethical design of robots and robotic systems. The IEEE Standards Association’s Global Initiative, part of the world’s largest technical professional organisation, is in the process of drafting new standards on the transparency of autonomous systems, such as P7001, which Professor Winfield is leading.
With his deep interest in mobile robots, it’s no surprise that Professor Winfield’s favourite robot movies are WALL-E, A.I. Artificial Intelligence by Steven Spielberg and Bicentennial Man. ‘The societal implications of full ethical agents, if and when they exist, would be huge. For now, at least, I think I prefer my ethical robots to be zombies,’ he concludes.