Your new care robot has a dilemma. You’re worried about the side effects your medication is having, and decide you can’t take the pills anymore. The robot knows that, according to the drug’s instructions, stopping will harm your health. It could: a) respect your wishes and await developments; b) insist you take them, pressurising you by emphasising the dangers; or c) find a way to get the medication into you without your knowledge, in food or drink, or by taking you by surprise.
So what makes a ‘good’ robot in this case? It all depends on the programming, the algorithms of ethics.
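To make that concrete, here is a minimal, hypothetical sketch of how a designer might reduce the dilemma to an explicit ranking of values. The option names and weights below are invented for illustration; no real care robot or standard defines them.

```python
# Hypothetical sketch: a care robot's dilemma as an explicit value trade-off.
# Option names and weights are invented for illustration only.

OPTIONS = {
    "respect_wishes": {"autonomy": 1.0, "harm_prevention": 0.0},
    "pressurise":     {"autonomy": 0.4, "harm_prevention": 0.6},
    "covert_dosing":  {"autonomy": 0.0, "harm_prevention": 1.0},
}

def choose_action(value_weights):
    """Pick the option scoring highest under the designer's value weights."""
    def score(option):
        return sum(value_weights[v] * w for v, w in OPTIONS[option].items())
    return max(OPTIONS, key=score)

# A designer who weights patient autonomy highest builds a different
# 'good' robot than one who weights harm prevention highest.
print(choose_action({"autonomy": 0.7, "harm_prevention": 0.3}))  # respect_wishes
print(choose_action({"autonomy": 0.2, "harm_prevention": 0.8}))  # covert_dosing
```

Change the weights and the ‘good’ action changes with them: the ethics lives wherever those numbers come from.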
This isn’t an academic exercise. Robot technologies are coming into our lives – into our homes, our workplaces, our vehicles – and will increasingly do so. Technology firms globally see the huge markets and the potential for affordable, mass-produced robot helpers that can tackle some of the world’s social and technical issues: they can provide care and support for an ageing population and act as companions for the isolated. They could deliver cheaper childcare for working parents. They can remove the human element of risk in situations like driving vehicles and performing complex surgery, and remove the need to risk human lives in war and conflict. The trend for voice assistants in the home and driverless cars is just the beginning.
Who’s going to make robots good – able to make decisions that earn our trust as we learn to live with Artificial Intelligence? At the moment the design and direction of robots is in the hands of AI scientists, engineers and manufacturers. The UK is taking a lead in the area of robot ethics: last year the British Standards Institution published the first standard for robot ethics, BS 8611. It’s a move that has already attracted attention from many other countries conscious of the importance of a shared global position on the role of robots in daily life.
This, however, isn’t enough. We need to make sure, from this early stage, that the views and feelings of the wider general public are taken into account. How far are we willing to let robots into our lives? What kinds of roles are we comfortable with, and what’s going too far? What kinds of values and standards of behaviour can we all agree are right?
As part of its work with the BSI’s UK Robot Ethics Group, Cranfield University is looking to gather the thinking and views of the public (accessible here: www.cranfield.ac.uk/).
For all the benefits that can come from using robots in some roles, there is inevitably going to be resistance to the wholesale introduction of non-human ‘helpers’. The creepy robot, perverted by a tangle of overcomplicated logic and self-learning, has been a classic feature of science fiction for the past 70 years. To an extent, we’ve learnt to love our fear of robots. At the same time, another development over the past decade or so has been our emotional attachment to technology. Research has demonstrated that we have become highly protective of our personal digital devices. Smartphones have become core to our networks, relationships, daily consciousness and interactions; we feel a physical sense of loss without them. And it’s likely that as robots become more familiar, and their AI more sophisticated at demonstrating personality, we’re going to take that kind of devotion to a new level.
While it’s been important to be aware of the potential dangers of autonomous technologies, and not to be complacent or dewy-eyed about tech with character and charm, we need to find a balanced position for the future. So far it’s been science fiction that’s provided the clearest basis for robot ethics. In 1942 Isaac Asimov set out the Three Laws of Robotics, which AI scientists continue to refer to: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
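As a rough, hypothetical illustration of how strictly those priorities order one another (the candidate actions and their properties below are invented; real behaviour would never be this tidy), the Laws act like a chain of vetoes:

```python
# Hypothetical sketch: Asimov's Three Laws as a strict chain of vetoes.
# Candidate actions and their properties are invented for illustration.

CANDIDATE_ACTIONS = [
    # (name, injures_human, inaction_allows_harm, disobeys_order)
    ("await_developments", False, True,  False),  # respects the refusal; harm follows
    ("pressurise_patient", False, False, True),   # overrides the patient's order
    ("covert_dosing",      False, False, True),   # overrides it more drastically
]

# Obeying the patient's order ("no more pills") would itself allow harm,
# so in this scenario the order conflicts with the First Law.
ORDER_CONFLICTS_WITH_FIRST_LAW = True

def permitted(name, injures, allows_harm, disobeys_order):
    """Apply the Three Laws in strict priority order."""
    if injures or allows_harm:                            # First Law veto
        return False
    if disobeys_order and not ORDER_CONFLICTS_WITH_FIRST_LAW:
        return False                                      # Second Law veto
    return True                                           # Third Law not at stake here

print([a[0] for a in CANDIDATE_ACTIONS if permitted(*a)])
# -> ['pressurise_patient', 'covert_dosing']
```

The only option that respects the patient’s refusal is the first to be vetoed: inaction allows harm, so the First Law strikes it out, and the Second Law then excuses overriding the refusal.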
Asimov’s Laws are still a useful foundation – but of course, under these, that new care robot would be busy finding a way to get that medication into you somehow. And once it had done so, would you ever trust it again?
Dr Sarah Fletcher is a Senior Research Fellow at the Centre for Structures, Assembly and Intelligent Automation, Cranfield University, www.cranfield.ac.uk/