The ACCOMPANY project has been developing a "social robot" based on a Care-o-bot platform – that is, an upright, nearly adult-sized robot equipped with a tablet interface and a retractable arm capable of grasping and lifting objects.
The Care-o-bot has the appearance of a robot butler: it moves about at the user's command, can hold its tablet flat as a tray for objects, and can use its arm to pass things from the tray to the person. It can move alongside or behind its elderly user. It can also answer the front door and keep track of a person's current position from a different place in the house.
However, it is much more than a butler. What turns the Care-o-bot into a companion for elderly users is that it can learn about and interact with them. It can learn their routine and adapt to it, knowing when they are likely to wake up, for example, or when their favourite TV programmes are shown. It can display websites on topics of interest, or (in principle) start a Skype session. The companion robot can show concern for its user: it can remind users to take medicines, or suggest getting up and moving around when they have been sitting or lying down for a long time. It can suggest recreations. The robot can also monitor for falls and can summon help.
What if the user refuses to take scheduled medication or doesn't want to ask for help in case of a fall? If the user is present in the house with a visitor, should the robot be able to take instructions from the visitor? What if both the user and the visitor issue instructions? What if, in that case, the instructions conflict? In other words: to what degree should the user be the boss? Should falls and refusals to take medicine simply be observed passively by the robot, or should it be designed to query the apparent decisions of the user? What kind of event would be serious enough for help to be summoned no matter what the user said? It is evident that these cases raise a number of ethical issues.
The ethical framework worked out in the ACCOMPANY project by Professor Heather Draper (University of Birmingham) and myself provides answers to these questions, putting great weight on the value of user autonomy, that is, the freedom to make one's own choices. The idea behind this approach is that ageing users should not have their plans interfered with more than other adults just because they are old.
If autonomy is extremely important, as the ACCOMPANY ethical framework insists it is, then the robot should not be able to act as a surveillance agent for the caring authorities or for non-resident family members, even if both are only interested in the welfare of the elderly person.
The ACCOMPANY ethical framework suggests that robot companions are ethically justified primarily where they enable an elderly person to retain the whole range of decision-making powers ordinary adults have.
However, if the robot is placed by competent authorities (the social or health care system) in the home of the elderly person on the condition that safety risks be reported, and the user agrees to this, then that may be a reason for the robot to summon help even if the user objects.
ACCOMPANY focuses on co-operation between robot and user, with the user left to decide the outcome where there is a disagreement and no important health risk is at stake. Some of the activities the robot suggests may even appeal to users enough that they decide to try them.
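As a loose illustration, the priority ordering just described – the user has the final say by default, overridden only by a serious health risk or by a reporting condition the user agreed to in advance – might be sketched as follows. This is a hypothetical sketch, not part of the ACCOMPANY software; all of the names and categories below are assumptions made for illustration.

```python
# Hypothetical sketch of the escalation policy discussed above.
# None of these names come from the ACCOMPANY project; they are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Event:
    description: str
    serious_health_risk: bool   # e.g. a fall with suspected injury
    covered_by_agreement: bool  # user agreed this kind of risk is reported

def decide_action(event: Event, user_consents: bool) -> str:
    """Return the robot's action, giving the user the final say
    unless a serious risk or a prior reporting agreement applies."""
    if event.serious_health_risk:
        return "summon help"        # safety overrides a refusal
    if event.covered_by_agreement and not user_consents:
        return "report as agreed"   # user consented to this in advance
    if not user_consents:
        return "respect refusal"    # autonomy: the user is the boss
    return "assist"

# The user declines help after a minor stumble: autonomy prevails.
print(decide_action(Event("minor stumble", False, False), user_consents=False))
# A fall with suspected injury is escalated regardless of refusal.
print(decide_action(Event("fall with injury", True, False), user_consents=False))
```

The point of the ordering is that refusals are only ever overridden by conditions the framework itself sanctions: immediate danger, or terms the user accepted when the robot was placed.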
ACCOMPANY demonstrates that a social robot can help prevent isolation and loneliness, offering stimulating activities whilst respecting autonomy and independence.