I’m sorry, Dave. I’m afraid I can’t do that.

These were the words of HAL (Heuristically programmed ALgorithmic computer) to astronaut David Bowman in Arthur C. Clarke’s novel ‘2001: A Space Odyssey’ (1968). As with many science fiction books, it takes a while before futuristic concepts become reality. In this case the onboard HAL 9000 computer concludes that the crew of the spacecraft is jeopardizing its primary objective and therefore sees no alternative but to kill the crew. Clarke lays out an issue that is – almost 50 years later – going to play an important role as robots, computers and algorithms take on more and more roles and tasks in our society. The emergence of self-driving cars and luggage, smart things, smart homes, smart cities, smart energy and smart roads is just the beginning. Intelligence is being added to a growing array of existing artefacts and new concepts to increase and optimize functionality, efficiency and/or comfort. As a consequence, the ability of these artefacts to operate autonomously is rapidly increasing. Given the fast adoption of intelligent devices, Asimov’s Three Laws of Robotics, introduced in his short story ‘Runaround’ (1942), might seem a starting point for a moral framework to be built into every robot. Yet last year we concluded that different AI code in different self-driving cars could lead to different, if not undesirable, outcomes. There is no single moral code, nor are there basic robotic laws that could be implemented in robots or smart devices; there is just functional code. Moreover, from an entrepreneurial point of view, a moral code or basic laws of robotics seem undesirable, as they potentially limit business opportunities.
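To make this concrete, consider a deliberately simplified, hypothetical sketch of how two manufacturers might encode a collision decision. All function names, inputs and thresholds below are invented for illustration; the point is only that purely functional code, written without any shared moral framework, produces different outcomes in the same situation:

    # Hypothetical sketch: two vendors' collision-avoidance policies.
    # All names, inputs and weights are invented for illustration;
    # real systems are vastly more complex.

    def vendor_a_policy(occupant_risk: float, pedestrian_risk: float) -> str:
        # Vendor A weights occupant safety twice as heavily as pedestrian safety.
        return "swerve" if pedestrian_risk > 2 * occupant_risk else "brake"

    def vendor_b_policy(occupant_risk: float, pedestrian_risk: float) -> str:
        # Vendor B simply minimizes expected harm, whoever bears it.
        return "swerve" if pedestrian_risk > occupant_risk else "brake"

    # The same situation, different code, different outcome:
    print(vendor_a_policy(0.3, 0.5))  # -> brake
    print(vendor_b_policy(0.3, 0.5))  # -> swerve

Neither policy is “the” moral code; each is just a functional design choice, which is precisely why identical incidents could end differently depending on the brand of the car.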

Instead, there is an emerging debate around the idea that robots and other autonomously operating devices should operate under some kind of legal framework, just like people and organizations. The basic idea stems from the incidents that will inevitably arise when autonomous artefacts (robots, bots, AIs, androids and so on) interact with humans and with other artefacts, incidents that will require some kind of settlement. Hence the need to introduce the concept of liability for autonomous AI. After all, the ultimate question is: who is going to pay for it?

Just a few weeks ago, the EU Legal Affairs Committee called for the establishment of a new European agency for robotics and a code of ethical conduct. The new agency would be charged with three tasks:

  • To design a “voluntary ethical conduct code to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards”.
  • To devise liability rules for self-driving cars and, in the long term, to construct “a specific legal status of ‘electronic persons’ for the most sophisticated autonomous robots, so as to clarify responsibility in cases of damage”.
  • To monitor the impact of robotics on society, particularly the loss of jobs in certain fields, and to urge the Commission to follow these trends closely, including new employment models and the viability of the current tax and social security system in the age of robotics.

At least the EU recognizes the big impact autonomous smart devices will have on people and society. But a voluntary ethical code of conduct will almost certainly lead to diverging ethical codes and unforeseen outcomes.

The second task of the planned agency is to open up the blame game in case of accidents involving self-driving cars. But how will the liability question deal with the other elements of a self-driving environment, such as traffic information, smart roads, smart traffic lights and weather information? What happens when one of those elements breaks down? Making a self-driving car a legal entity does not quite cut it. Manufacturers of self-driving cars will try to shift liability to other parties in the driving environment.

It may still be a bit down the road, but technological developments and their sudden adoption rates always seem to take us by surprise, with legal matters as usual trailing eons behind. It is a good thing to start thinking now about how to handle new autonomously operating artefacts in an open environment. As robot adoption grows, a legal status for robots will have to evolve. This is a complex matter involving different stakeholders with different points of view on the design of such a construct. If it were up to robot manufacturers, the legal construct would reduce a robot to its most basic components so as to divert liability to others. If it were devised from the consumer perspective only, the robot provider alone would be liable. Most likely both constructs will evolve. To top it off, after a while the legal construct will be complemented by a three-strikes law for robots: defunct robots will be decommissioned and sent off to the dried-up Lake Michigan.