The Ethical Machine Versus The Ruthless Samaritan

In the TV series “Person of Interest”, protagonist Finch has built a software system that is capable of predicting crime by combining artificial intelligence (AI) with access to all kinds of public and private data. He has designed The Machine’s code in such a way that it values the lives of individuals. The Machine has been given one objective: to come up with the social security number of a person who is about to get involved in a serious crime. The Machine’s nemesis is Samaritan, a machine that can be told what to do and can be hired to do the job, whatever it takes. For certain groups, such as criminals and governments, it is an ideal machine, as it works according to one rule: the end justifies the means. The underlying message: AI holds great promise, but in the wrong hands it will turn out to be an uncontrollable beast.

The Ethical Machine versus The Ruthless Samaritan discussion can be expected to heat up with the emerging world of smart objects. But it always takes a serious incident to attract people’s attention to what is actually going on with yet another cool new thing such as AI. Now that we have our first driverless car casualty, things are getting more serious. The thing is: the car as we know it acts according to our manual actions. The driverless car (DC), however, acts according to a set of smart computer code that does the driving. Can we trust this magic black box to drive us to our destination safely?

Right now DCs are still under development, even though numerous live tests are being conducted. Tesla’s driverless car feature, however, can already be switched on and off. After the incident Tesla stated that human drivers should still pay attention to traffic as they bear ultimate responsibility: “It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it.” Tesla did not foresee that human drivers would be so quick to leave the driving to the robot and start doing other things.

So who is responsible in case of an accident? Can the car manufacturer in a court of law blame the driver for negligence, or can the driver blame the car manufacturer for a malfunctioning product? The law is slow to catch up with technological developments. But as long as robots don’t pay taxes or have a social security number, the human ‘driver’ will always be held responsible for the car’s actions.

We have no clue how good the AI in the car’s black box is, how it works and how it decides. How will the car react to other DCs with different AI installed in an emergency situation? And if your car is aware of the other DC, and also aware that this DC is of a different make, would it decide differently when it meets a DC of its own kind? Will your own car behave as an Ethical Machine that is programmed to minimize the number of potential fatalities, even if it means you as the owner and driver/passenger could be hurt? Or will you be equipped with the AI system of the Ruthless Samaritan, which will try to keep you safe at all costs, no matter the risk of causing deadly collateral damage? Better still, will we be able to choose when buying a DC? If we opt for all-out Ethical Machines we will probably end up with well-organized and efficient traffic with far fewer accidents. In that case all DCs will eventually conclude that lowering risk both protects their own passengers and minimizes collateral damage. But if there is a choice of systems, most people will opt for the Ruthless Samaritan, as this system is primarily focused on its passengers’ interests. If everybody opts for the Ruthless Samaritan we may have fewer accidents between cars but more collateral damage (bicycles, pedestrians, parked cars, buildings, etc.).
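To make the contrast concrete, both policies can be read as the same cost function over candidate manoeuvres, differing only in how heavily harm to the car’s own occupants is weighted against harm to everyone else. The sketch below is purely illustrative: the names (Maneuver, choose_maneuver, occupant_weight) and the numbers are assumptions made up for this example, not anything a DC vendor has published.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # expected harm to the car's own passengers (0..1)
    external_risk: float   # expected harm to others: pedestrians, cyclists, other cars (0..1)

def choose_maneuver(candidates, occupant_weight):
    """Pick the manoeuvre with the lowest weighted expected harm.

    occupant_weight = 1.0  -> 'Ethical Machine': occupants and outsiders count equally.
    occupant_weight >> 1.0 -> 'Ruthless Samaritan': occupant safety dominates the decision.
    """
    def cost(m):
        return occupant_weight * m.occupant_risk + m.external_risk
    return min(candidates, key=cost)

options = [
    Maneuver("brake hard, stay in lane", occupant_risk=0.30, external_risk=0.05),
    Maneuver("swerve onto the sidewalk", occupant_risk=0.05, external_risk=0.60),
]

print(choose_maneuver(options, occupant_weight=1.0).name)   # Ethical Machine: brakes hard
print(choose_maneuver(options, occupant_weight=10.0).name)  # Ruthless Samaritan: swerves
```

The unsettling part is that nothing in the car’s behaviour on an ordinary day reveals which weight it is running with; the difference only shows up in the split second of an emergency.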

Parts of the DC code implicitly contain moral choices. We have no clue who is making those choices or what they are. As DC systems are designed by privately owned companies, these moral choices can be based on business objectives that do not necessarily coincide with consumer interests. And it is not only the DC designer’s morality that matters: every stakeholder in the transport ecosystem has a different one. Car repair shops have no interest in minimizing car damage; damage is their core business. Car insurance companies, on the other hand, have a serious interest in keeping damage down, as minimizing claims is key to their bottom line. But if Ethical Machines ultimately do away with 99% of car accidents, the need for compulsory car insurance could disappear and erase their top line. Car manufacturers like to make their customers happy and may want to offer a feature that changes the DC system’s moral code on the fly by adjusting its risk tolerance. They could also add a backdoor to stop the car at will (a bit like Amazon’s remote erase option on its Kindle); something a government would appreciate, if not make mandatory for DCs, but that would open up a Pandora’s box for criminals and hackers.
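If a manufacturer did expose such a knob, it could be as small as a single adjustable parameter layered on top of a cost function like the one sketched above. Again, DrivingPolicy and set_risk_tolerance are hypothetical names invented for this example; the point is only how little code separates one “morality” from the other.

```python
class DrivingPolicy:
    """Hypothetical wrapper that lets the manufacturer (or owner) retune the
    car's moral code at runtime; reuses choose_maneuver and the example
    options list from the previous sketch."""

    def __init__(self, occupant_weight=1.0):
        self.occupant_weight = occupant_weight  # 1.0 = Ethical Machine default

    def set_risk_tolerance(self, occupant_weight):
        # A single over-the-air update of this parameter silently changes
        # every future emergency decision the car will make.
        self.occupant_weight = occupant_weight

    def decide(self, candidate_maneuvers):
        return choose_maneuver(candidate_maneuvers, self.occupant_weight)


policy = DrivingPolicy()            # ships as an Ethical Machine
print(policy.decide(options).name)  # "brake hard, stay in lane"

policy.set_risk_tolerance(10.0)     # one update later: a Ruthless Samaritan
print(policy.decide(options).name)  # "swerve onto the sidewalk"
```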

A lot is riding on the development of the driverless car. It is not just about the evolution of the car. It is the bleeding edge of an increasingly robotized society in which we have to decide what place we give AI, robots and smart things. This is especially important since AI will become more intelligent through machine learning, the outcome of which is unpredictable and therefore possibly undesirable. Microsoft’s experimental Twitter chatbot Tay turned within 24 hours from an innocent juvenile parrot into a Nazi propagandist by gathering experience through Twitter conversations. The experiment reflects both human nature and the current limits of machine learning. Rolling out immature technologies in crucial everyday systems is a risky business. In the meantime, would you opt for a Tesla model equipped with an Ethical Machine or a Ruthless Samaritan?