Understanding how we make choices is at the core of a new site that MIT has created, ‘The Moral Machine’. Gregory Punshon referred to this site in his recent post about Mercedes-Benz’s decision that, in the event of a collision between one of their self-driving cars and a bystander, the car would be programmed to protect the life of the passenger. Gregory noted that Mercedes-Benz justified prioritising passengers’ lives on the grounds that the car has more control over the passengers’ situation. This choice is one of the many trolley problems that arise for us when machines are given ‘intelligence’.
The Moral Machine’s interest in decision making is concerned with machine intelligence.
‘From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb.
This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.’
I think it calls for more than that. Who, in the choices to be made in this coming digital infrastructure world, is weighing in on the side of community wellbeing? Who is speaking for ‘the common man’? Silicon Valley speaks for the technicians who are inventing new devices. Wall Street speaks for the businesses like Mercedes-Benz who stand to make a profit from them. But who speaks for us?
In a wiser world, this would be the politicians that the people elect to represent them, but they do not seem to be doing this. Where do we now look?
This is not only a problem for the management of machines. Paul Keating’s idea for a “Reserve Bank” for physical infrastructure (see the last post) raises exactly the same questions.
What is the difference between handing over infrastructure decisions to an unaccountable body of private sector financiers and public servants and handing them over to machines? How should decisions involving the disposition of community taxes and the future directions of community wellbeing be made?
How are we going to make choices?
And who should get to make them?
Infrastructure is created only to benefit humans. Water is provided to the home because we need water; sewage is taken away to avoid massive public health problems. Infrastructure exists for our benefit.
As soon as machines “make decisions”, they are merely doing what they have been programmed to do by a human. Machines don’t have “intelligence” as such; they have a series of instructions that force them to act in certain ways. These are very complex instructions, but we have already seen the problem when a driverless car didn’t recognise a trailer being towed by the car it ran into: its instructions weren’t good enough.
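The point can be made concrete with a minimal, entirely hypothetical sketch: the “decision” attributed to the car is just a branch a programmer wrote in advance. The function name and the rule below are assumptions for illustration only, not anything Mercedes-Benz has published.

```python
# A hypothetical, hard-coded collision rule. The machine does not "choose";
# it executes the moral choice a human programmer already made.

def collision_priority(passenger_at_risk: bool, bystander_at_risk: bool) -> str:
    """Return whom this (imaginary) self-driving car is instructed to protect."""
    if passenger_at_risk and bystander_at_risk:
        # This line is where the trolley problem was settled, by a person.
        return "protect passenger"
    if bystander_at_risk:
        return "protect bystander"
    return "no action needed"

print(collision_priority(passenger_at_risk=True, bystander_at_risk=True))
```

However sophisticated the real software is, the same holds: somewhere in it, a human wrote the rule.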
The problem as I see it is not whether the machine is “moral”, but whether the people programming the machine keep in mind that their machines are there for human benefit, in whatever form that takes.