The Moral Machine

In 2016 a group of scientists at MIT created an online platform to gather information about how people think the outcomes of the actions of autonomous, automated systems “ought” to be decided. Although there were different scenarios, the most famous was the self-driving car that, in the face of an imminent accident, had to “decide” who should be run over and killed and who should be spared. Since it was a matter of deciding what ought to be done in a case that led to harm, this was called a “moral” machine. The “machine” part comes from the fact that the automated system was to be programmed in advance with which choice to make; that is, the choice was no longer “free,” as it would be when a human driver made the decision, but was determined by the programmer, who then bore the moral responsibility.
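
To make the “machine” part concrete, here is a minimal sketch, in Python, of what programming the choice in advance could look like. Everything in it (the data structure, the rule of minimizing total harm, the tie-break in favor of passengers) is an invented illustration, not the rule of any actual system or of the MIT experiment; the experiment’s whole point was to ask the public what such a rule ought to be.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable accident."""
    description: str
    pedestrians_harmed: int
    passengers_harmed: int

def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    """A fixed, pre-programmed rule: minimize total harm,
    breaking ties in favor of the passengers.
    Purely illustrative -- the Moral Machine experiment asked
    the public which rule *ought* to stand here."""
    return min(
        outcomes,
        key=lambda o: (o.pedestrians_harmed + o.passengers_harmed,
                       o.passengers_harmed),
    )

# At the moment of the "accident," nothing is decided anymore:
# the decision was already made by whoever wrote choose_outcome().
crash = [
    Outcome("swerve into barrier", pedestrians_harmed=0, passengers_harmed=1),
    Outcome("continue straight", pedestrians_harmed=2, passengers_harmed=0),
]
print(choose_outcome(crash).description)  # -> "swerve into barrier"
```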

More interesting than the results of this experiment (see http://moralmachine.mit.edu/) are the assumptions it makes. The first important assumption is that there are no accidents: the fact that someone will be killed in an “accident” is not accidental, but a determined outcome of programming. Not just anything could happen, but only certain things could happen, and among these the choice was to be made in advance, so that what does happen, happens “mechanically.” The second important assumption is that the future is no longer open and the present no longer free. Usually, we assume that the past is certain, that the present is free, that is, that we can decide in the present moment what to do, and that the future, that is, the consequences of our actions, is open. We don’t know what the future brings. The future is contingent. This age-old temporal scheme is placed in question by the moral machine.

The idea is that data analytics is able to know what will happen in the future and that, on the basis of this knowledge, interventions in the present can be made that will influence, indeed determine, which future options are realized. This is called datafication. Datafication is 1) the process by which all present states of the world are turned into data, thereby creating a virtual double of reality, and 2) the subjection of this data to descriptive, predictive, preventive, and prescriptive analytics, so that the effects of all possible variables can be simulated and, on the basis of data-based projections of what will happen, interventions in the present can be made to influence future outcomes. Datafication is the basis of intelligent, autonomous, automated systems such as self-driving cars, but also of personalized medicine, learning analytics in education, business intelligence in the private sector, and much more. This is what makes the moral machine interesting. It is a parable of the digital age and poses central questions about what it means to live in a datafied world.
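
The datafication loop can be sketched schematically. The following toy code (all names and the numerical “model” are assumptions made for illustration, not any real platform’s API) shows the structure the definition describes: the present is sensed into a data double, possible interventions are simulated, and the one with the “best” projected future is prescribed.

```python
# A toy sketch of the datafication loop: sense -> predict -> intervene.
# Everything here (names, the additive "model") is illustrative only.

def sense(world: dict) -> dict:
    """Descriptive analytics: turn the present state of the world
    into data, i.e., a virtual double."""
    return dict(world)

def predict(data: dict, action: str) -> float:
    """Predictive analytics: simulate the effect of a possible
    intervention and project a future outcome (here: a risk score)."""
    effect = {"do_nothing": 0.0, "warn": -0.1, "restrict": -0.3}[action]
    return max(0.0, data["risk"] + effect)

def prescribe(data: dict, actions: list[str]) -> str:
    """Prescriptive (and here also preventive) analytics: choose the
    intervention whose projected future is 'best' -- lowest risk in
    this toy model."""
    return min(actions, key=lambda a: predict(data, a))

world = {"risk": 0.4}
double = sense(world)
action = prescribe(double, ["do_nothing", "warn", "restrict"])
print(action)  # -> "restrict": the present is steered by a projected future
```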

Of course, one could say that it was always the case that free agents, that is, human beings, were able to some extent to calculate the effects of their actions and thus to foresee what would happen in the future. This is the basis of morality. Moral choices are precisely those that take account of the consequences of actions, and moral judgements hold actors accountable for doing this, and for doing it in the right way. This is why the programmer of the self-driving car is held accountable for what the car does, and not the car itself. The temporal scheme of a certain past, a free present, and an open future seems to hold true despite the supposed ability of datafication to know the future and determine the present. It could be argued that it is this temporal scheme that makes moral judgements meaningful and possible in the first place; without it, one couldn’t speak of a moral machine at all. Simply shifting responsibility from the driver onto the programmer doesn’t really change anything. There is nothing new or special about this situation.

If, however, the machine – whether it be a self-driving car, an autonomous factory, a self-regulating transportation system, or whatever – were not directly programmed by a human actor but programmed itself, as some machine learning scenarios foresee, then freedom of action could not be located in any actor who could be held accountable. In the case of the self-driving car, neither the driver, nor the programmer (since both the algorithm and the data are generated by the system itself), nor the producer of the car, nor the road builders, nor whoever else one might try to pin it on, would be able or willing to stand up and take responsibility. All actors in the network are determined. The entire system or network becomes the point of reference. The only question remaining to be answered is whether the system as a whole is good or bad, or in other words, which is the best of all possible worlds. This is the well-known problem of theodicy, or the question of the justice of God. The problem is the following: If God is good, then He cannot be almighty, that is, really God, since the world He created is obviously not good. Or, if God is almighty, then He cannot be good, because a good God could not have created such a bad world. Therefore, God is either good but not almighty, or almighty but not good, and in both cases not deserving of the name God. The traditional solution was to deny that the world was as bad as it seems and to assume that, despite its shortcomings, this is the best of all possible worlds. And to top it off, Leibniz argued that God has His own standards of goodness that we humans cannot understand.

Let us assume that, at the point of the singularity if not earlier, a super-intelligent AI will indeed be able to calculate most possible future states and will also be able to decide which of these states should be realized. Let us further suppose that this machine would have the ability to intervene in the present such that these states would indeed be realized. For example: the network would know, on the basis of my genome, what diseases I will come down with and nudge me into appropriate measures to prevent their outbreak. The network would know what intellectual and emotional abilities I have and would nudge me into appropriate choices of study and work. It would know whom I am compatible with and suggest an appropriate partner. And so on. The temporal scheme of past-present-future would collapse into what would seem like a God’s-eye view of reality. This was traditionally called eternity. Classical eternity was characterized by immutability: nothing could change, since all possibilities are actualized. The possible is actual, and everything that could happen has in principle already happened. I just have to comply, and if I don’t, there would surely be a price to pay. Since nothing can really change, everything is also necessary. In this world, there would no longer be free agents who could be held morally accountable for the consequences of their actions. Morality would disappear. Freedom would disappear. We would be left with the question of whether we are living in the best of all possible machines. There would surely be those who, like Leibniz, would experience this life as worth living and could thus be called optimists. What happens to the pessimists is an open question.
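
To caricature this collapsed temporal scheme in code (again, all names and scores are hypothetical inventions for illustration): every possible future has already been enumerated and ranked, and the agent’s remaining “choice” is reduced to compliance.

```python
# A caricature of eternity as the text describes it: the system has
# already enumerated and ranked every possible future; the agent's
# role is reduced to compliance. All names and scores are hypothetical.

POSSIBLE_LIVES = {
    "study medicine": 0.81,    # projected "life quality" scores,
    "study philosophy": 0.64,  # precomputed from genome, test data, etc.
    "study art": 0.55,
}

def nudge(agent_choice: str) -> str:
    """The future is no longer open: deviation is corrected."""
    best = max(POSSIBLE_LIVES, key=POSSIBLE_LIVES.get)
    if agent_choice != best:
        # "if I don't comply, there would surely be a price to pay"
        print(f"Nudge: '{agent_choice}' rejected, rerouting to '{best}'.")
    return best

chosen = nudge("study art")  # whatever I choose, the outcome is fixed
```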
