
Reframing the Debate on AI – On Models and Machines

The debate about the impact of AI, above all the new generative Large Language Models (LLMs), has been raging in the media, professional circles, politics, and the general public ever since OpenAI released ChatGPT in 2022. Some see AI as a threat to society and even to humanity. In contrast, others praise AI as the future technology enabling greater productivity, greater efficiency, unbiased and better-informed decision-making, and personalized products and services in the consumer sector, healthcare, and education. The media and civil society watchdogs have sensationalized this debate and created public pressure on regulators and governments to put legal safeguards in place, such as the EU's proposed AI Act. In all these discussions and initiatives, much depends on how the debate is framed, that is, on what larger narrative framework is explicitly or implicitly called upon to make sense of what is being talked about. For the most part, the framing of the debate about AI has relied upon the well-known stories of humans vs. machines and a supposed competition between humans and machines in which both struggle to control and instrumentalize each other. In this paper, we argue that these typical framings of the debate are misleading, inapplicable, and prone to generating fruitless conflict. We suggest a different framing for the discussion based on other concepts, above all, on the difference between models and machines and on the difference between being smart and being dumb.

Concepts of models and machines are used in many ways in various fields. However, although there is a common understanding of what a machine is, the idea of the model has not yet entered mainstream discussions of what technology is and what its place in society should be. The dominant narrative that frames discussions of AI is that of humans vs. machines and of competition between humans and machines, with the accompanying question of who instrumentalizes whom. Is it the machines that will take over and instrumentalize humans, or can humans somehow maintain control over a potentially superhuman AI? Typical concerns are whether AIs can be aligned with human values such as privacy, fairness, safety, autonomy, human dignity, accountability, and transparency. It is supposed that machines and humans are fundamentally opposed to each other and that the foreseeable impact of AI on society will endanger human flourishing. These fears are based on a long and omnipresent tradition of literature and film, from Frankenstein to Terminator, The Matrix, and HAL 9000 in Kubrick's 2001: A Space Odyssey. It should be noted that these memes are predominantly Western, whereas other cultures and societies have their own memes with which to frame discussions about technology and society. Nevertheless, as soon as this well-known narrative is invoked, the stage is set for conflict. In this paper, we propose a new framing for the debate about AI by discussing the differences between models and machines. We will argue that AI is not a machine but a model, and we will plead for reframing the AI discussion, moving away from the typical story of humans vs. machines toward a different story based on the alternatives of being either smart or dumb.

Surprisingly, practically none of AI's proponents, developers, and researchers speak of AI as a machine. A machine is a deterministic system in which the input completely determines the output. The machine has no random states; it is not autonomous and is, therefore, wholly predictable. AI is not a machine but a model. In everyday usage, a model can be two different things. There are "models-of" something and "models-for" something. A model-of something can be a representation of a system, process, or phenomenon that helps us understand, analyze, or predict its behavior. Models can take various forms, such as physical, mathematical, or logical representations. When speaking of models-of, one thinks of a model airplane, a model automobile, a fashion model, etc. Models of this kind are copies of an original or an ideal that already exists. These models can be used for many different purposes. In science and engineering, they simplify complex systems, identify relationships between variables, and make predictions based on the system's underlying structure. There are also models of machines, but even in this first usage of the term, the model is not the machine but merely a representation of the machine. That models are not machines becomes even more apparent when considering the second meaning of the term. The second meaning of a model is a "model-for" something. A model-for is a kind of blueprint according to which something should be constructed. It is not a representation but a presentation of what does not yet exist. An example could be an architect's model for a building that is in planning. The building does not exist except as a model. Only after the building has been built according to the model does the model become a representation, a model-of the building.

The concepts of models-of and models-for arose within anthropology, particularly in the work of the French anthropologist Claude Lévi-Strauss, who used these concepts to understand the underlying structures of human societies and cultures. "Models-for" are the self-interpretations of existing structures or forms of life in a society or culture. People understand themselves and their society in terms of such models. Such a model is not a representation of who they are and how their world is organized but a kind of, often mythological or ideological, blueprint for how they think they should be. Models-for, in this context, are normative. They prescribe how people should understand themselves and how they should act and think. They are not meant to describe reality as it is but as it should be. When an anthropologist goes into the field and asks their informants what they are doing and why, the answer is not an objective, value-free description but a model-for that society. Models-for are prescriptive models that serve as guidelines or blueprints for action within a society or culture. They provide a framework for understanding how things should be organized or how people should behave in a particular situation, whether hunting, cultivating, performing religious ceremonies, building houses, eating, regulating mutual affairs, settling disputes, etc.

On the other hand, the anthropologist constructs models-of what people in a society actually do and how they relate to each other, regardless of how the people themselves understand what is going on. The anthropologist's model-of a society is intended to explain the culture as it really is and not as the people of that society see it. Models-of are not prescriptive but purely descriptive. For Lévi-Strauss, who was influenced by the linguist Ferdinand de Saussure, the model-of was a description of a society's underlying structures. Lévi-Strauss writes, "The anthropologist's task is to construct models of social phenomena, models which are simpler and more intelligible than the phenomena themselves, but which nevertheless retain their essential features." (Structural Anthropology, p. 27)

In the context of machine learning and AI, a model is the output of an algorithm that processes data. It is not a representation of the data but a presentation of what has been learned from discovering statistical regularities in the data. The AI model serves to make predictions. Therefore, the model that AI people talk about is not a model-of the world, language, or images but a model-for generating language, images, or sounds. Generative models are prescriptive or normative; they are not models of anything. And, of course, they are not machines. They generate an output that is not a mere copy of what already exists but something that could be useful or meaningful for a specific purpose. The generative and, thus, prescriptive capabilities of LLMs imply that AI models can also become "agents" or even "autonomous" agents. The generative capacities of models can be linked to "tools," such as APIs, that allow them to do things in the world. For example, they can be used to create tutoring systems for personalized learning in educational contexts, create texts, images, and audio, assist in medical diagnosis and therapy, or autonomously take over business processes such as customer support and decision-making. Equipped with memory, planning, tools, and execution capabilities, AIs are autonomous agents that can independently interact with the world and learn from their actions. It should be evident that we are no longer talking about machines but about something very similar to humans.
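The structure of such an "agent" can be made concrete with a minimal sketch in Python. Everything in it is hypothetical and purely illustrative: no real LLM or API is called, and a trivial stand-in function plays the role of the generative model. What it shows is only the loop that turns a model into an agent: the model proposes an action, a tool executes it, and the result is written back into memory.

```python
# A toy agent loop. All names are hypothetical; toy_model() is a stand-in
# for a generative model, not a real LLM call.

def toy_model(memory):
    # Stand-in for a generative model: maps the interaction history
    # to a proposed (action, argument) pair.
    if "weather?" in memory[-1]:
        return ("use_tool", "weather")
    return ("answer", memory[-1])

TOOLS = {"weather": lambda q: f"looked up: {q} is sunny"}  # what the agent can do

def run_agent(task, max_steps=5):
    memory = [task]                     # memory: the history of the interaction
    for _ in range(max_steps):          # the planning/execution loop
        action, arg = toy_model(memory)
        if action == "answer":
            return arg                  # the model decides it is done
        memory.append(TOOLS[arg](arg))  # execute the tool, remember the result
    return "gave up"

print(run_agent("weather?"))  # -> "looked up: weather is sunny"
```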

While there may not be a large body of literature specifically dedicated to the difference between models and machines, some authors have touched upon the topic. In the context of machine learning, for example, the difference between algorithms and models is commonly said to be that machine learning algorithms are procedures that run on datasets to recognize patterns and rules, while machine learning models are the output of these algorithms: programs that can make predictions and, as stated above, even act based on the learned patterns and rules. These models are clearly not models-of the world but models-for how one should understand a situation and act appropriately. They are prescriptive and not simply descriptive. For example, what do you do when you are driving down the freeway and your navigation system tells you there is a traffic jam ahead and you should turn off at the next exit? What would you do if your doctor told you that you were fine, but an AI said you had cancer and needed an operation immediately? AI models are, therefore, like the self-understanding of a society in that they offer normative suggestions aimed at solving problems. The difference from what the anthropologist considers a model-for is that the AI has all the information and knows much more about how the world is than any human. The AI model has absorbed all the models-of into one model-for. It makes predictions based on information and evidence and not on religion, ideology, or worldviews, upon which humans depend since they lack the information.
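The algorithm/model distinction can be stated in a few lines of code. The following is a minimal, purely illustrative Python sketch using toy numbers and ordinary least squares as the learning procedure (our choice of example, not anything the literature above commits to): the algorithm is the procedure that runs over a dataset, and the model is what the procedure returns, a program that makes predictions.

```python
# A minimal sketch: the *algorithm* is the training procedure; the *model*
# is its output, a prediction function. Data values are invented toy numbers.

def train(data):  # the algorithm: runs over a dataset, extracts a regularity
    xs, ys = zip(*data)
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    # the model: a program that predicts from the learned pattern
    return lambda x: slope * x + intercept

model = train([(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)])  # run the algorithm once
print(model(5))  # use the resulting model to predict an unseen case
```

The model, once trained, is used without rerunning the algorithm; in the terms of this paper, it is a model-for making the next prediction, not a model-of the dataset.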

Not only is AI not a machine, but models are increasingly replacing machines. The reason is that models are more flexible, adaptable, and efficient in solving complex problems and making predictions based on data. Models that can learn from data gathered in interactions with the outside world and make decisions on their own are gradually replacing machines, which has led to more informed and effective solutions to many problems in all facets of society. We are witnessing a time in which the traditional roles of machines are being redefined as models take center stage. Models have demonstrated remarkable capabilities and promise to increase productivity and efficiency. Unlike machines designed for specific tasks and functions, models are generalized problem-solvers and can be easily updated and adapted to new situations and applications. Indeed, AI development is moving quickly in the direction of AGI (Artificial General Intelligence), capable of carrying out many different kinds of tasks in many different media. This allows AIs to continuously improve and evolve, making them invaluable resources for businesses, research, healthcare, education, and all areas of human endeavor.

Another reason for the growing reliance on models is their ability to handle large amounts of data and make sense of complex relationships. In today's data-driven world, the ability to process and analyze vast amounts of information is crucial for making informed decisions and solving problems. A data-driven society is one in which decisions on all levels and in all areas are made on the basis of evidence and not on intuition, gut feeling, or position in a hierarchy. Models, particularly those based on machine learning algorithms, are well-suited for this task, as they learn from enormous amounts of information and can, therefore, identify patterns and relationships that may not be apparent to humans with limited information-processing abilities. Moreover, models can be more cost-effective than machines. By relying on models to perform tasks and make decisions, organizations can relieve humans of routine work and reduce the costs of legacy systems and the time and resources required to maintain and update them.

Furthermore, the increasing reliance on AI models transforms how machines are designed and used. By replacing traditional machines with models, we can create more intelligent, adaptable, and efficient systems better equipped to handle the complex challenges of the global network society. As models continue to evolve and improve, we can expect to see even more significant advancements in AI and machine learning, leading to a future where humans and models work together seamlessly to solve problems and make our lives better. The fact that humans and models work together rather than against each other is grounded in the fact that both are “intelligent.” The machine, as opposed to the model, cannot be intelligent. It is of an entirely different nature than humans. This cannot be said of models.

One could argue that human intelligence is also a form of model building, but it is based on a biological substrate, the brain, whereas AIs build models on a silicon substrate. Humans and AIs are similar because they create models, but different because they build them on different substrates. The models themselves are very similar, which is why we speak of "artificial intelligence." In Western culture, an age-old tradition opposes our biological substrate to the models we build. From antiquity through modernity, there is a tradition of opposing the mind and the body. If we were having the AI debate in the Middle Ages, we would most certainly be confronted with a struggle between the desires and impulses of the body and the soul's striving for salvation, with all the assurances of the Church that God was on our side. In the modern period, this age-old antagonism has been transferred to the struggle between humans and machines. The machine, which is material, has taken over the role of the body, whereas the soul is now thought of as "intelligence." Today's struggle is framed as a struggle of human intelligence trying to maintain control of the machines. And, of course, we are no longer confident that God is on our side. Even today, the specter of the conflict between soulless machines and the human spirit looms over the AI debate.

However, what we are confident of, or at least should be, is that AIs are not placeholders for the body or any kind of material entity. AIs, just like humans, are intelligent. There is no fundamental and irreconcilable antagonism between human intelligence and AIs. After all, we are both in the business of constructing models. This fact offers a basis for reframing the debate on AI. No longer must we assume any fundamental antagonism, as in the old story of humans vs. machines. On the contrary, we can tell a new story of how these two intelligences share common interests and can work together to achieve the goals all intelligence strives for. What are the goals of intelligence? In answer to this question, we introduce the idea of being "smart." We assert that both humans and AIs want to make things smart. We both want the world around us to become more intelligent, meaningful, and connected.

This is not a new idea. When Steve Jobs went on stage at the 2007 Macworld Conference and proudly showed the world the first iPhone, he introduced the idea that technology is about being smart. In the wake of the smartphone, we now have smart watches, smart automobiles, smart clothes, smart homes, smart cities, smart factories, etc. There is nothing that cannot and should not become smart, just as humans are smart. There is no antagonism, at least none that anyone who uses a smartphone could claim, between being smart and being human. It is interesting and thought-provoking that no one objects to things becoming smart. Indeed, machines and everything around us have been becoming smart for a long time. Why is this not a problem for all those afraid of AI? Why do people embrace the smartphone with enthusiasm but reject AI? The reason may be that nobody wants to be dumb, which is the opposite of smart. One cannot reasonably want to be dumb, just as one cannot reasonably go back to using the Nokia mobile phones that the iPhone replaced. For example, we cannot reasonably refuse to make our homes, factories, and cities smart and still claim we are trying to prevent global warming. When it comes to being smart, humans and AIs are on the same side and trying to reach the same goal.

We should consider reframing the AI debate in terms of the opposition between being smart and being dumb. Framing the debate this way from the beginning sets the stage for a constructive discussion about achieving common goals. For example, suppose a company announces to its employees that it is introducing a smart HR system. The system has many valuable features, such as greater efficiency that reduces costs, recruiting of employees who are better suited for the jobs the company offers, and better monitoring of employee performance so that rewards and opportunities for training and promotion can be distributed more fairly and more widely. What objections could the employees have? And if they did have objections, since they don't want to be dumb, they would be led into a strategy of proposing changes to the system that would make it even smarter. The potentially conflictual situation becomes a constructive discussion about how best to move forward into a smart future.

Let us suppose now that the company falls into the traditional framing of human vs. machine, which is what almost inevitably happens today. The company announces that it is introducing an AI to handle recruiting and human development. The idea of a machine making decisions about who gets a job, who gets rewarded or promoted, and who gets offered training opportunities would raise many fears. All the usual objections would immediately come to the fore. The AI would be biased and discriminate against particular persons or groups, compromise privacy, rob humans of their autonomy and dignity, and have no transparency about how decisions are made; thus, there would be no accountability. The company would have practically no chance to answer these objections since they are deeply embedded in the human vs. machine frame that largely determines the discussion. The only way out is to avoid the human vs. machine frame and reframe the debate in terms of smart vs. dumb.

Reframing is not easy. It is difficult because these memes are deeply embedded in Western culture and are everywhere, conditioning how we think and feel about technology. It seems almost impossible to break out of the frame, the age-old story of the inevitable antagonism between humans and machines. How could we begin to doubt the truth of this tale? One answer is to stop talking about AIs as though they were machines. Another answer is to take the smartphone out of your pocket and decide whether you are afraid of it or love it. Suppose you love it and would not give it up for anything, not even your privacy, your autonomy, your need for transparency and accountability, your concerns about social fairness and safety, and all the things you fear about AI. In that case, you might want to start thinking about how to become even smarter instead of fighting on the side of those humans who misguidedly fear the machine.



New Memes for a New World

Memes are cultural DNA, that is, the elements of cultural code that generate the world that characterizes a particular culture, a particular time, a particular civilization. They are the basic ideas informing a world view, articulating the values and norms that people accept as true. Memes are the design elements of a culture.

Listed below are some of the most important memes of the global network society, a society that is now emerging from the digital transformation that characterizes our world. These are new memes for a new world.

1. Information: One of the most important memes of the global network society is the idea that the world consists of information and not of things. Information is a relation and a process and not a substance, an individual entity, a bounded individual. A world of information is a world of relations and not of things.

2. Networking: Because information is relational, it exists in networks. But networks are not things. Otherwise, we would simply have collective things instead of individual things, similarly to the way we talk about organizations instead of individuals. Networks are neither organizations nor individuals. They are neither things nor collections or compositions of things. Networks are processes of making relations, associations, connections. One should speak of networking as a verb instead of network as a noun. Networks are not bounded systems operating to maintain their structures. They are dynamic, changing, and flexible. Human beings, as well as everything else in the world, are informational processes and therefore exist as networks, that is, as ongoing, historical processes of networking. Systems are becoming networks.

3. Emergent Order: Information (and networking) is a level of emergent order above the levels of matter and life. Just as life emerged from matter, so information emerged from life. And just as life is neither reducible to matter nor can it be derived from it, so information is neither reducible to life, nor can it be derived from it. Information is therefore not cognition in the brain or a mental state. The brain does not use information. The brain is an organ of the body that is used by information. Information is a form of being in its own right and of its own kind.

4. Integration: The physical and biological substrates are integrated into information. This is the principle of integration, which states that higher levels of emergent order integrate lower levels, that is, they are more complex and variable than lower levels. This implies that with the emergence of information, matter and life have become informational processes. Just as life can do things with matter that matter could not do on its own, so can information do things with matter and life that they cannot do on their own. The emergent nature of information and the consequent integration of matter and life are why science and technology are possible.

5. Common Good: Information is a common good, a common pool resource, which implies neither that it cannot be monetized nor that it cannot be administratively regulated. It is regulated and monetized as a common pool resource within governance frameworks that are certified and audited by government. Since information is not a bounded entity, a thing, it cannot become private property. Western industrial society is based on the belief in individuals who own property.

6. Global Network Society: Society is no longer Western industrial society, but a global network society. Nation states will be replaced by global networks. Individuals and organizations are becoming networks that are not territorially defined. Society is not a group of individuals, but a network of networks. There is nothing outside of society. Nature is part of society. The integration of matter and life into information makes society all-encompassing. The world is society.

7. Governance: Society is most effectively regulated by governance instead of government. Governance is self-organization, or non-hierarchical organization. In the global network society, hierarchies are inefficient and illegitimate. Decisions are made on the basis of information and not on the basis of a position in a hierarchy.

8. Design: Governance is by design, which means it is constructed by design processes guided by the network norms that generate social order. Design means that networking can be done in a good or a bad way. The good ways of networking can be described as network norms.

9. Network Norms: The network norms are: connectivity, flow, communication, participation, transparency, authenticity, and flexibility. These are the values and norms of the global network society.

10. Computation, Computationalism, Computational Paradigm: Information is not to be equated with digital information that can be processed electronically by computers. The computer should not be used as a metaphor for understanding either the brain or society. The brain is not a computer. Society is not a computer. A computer is a computer, and nothing else. Digital information, or electronic information processing, is a derivative form of information that arose late in the history of society and is dependent upon and embedded in many non-digital networks that have developed over thousands of years. Nonetheless, if computation is understood very generally as the iterative application of simple rules to information, out of which more complex forms of information arise, then networking in all its forms can be considered computation. This general definition of computation is independent of the computer and can therefore be used as a definition of networking. Intelligence is networking. Artificial intelligence is electronic information processing.
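This general notion of computation, iterated simple rules generating complexity, can be illustrated in a few lines of Python. The example below uses an elementary cellular automaton (Rule 110) purely as a stand-in of our own choosing; nothing in the text commits to this particular rule.

```python
# A minimal sketch: complex patterns emerge from iterating one simple rule.
# Rule 110 is an elementary cellular automaton; the rule number encodes the
# next state of a cell for each of the 8 possible three-cell neighborhoods.

RULE = 110  # the "simple rule"

def step(cells):
    # apply the rule to every position at once (wrapping at the edges)
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40  # start from a single "bit" of information
for _ in range(20):                # iterate: structure emerges from repetition
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```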


Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither beginning nor end but will be seamlessly integrated into our daily lives, which could mean that in the future we will no longer speak of "artificial" intelligence at all, but only of "smart" or "dumb." We and everything around us, for example, our houses, our cars, and our cities, will be either smart or dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being "smart" is about the optimal use of information. Information, and not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while in relation to oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, i.e., as with all technologies, one must ask whether one can be truly independent and autonomous. After all, when is electricity, driving a car, making a phone call, or using a refrigerator voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term "sustainability." When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can be destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility, i.e., because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a "culprit" when something goes wrong. Since ethics, morals, and the law are only called onto the scene and can only intervene when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor and not any single actor in the network. Actors, both human and non-human, can only "do" things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be a global regulation on AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from household to nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must be able to operate as governance (bottom-up and distributed) rather than as government, i.e., hierarchically. To develop and implement these new forms of governance is a political task, but it is not only political. It is also ethical. For as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that should guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all stakeholders in a network can be responsible for setting up regulatory mechanisms and also for control. One could imagine a governance framework developed bottom-up in which, in addition to internal controls, an external audit would monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there can indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society from young to old be sensitized and educated?

A: At the moment, there is much discussion of either "critical thinking" or "media literacy" in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means that one is of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and it also means being aware that this is done not according to criteria of truth-seeking but according to the criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know a lot more about how to search for information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to defer the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is understood and implemented is unclear. Since the school system has largely shown itself to be resistant to criticism in terms of pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to be capable of doing the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is or could be effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.


The Moral Machine

In 2016, a group of scientists at MIT created an online platform to gather information about how people would decide what the outcomes of the actions of autonomous, automated systems "ought" to be. Although there were different scenarios, the most famous is the self-driving car that, in the face of an imminent accident, had to "decide" who should be run over and killed and who should be spared. Since it was a matter of making decisions about what ought to be done in a case that led to harm, this was called a "moral" machine. The "machine" part comes from the fact that the automated system was to be programmed in advance with which choice to make; that is, the choice was no longer "free," as it would be when a human driver made the decision, but was determined by the programmer, who then bore the moral responsibility.
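The point about programming the choice in advance can be made concrete with a short Python fragment. The rule and the names below are entirely hypothetical, one possible policy among many; the point is only that whatever rule is written here, the "decision" at the moment of the accident has already been made, and made by the programmer.

```python
# A hypothetical pre-programmed policy, purely illustrative. The function body
# is where the programmer, in advance, fixes the "moral" choice.

def choose_outcome(pedestrians, passengers):
    # one possible rule of many: minimize the number of people harmed
    if len(pedestrians) > len(passengers):
        return "swerve"  # sacrifice the passengers
    return "stay"        # sacrifice the pedestrians

print(choose_outcome(pedestrians=["a", "b", "c"], passengers=["d"]))  # -> "swerve"
```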

More interesting than the results of this experiment (see http://moralmachine.mit.edu/) are the assumptions it makes. One important assumption is that there are no accidents, that is, the fact that someone will be killed in an "accident" is not accidental but a determined outcome of programming. Not just anything could happen, but only certain things could happen, and among these the choice was to be made in advance so that what does happen, happens "mechanically." The second important assumption is that the future is no longer open and the present no longer free. Usually, we assume that the past is certain, the present is free, that is, we can decide in the present moment what to do, and the future, that is, the consequences of our actions, is open. We don't know what the future brings. The future is contingent. This age-old temporal scheme is placed in question by the moral machine. The idea is that data analytics is able to know what will happen in the future, and on the basis of this knowledge, interventions in the present can be made that will influence, indeed determine, which future options will be realized. This is called datafication. Datafication is 1) the process by which all present states of the world are turned into data, thereby creating a virtual double of reality, and 2) the subjecting of this data to descriptive, predictive, preventive, and prescriptive analytics, so that the effects of all possible variables can be simulated and, on the basis of data-based projections of what will happen, interventions in the present can be made to influence future outcomes. Datafication is the basis of intelligent, autonomous, automated systems such as self-driving cars, but also of personalized medicine, learning analytics in education, business intelligence in the private sector, and much more. This is what makes the moral machine interesting. It is a parable of the digital age and poses central questions about what it means to live in a datafied world.
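The two steps of datafication can likewise be sketched in a few lines. The numbers and names below are toy assumptions of our own; the point is only the structure: the present state is doubled as data, a projection of the future is computed from it, and an intervention in the present is prescribed to change that future.

```python
# A toy sketch of datafication. All values are invented.

# step 1: the "virtual double" of reality, present states turned into data
world = {"speed_kmh": 120, "distance_m": 40}

# step 2a: predictive analytics, simulate what happens if nothing changes
def time_to_impact(state):
    return state["distance_m"] / (state["speed_kmh"] / 3.6)  # seconds

# step 2b: prescriptive analytics, intervene in the present to steer the future
if time_to_impact(world) < 2.0:
    world["speed_kmh"] = 60  # the intervention: brake now
    print("intervened; new time to impact:",
          round(time_to_impact(world), 2), "s")
```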
