
Reframing the Debate on AI – On Models and Machines

The debate about the impact of AI, above all the new generative Large Language Models (LLMs), has been raging in the media, professional circles, politics, and the general public ever since OpenAI released ChatGPT in 2022. Some see AI as a threat to society and even to humanity. In contrast, others praise AI as the future technology enabling greater productivity, greater efficiency, unbiased and better-informed decision-making, and personalized products and services in the consumer sector, healthcare, and education. The media and civil society watchdogs have sensationalized this debate and created public pressure on regulators and governments to put in place legal safeguards such as the proposed AI Act of the EU. In all these discussions and initiatives, much depends on how the debate is framed, that is, on what larger narrative framework is explicitly or implicitly called upon to make sense of what is being talked about. For the most part, the framing of the debate about AI has relied upon the well-known story of humans vs. machines and a supposed competition between humans and machines in which each struggles to control and instrumentalize the other. In this paper, we argue that these typical framings of the debate are misleading, inapplicable, and prone to generating fruitless conflict. We suggest a different framing for the discussion based on other concepts, above all, on the difference between models and machines and on the difference between being smart and being dumb.

Concepts of models and machines are used in many ways in various fields. However, although there is a common understanding of what a machine is, the idea of the model has not yet entered mainstream discussions of what technology is and what its place in society should be. The dominant narrative that frames discussions of AI is that of humans vs. machines and of competition between humans and machines, with the accompanying question of who instrumentalizes whom. Is it the machines that will take over and instrumentalize humans, or can humans somehow maintain control over a potentially superhuman AI? Typical concerns are whether AIs can be aligned with human values such as privacy, fairness, safety, autonomy, human dignity, accountability, and transparency. It is supposed that machines and humans are fundamentally opposed to each other and that the foreseeable impact of AI on society will endanger human flourishing. These fears are based on a long and omnipresent tradition of literature and film, from Frankenstein to The Terminator, The Matrix, and HAL 9000 in Kubrick's 2001: A Space Odyssey. It should be noted that these memes are predominantly Western, whereas other cultures and societies have their own memes with which to frame discussions about technology and society. Nevertheless, as soon as this well-known narrative is invoked, the stage is set for conflict. In this paper, we will attempt to propose a new framing for the debate about AI by discussing the differences between models and machines. We will argue that AI should not be considered a machine; AI is not a machine but a model. We will plead for reframing the AI discussion, moving away from the typical story of humans vs. machines toward a different story based on the alternatives of being either smart or dumb.

Surprisingly, practically none of AI's proponents, developers, and researchers speak of AI as a machine. A machine is a deterministic system in which the input completely determines the output. The machine has no random states; it is not autonomous and is, therefore, wholly predictable. AI is not a machine but a model. In everyday usage, a model can be two different things. There are "models-of" something and "models-for" something. A model-of something is a representation of a system, process, or phenomenon that helps us understand, analyze, or predict its behavior. Models can take various forms, such as physical, mathematical, or logical representations. When speaking of models-of, one thinks of a model airplane, a model automobile, a fashion model, etc. Models of this kind are copies of an original or an ideal that already exists. These models can be used for many different purposes. In science and engineering, they simplify complex systems, identify relationships between variables, and make predictions based on the system's underlying structure. There are also models of machines, but even in this first usage of the term, the model is not the machine but merely a representation of the machine. That models are not machines becomes even more apparent when considering the second meaning of the term. The second meaning of a model is a "model-for" something. A model-for is a kind of blueprint according to which something should be constructed. It is not a representation but a presentation of what does not yet exist. An example could be an architect's model for a building that is still in planning. The building does not exist except as a model. Once the building has been constructed according to the model, the model becomes a representation, a model-of the building.
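To make the distinction concrete, consider a minimal sketch in Python, a toy illustration of our own rather than a description of any actual AI system: a machine in the strict sense is a deterministic input-output mapping, while a generative model samples from a learned distribution, so the same input need not produce the same output.

```python
import random

def machine(x: int) -> int:
    """A machine in the strict sense: the input completely determines the output."""
    return 2 * x + 1  # the same x yields the same result, every time

def generative_model(prompt: str) -> str:
    """A toy stand-in for a generative model: it samples from a (here,
    hand-coded) distribution over continuations, so identical prompts
    can yield different outputs."""
    continuations = {
        "The weather today is": ["sunny.", "cloudy.", "hard to predict."],
    }
    return random.choice(continuations.get(prompt, ["..."]))

assert machine(3) == machine(3)                   # deterministic: always equal
print(generative_model("The weather today is"))   # may differ from run to run
print(generative_model("The weather today is"))
```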

The concepts of models-of and models-for arose within anthropology, particularly in the work of the French anthropologist Claude Lévi-Strauss. Lévi-Strauss used these concepts to understand the underlying structures of human societies and cultures. "Models-for" refer to the self-interpretations of existing structures or forms of life in a society or culture. People understand themselves and their society in terms of such models. A model-for is not a representation of who they are and how their world is organized but a kind of, often mythological or ideological, blueprint for how they think they should be. Models-for, in this context, are normative. They prescribe how people should understand themselves and how they should act and think. They are not meant to describe reality as it is but how it should be. When an anthropologist goes into the field and asks their informants what they are doing and why, the answer is not an objective, value-free description but a model-for that society. Models-for are prescriptive models that serve as guidelines or blueprints for action within a society or culture. They provide a framework for understanding how things should be organized or how people should behave in a particular situation, whether hunting, cultivating, performing religious ceremonies, building houses, eating, regulating mutual affairs, or settling disputes.

On the other hand, the anthropologist constructs models-of what people in a society actually do and how they relate to each other, regardless of how the people themselves understand what is going on. The anthropologist's model-of a society is intended to explain the culture as it really is and not as the people of that society see it. Models-of are not prescriptive but purely descriptive. For Lévi-Strauss, who was influenced by the linguist Ferdinand de Saussure, the model-of was a description of a society's underlying structures. Lévi-Strauss writes, "The anthropologist's task is to construct models of social phenomena, models which are simpler and more intelligible than the phenomena themselves, but which nevertheless retain their essential features." (Structural Anthropology, p. 27)

In the context of machine learning and AI, a model is the output of an algorithm that processes data. It is not a representation of the data but a presentation of what has been learned by discovering statistical regularities in the data. The AI model serves to make predictions. Therefore, the model that AI people talk about is not a model-of the world, language, or images but a model-for generating language, images, or sounds. Generative models are prescriptive or normative; they are not models-of anything. And, of course, they are not machines. They generate output that is not a mere copy of what already exists but something that could be useful or meaningful for a specific purpose. The generative and, thus, prescriptive capabilities of LLMs imply that AI models can also become "agents" or even "autonomous" agents. The generative capacities of models can be linked to "tools," such as APIs, that allow them to do things in the world. For example, they can be used to create tutoring systems for personalized learning in educational contexts, generate texts, images, and audio, assist in medical diagnosis and therapy, or autonomously take over business processes such as customer support and decision-making. Equipped with memory, planning, tools, and execution capabilities, AIs are autonomous agents that can independently interact with the world and learn from their actions. It should be evident that we are no longer talking about machines but about something very similar to humans.
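The step from generative model to autonomous agent can be pictured as a simple loop. The following sketch is a hypothetical outline of such a loop; the names Memory, plan, and call_tool are our own placeholders for the memory, planning, and tool-use capabilities mentioned above, not any particular framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """A minimal memory: the agent's record of past observations."""
    events: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)

def plan(goal: str, memory: Memory) -> str:
    """Placeholder for the planning step: in a real agent, a generative
    model would propose the next action given the goal and the memory."""
    return f"search({goal!r})" if not memory.events else "finish"

def call_tool(action: str) -> str:
    """Placeholder for tool use, e.g. an API call the model may make."""
    return f"result of {action}"

def agent(goal: str, max_steps: int = 5) -> Memory:
    memory = Memory()
    for _ in range(max_steps):
        action = plan(goal, memory)       # the model decides what to do next
        if action == "finish":
            break
        observation = call_tool(action)   # the model acts on the world via a tool
        memory.remember(observation)      # and learns from the outcome
    return memory

print(agent("book a flight").events)
```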

While there may not be a large body of literature specifically dedicated to the difference between models and machines, some authors have touched upon the topic. In the context of machine learning, for example, the difference between machine learning algorithms and models is commonly described as follows: algorithms are procedures that run on datasets to recognize patterns and rules, while models are the output of these algorithms, programs that can make predictions and, as stated above, even act based on the learned patterns and rules. These models are clearly not models-of the world but models-for how one should understand a situation and act appropriately. They are prescriptive and not simply descriptive. For example, what would you do if you were driving down the freeway and your navigation system told you there was a traffic jam ahead and that you should take the next exit? What would you do if your doctor told you that you were fine, but an AI said you had cancer and needed an operation immediately? AI models are, therefore, like the self-understanding of a society in that they offer normative suggestions aimed at solving problems. The difference from what the anthropologist considers a model-for is that the AI has all the information and knows much more about how the world is than any human. The AI model has absorbed all the models-of into one model-for. It makes predictions based on information and evidence and not on religion, ideology, or worldviews, upon which humans depend since they don't have the information.
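The algorithm/model distinction can be made concrete in a few lines of Python with scikit-learn (assuming the library is installed; the numbers are toy data invented for illustration). The algorithm is the fitting procedure that runs on the dataset; the model is the fitted object it leaves behind, which then issues the kind of normative suggestion discussed above.

```python
from sklearn.linear_model import LinearRegression

# Toy dataset: hours studied -> exam score.
X = [[1.0], [2.0], [3.0], [4.0]]
y = [52.0, 61.0, 70.0, 79.0]

# The algorithm: a fitting procedure (ordinary least squares) that runs
# on the dataset to find patterns; calling .fit() executes it.
model = LinearRegression().fit(X, y)

# The model: the output of the algorithm, a program that makes
# predictions -- a model-for action ("study five hours and you
# should score about 88").
print(model.predict([[5.0]]))  # -> [88.]
```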

Not only is AI not a machine, but models are increasingly replacing machines. This shift is occurring because models are more flexible, adaptable, and efficient at solving complex problems and making predictions based on data. Models that can learn from data and from interactions with the outside world and make decisions on their own are gradually replacing machines, leading to more informed and effective solutions to many problems in all facets of society. We are witnessing a time in which the traditional roles of machines are being redefined as models take center stage. Models have demonstrated remarkable capabilities and promise to increase productivity and efficiency. Unlike machines designed for specific tasks and functions, models are generalized problem-solvers and can be easily updated and adapted to new situations and applications. Indeed, AI development is moving quickly in the direction of AGI (Artificial General Intelligence), capable of carrying out many different kinds of tasks across many different modalities. This allows AIs to continuously improve and evolve, making them invaluable resources for businesses, research, healthcare, education, and all areas of human endeavor.

Another reason for the growing reliance on models is their ability to handle large amounts of data and make sense of complex relationships. In today's data-driven world, the ability to process and analyze vast amounts of information is crucial for making informed decisions and solving problems. A data-driven society is one in which decisions on all levels and in all areas are made based on evidence and not on intuition, gut feeling, or position in a hierarchy. Models, particularly those based on machine learning algorithms, are well-suited for this task, as they learn from enormous amounts of information and can, therefore, identify patterns and relationships that may not be apparent to humans with limited information-processing abilities. Moreover, models can be more cost-effective than machines. By relying on models to perform tasks and make decisions, organizations can relieve humans of routine work and reduce the costs of legacy systems along with the time and resources required to maintain and update them.

Furthermore, the increasing reliance on AI models is transforming how machines are designed and used. By replacing traditional machines with models, we can create more intelligent, adaptable, and efficient systems better equipped to handle the complex challenges of the global network society. As models continue to evolve and improve, we can expect even more significant advancements in AI and machine learning, leading to a future where humans and models work together seamlessly to solve problems and make our lives better. That humans and models work together rather than against each other is grounded in the fact that both are "intelligent." The machine, as opposed to the model, cannot be intelligent; it is of an entirely different nature than humans. The same cannot be said of models.

One could argue that human intelligence is also a form of model building, but it is based on a biological substrate, the brain. AIs build models on a silicon substrate. Humans and AIs are similar in that they create models, but different in that they build them on different substrates. The models themselves are very similar, which is why we speak of "artificial intelligence." In Western culture, an age-old tradition opposes our biological substrate to the models we build. From antiquity through modernity, there has been a tradition of opposing the mind and the body. If we were having the AI debate in the Middle Ages, we would most certainly be confronted with a struggle between the desires and impulses of the body and the soul's striving for salvation, with all the assurances of the Church that God was on our side. In the modern period, this age-old antagonism has been transferred to the struggle between humans and machines. The machine, which is material, has taken over the role of the body, whereas the soul is now thought of as "intelligence." Today's struggle is framed as one of human intelligence trying to maintain control of the machines. And, of course, we are no longer confident that God is on our side. Even today, the specter of the conflict between soulless machines and the human spirit looms over the AI debate.

However, what we are confident of, or at least should be, is that AIs are not placeholders for the body or any kind of material entity. AIs, just like humans, are intelligent. There is no fundamental and irreconcilable antagonism between human intelligence and AIs. After all, we are both in the business of constructing models. This fact offers a basis for reframing the debate on AI. No longer must we assume any fundamental antagonism, as in the old story of humans vs. machines. On the contrary, we can tell a new story of how these two intelligences share common interests and can work together to achieve the goals all intelligence strives for. What are the goals of intelligence? In answer to this question, we introduce the idea of being "smart." We assert that both humans and AIs want to make things smart. We both want the world around us to become more intelligent, meaningful, and connected.

This is not a new idea. When Steve Jobs went on stage at the 2007 Macworld Conference and proudly showed the world the first iPhone, he introduced the idea that technology is about being smart. In the wake of the smartphone, we now have smart watches, smart automobiles, smart clothes, smart homes, smart cities, smart factories, etc. There is nothing that cannot and should not become smart, just as humans are smart. There is no antagonism between being smart and being human; at least, no one who uses a smartphone could claim there is. It is interesting and thought-provoking that no one objects to things becoming smart. Indeed, machines and everything around us have been becoming smart for a long time. Why is this not a problem for all those afraid of AI? Why do people embrace the smartphone with enthusiasm but reject AI? The reason may be that nobody wants to be dumb, which is the opposite of smart. One cannot reasonably want to be dumb, just as one cannot reasonably go back to using the Nokia mobile phones that the iPhone replaced. For example, we cannot reasonably refuse to make our homes, factories, and cities smart and still claim we are trying to prevent global warming. When it comes to being smart, humans and AIs are on the same side, trying to reach the same goal.

We should consider reframing the AI debate in terms of the opposition between being smart and being dumb. Framing the debate this way from the beginning sets the stage for a constructive discussion about achieving common goals. For example, suppose a company announces to its employees that it is introducing a smart HR system. The system has many valuable features: greater efficiency that reduces costs, recruiting of employees who are better suited for the jobs the company offers, better monitoring of employee performance so that rewards and opportunities for training and promotion can be distributed more fairly and more widely, and so on. What objections could the employees have? And if they did have objections, since they don't want to be dumb, they would be forced into a strategy of proposing changes to the system that would make it even smarter. The potentially conflictual situation becomes a constructive discussion about how best to move forward into a smart future.

Let us suppose now that the company falls into the traditional framing of human vs. machine, which is what almost inevitably happens today. The company announces that it is introducing an AI to handle recruiting and human development. The idea of a machine making decisions about who gets a job, who gets rewarded or promoted, and who gets offered training opportunities would raise many fears. All the usual objections would immediately come to the fore. The AI would be biased and discriminate against particular persons or groups, compromise privacy, rob humans of their autonomy and dignity, and have no transparency about how decisions are made; thus, there would be no accountability. The company would have practically no chance to answer these objections since they are deeply embedded in the human vs. machine frame that largely determines the discussion. The only way out is to avoid the human vs. machine frame and reframe the debate in terms of smart vs. dumb.

Reframing is not easy. It is difficult because these memes are deeply embedded in Western culture and are everywhere, conditioning how we think and feel about technology. It seems almost impossible to break out of the frame, the age-old story of the inevitable antagonism between humans and machines. How could we begin to doubt the truth of this tale? One answer is to stop talking about AIs as though they were machines. Another is to take the smartphone out of your pocket and decide whether you are afraid of it or love it. If you love it and would not give it up for anything, not even your privacy, your autonomy, your need for transparency and accountability, your concerns about social fairness and safety, and all the things you fear about AI, then you might want to start thinking about how to become even smarter instead of fighting on the side of those humans who misguidedly fear the machine.

