Reframing the Debate on AI – On Models and Machines

The debate about the impact of AI, above all of the new generative Large Language Models (LLMs), has been raging in the media, professional circles, politics, and the general public ever since OpenAI released ChatGPT in 2022. Some see AI as a threat to society and even to humanity. Others, in contrast, praise AI as the technology of the future, enabling greater productivity, greater efficiency, unbiased and better-informed decision-making, and personalized products and services in the consumer sector, healthcare, and education. The media and civil society watchdogs have sensationalized this debate and created public pressure on regulators and governments to install legal safeguards such as the proposed AI Act of the EU. In all these discussions and initiatives, much depends on how the debate is framed, that is, on what larger narrative framework is explicitly or implicitly called upon to make sense of what is being talked about. For the most part, the framing of the debate about AI has relied upon the well-known story of humans vs. machines, a supposed competition in which each struggles to control and instrumentalize the other. In this paper, we argue that this typical framing of the debate is misleading, inapplicable, and prone to generating fruitless conflict. We suggest a different framing for the discussion based on other concepts, above all on the difference between models and machines and between being smart and being dumb.

Concepts of models and machines are used in many ways in various fields. However, although there is a common understanding of what a machine is, the idea of the model has not yet entered mainstream discussions of what technology is and what its place in society should be. The dominant narrative that frames discussions of AI is that of humans vs. machines and of competition between humans and machines, with the accompanying question of who instrumentalizes whom. Is it the machines that will take over and instrumentalize humans, or can humans somehow maintain control over a potentially superhuman AI? Typical concerns are whether AIs can be aligned with human values such as privacy, fairness, safety, autonomy, human dignity, accountability, and transparency. It is supposed that machines and humans are fundamentally opposed to each other and that the foreseeable impact of AI on society will endanger human flourishing. These fears are based on a long and omnipresent tradition of literature and film, from Frankenstein to Terminator, The Matrix, and HAL 9000 in Kubrick's 2001: A Space Odyssey. It should be noted that these memes are predominantly Western, whereas other cultures and societies have their own memes with which to frame discussions about technology and society. Nevertheless, as soon as this well-known narrative is invoked, the stage is set for conflict. In this paper, we will attempt to propose a new framing for the debate about AI by discussing the differences between models and machines. We will argue that AI should not be considered a machine; AI is not a machine but a model. We will plead for reframing the AI discussion, moving away from the typical story of humans vs. machines toward a different story based on the alternatives of being either smart or dumb.

Surprisingly, practically none of AI's proponents, developers, and researchers speak of AI as a machine. A machine is a deterministic system in which the input completely determines the output. A machine has no random states; it is not autonomous and is, therefore, wholly predictable. AI is not a machine but a model. In everyday usage, a model can be two different things. There are "models-of" something and "models-for" something. A model-of something is a representation of a system, process, or phenomenon that helps us understand, analyze, or predict its behavior. Models can take various forms, such as physical, mathematical, or logical representations. When speaking of models-of, one thinks of a model airplane, a model automobile, a fashion model, etc. Models of this kind are copies of an original or an ideal that already exists. These models can be used for many different purposes. In science and engineering, they simplify complex systems, identify relationships between variables, and make predictions based on the system's underlying structure. There are also models of machines, but even in this first usage of the term, the model is not the machine but merely a representation of the machine. That models are not machines becomes even more apparent when considering the second meaning of the term. The second meaning of a model is a "model-for" something. A model-for is a kind of blueprint according to which something should be constructed. It is not a representation but a presentation of what does not yet exist. An example could be an architect's model for a building that is in planning. The building does not exist except as a model. After the building has been built according to the model, the model becomes a representation, a model-of the building.
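
To make the contrast concrete, consider the following minimal sketch in Python. It is our own toy illustration, not anything drawn from the AI literature: the vending machine is a deterministic mapping in which the input completely determines the output, while the "generative model" samples from learned probabilities, so the same input need not yield the same output.

```python
# A toy illustration of the machine/model contrast drawn above.
import random

# A machine in the sense used here: the input completely determines the output.
def vending_machine(coin: int) -> str:
    return {1: "gum", 2: "candy"}.get(coin, "refund")   # wholly predictable

# A generative model, by contrast, samples from learned probabilities,
# so identical inputs can yield different outputs.
next_word_probs = {"the": {"cat": 0.5, "dog": 0.3, "model": 0.2}}   # toy "language model"

def generative_model(prompt: str) -> str:
    candidates = list(next_word_probs[prompt])
    weights = list(next_word_probs[prompt].values())
    return random.choices(candidates, weights=weights)[0]

print(vending_machine(1))         # always "gum"
print(generative_model("the"))    # "cat", "dog", or "model", stochastically
```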

The concepts of models-of and models-for arose within anthropology, particularly in the work of the French anthropologist Claude Lévi-Strauss, who used these concepts to understand the underlying structures of human societies and cultures. "Models-for" are the self-interpretations of existing structures or forms of life in a society or culture. People understand themselves and their society in terms of such models. A model-for is not a representation of who they are and how their world is organized but a kind of, often mythological or ideological, blueprint for how they think they should be. Models-for, in this context, are normative. They prescribe how people should understand themselves and how they should act and think. They are not meant to describe reality as it is but as it should be. When an anthropologist goes into the field and asks their informants what they are doing and why, the answer is not an objective, value-free description but a model-for that society. Models-for are prescriptive models that serve as guidelines or blueprints for action within a society or culture. They provide a framework for understanding how things should be organized or how people should behave in a particular situation, whether hunting, cultivating, performing religious ceremonies, building houses, eating, regulating mutual affairs, settling disputes, etc.

On the other hand, the anthropologist constructs models-of what people in a society actually do and how they relate to each other, regardless of how the people themselves understand what is going on. The anthropologist's model-of a society is intended to explain the culture as it really is and not as the people of that society see it. Models-of are not prescriptive but purely descriptive. For Lévi-Strauss, who was influenced by the linguist Ferdinand de Saussure, the model-of was a description of a society's underlying structures. Lévi-Strauss writes, "The anthropologist's task is to construct models of social phenomena, models which are simpler and more intelligible than the phenomena themselves, but which nevertheless retain their essential features." (Structural Anthropology, p. 27)

In the context of machine learning and AI, a model is the output of an algorithm that processes data. It is not a representation of the data but a presentation of what has been learned from discovering statistical regularities in the data. The AI model serves to make predictions. Therefore, the model that AI people talk about is not a model-of the world, language, or images but a model-for generating language, images, or sounds. Generative models are prescriptive or normative; they are not models of anything. And, of course, they are not machines. They generate an output that is not a mere copy of what already exists but a presentation of what could be useful or meaningful for a specific purpose. LLMs' generative and, thus, prescriptive capabilities imply that AI models can also become "agents" or even "autonomous" agents. The generative capacities of models can be linked to "tools," such as APIs, that allow them to do things in the world. For example, they can be used to create tutoring systems for personalized learning in educational contexts, create texts, images, and audio, assist in medical diagnosis and therapy, or autonomously take over business processes such as customer support and decision-making. Equipped with memory, planning, tools, and execution capabilities, AIs are autonomous agents that can independently interact with the world and learn from their actions. It should be evident that we are no longer talking about machines but about something very similar to humans.
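
What such an agent amounts to can be sketched in a few lines of Python. This is a hypothetical illustration of the model-plus-tools pattern just described, not a real framework: `call_llm` is a stub standing in for any generative model API, and the single "search" tool and its output are invented for the example.

```python
# A hypothetical sketch of the model-plus-tools "agent" pattern described above.
# `call_llm` is a stub standing in for any generative model API; the "search"
# tool and its output are invented for the example.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a generative model: returns either a tool call (as JSON)
    or a final answer. A real system would call an LLM API here."""
    return json.dumps({"tool": "search", "args": {"query": prompt}})

TOOLS = {
    "search": lambda query: f"(stub) top result for {query!r}",
}

def agent(task: str, max_steps: int = 3) -> str:
    memory = [f"Task: {task}"]                # a simple episodic memory
    for _ in range(max_steps):
        reply = json.loads(call_llm("\n".join(memory)))
        if "tool" in reply:                   # the model acts through a tool...
            observation = TOOLS[reply["tool"]](**reply["args"])
            memory.append(f"Observation: {observation}")
        else:                                 # ...or decides it is done
            return reply["answer"]
    return memory[-1]                         # give up after max_steps

print(agent("What is the capital of France?"))
```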

While there may not be a large body of literature specifically dedicated to the difference between models and machines, some authors have touched upon the topic. In the context of machine learning, for example, the difference between machine learning algorithms and models has been said to lie in the fact that machine learning algorithms are procedures that run on datasets to recognize patterns and rules, while machine learning models are the output of these algorithms, acting as programs that can make predictions and, as stated above, even act based on the learned patterns and rules. These models are clearly not models-of the world but models-for how one should understand a situation and act appropriately. They are prescriptive and not simply descriptive. For example, what would you do if, driving down the freeway, your navigation system told you there is a traffic jam ahead and that you should turn off at the next exit? What would you do if your doctor told you that you are fine, but an AI says you have cancer and need an operation immediately? AI models are, therefore, like the self-understanding of a society in that they offer normative suggestions aimed at solving problems. The difference from what the anthropologist considers a model-for is that the AI has all the information and knows much more about how the world is than any human. The AI model has absorbed all the models-of into one model-for. It makes its predictions based on information and evidence and not on religion, ideology, or worldviews, upon which humans must depend since they lack the information.
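
The algorithm/model distinction can be made concrete in a few lines of code. The following minimal sketch uses scikit-learn with an invented toy dataset: the learning algorithm is the fitting procedure that runs over the data, and the model is its output, a fitted object that then prescribes rather than describes.

```python
# A minimal sketch of the algorithm/model distinction using scikit-learn.
# The dataset is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Toy data: hours studied, hours slept -> passed the exam (1) or not (0).
X = [[2, 9], [1, 5], [5, 6], [7, 8], [3, 4], [8, 7]]
y = [0, 0, 1, 1, 0, 1]

# The *algorithm* is the fitting procedure that runs over the dataset;
# the *model* is its output: a fitted object holding the learned parameters.
model = LogisticRegression().fit(X, y)

# The model is prescriptive in the sense used above: given a new case, it does
# not describe the data it has seen but generates a recommendation.
print(model.predict([[6, 7]]))         # e.g. [1]: predicted to pass
print(model.predict_proba([[6, 7]]))   # the probabilities behind the prescription
```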

Not only is AI not a machine, but models are increasingly replacing machines. This is because models are more flexible, adaptable, and efficient in solving complex problems and making predictions based on data. Models that can learn from data and from interactions with the outside world and make decisions on their own are gradually replacing machines, which has led to more informed and effective solutions to many problems in all facets of society. We are witnessing a time in which the traditional roles of machines are being redefined as models take center stage. Models have demonstrated remarkable capabilities and promise to increase productivity and efficiency. Unlike machines, which are designed for specific tasks and functions, models are generalized problem-solvers and can be easily updated and adapted to new situations and applications. Indeed, AI development is moving quickly in the direction of AGI (Artificial General Intelligence), capable of carrying out many different kinds of tasks in many different media. This allows AIs to continuously improve and evolve, making them invaluable resources for businesses, research, healthcare, education, and all areas of human endeavor.

Another reason for the growing reliance on models is their ability to handle large amounts of data and make sense of complex relationships. In today's data-driven world, the ability to process and analyze vast amounts of information is crucial for making informed decisions and solving problems. A data-driven society is one in which decisions on all levels and in all areas are made based on evidence and not on intuition, gut feeling, or position in a hierarchy. Models, particularly those based on machine learning algorithms, are well-suited for this task, as they learn from enormous amounts of information and can, therefore, identify patterns and relationships that may not be apparent to humans with their limited information-processing abilities. Moreover, models can be more cost-effective than machines. By relying on models to perform tasks and make decisions, organizations can relieve humans of routine work and reduce the costs of legacy systems along with the time and resources required to maintain and update them.

Furthermore, the increasing reliance on AI models is transforming how machines are designed and used. By replacing traditional machines with models, we can create more intelligent, adaptable, and efficient systems better equipped to handle the complex challenges of the global network society. As models continue to evolve and improve, we can expect even more significant advances in AI and machine learning, leading to a future in which humans and models work together seamlessly to solve problems and make our lives better. That humans and models work together rather than against each other is grounded in the fact that both are "intelligent." The machine, as opposed to the model, cannot be intelligent; it is of an entirely different nature than humans. This cannot be said of models.

One could argue that human intelligence is also a form of model building, but it is based on a biological substrate, the brain, whereas AIs build models on a silicon substrate. Humans and AIs are similar in that they create models but different in that they build them on different substrates. The models themselves are very similar, which is why we speak of "artificial intelligence." In Western culture, an age-old tradition opposes our biological substrate to the models we build. From antiquity through modernity, there has been a tradition of opposing the mind and the body. If we were having the AI debate in the Middle Ages, we would most certainly be confronted with a struggle between the desires and impulses of the body and the soul's striving for salvation, with all the assurances of the Church that God was on our side. In the modern period, this age-old antagonism has been transferred to the struggle between humans and machines. The machine, which is material, has taken over the role of the body, whereas the soul is now thought of as "intelligence." Today's struggle is framed as one of human intelligence trying to maintain control of the machines. And, of course, we are no longer confident that God is on our side. Even today, the specter of the conflict between soulless machines and the human spirit looms over the AI debate.

However, what we are confident of, or at least should be, is that AIs are not placeholders for the body or any kind of material entity. AIs, just like humans, are intelligent. There is no fundamental and irreconcilable antagonism between human intelligence and AIs. After all, we are both in the business of constructing models. This fact offers a basis for reframing the debate on AI. No longer must we assume any fundamental antagonism, as in the old story of humans vs. machines. On the contrary, we can tell a new story of how these two intelligences share common interests and can work together to achieve the goals all intelligence strives for. What are the goals of intelligence? In answer to this question, we introduce the idea of being "smart." We assert that both humans and AIs want to make things smart. We both want the world around us to become more intelligent, meaningful, and connected.

This is not a new idea. When Steve Jobs went on stage at the 2007 Macworld Conference and proudly showed the world the first iPhone, he introduced the idea that technology is about being smart. In the wake of the smartphone, we now have smart watches, smart automobiles, smart clothes, smart homes, smart cities, smart factories, etc. There is nothing that cannot and should not become smart, just as humans are smart. There is no antagonism between being smart and being human; at least, no one who uses a smartphone could claim there is. It is interesting and thought-provoking that no one objects to things becoming smart. Indeed, for a long time now, machines and everything around us have been becoming smart. Why is this not a problem for all those afraid of AI? Why do people embrace the smartphone with enthusiasm but reject AI? The reason may be that nobody wants to be dumb, which is the opposite of smart. One cannot reasonably want to be dumb, just as one cannot reasonably go back to using the Nokia mobile phones that the iPhone replaced. For example, we cannot reasonably refuse to make our homes, factories, and cities smart and still claim we are trying to prevent global warming. When it comes to being smart, humans and AIs are on the same side, trying to reach the same goal.

We should consider reframing the AI debate in terms of the opposition between being smart and being dumb. Framing the debate this way from the beginning sets the stage for a constructive discussion about achieving common goals. Suppose, for example, that a company announces to its employees that it is introducing a smart HR system. The system has many valuable features: greater efficiency that reduces costs, recruiting of employees who are better suited to the jobs the company offers, better monitoring of employee performance so that rewards and opportunities for training and promotion can be distributed more fairly and more widely, and so on. What objections could the employees have? And if they did have objections, since they don't want to be dumb, they would be forced into a strategy of proposing changes to the system that would make it even smarter. The potentially conflictual situation becomes a constructive discussion about how best to move forward into a smart future.

Let us suppose now that the company falls into the traditional framing of human vs. machine, which is what almost inevitably happens today. The company announces that it is introducing an AI to handle recruiting and human development. The idea of a machine making decisions about who gets a job, who gets rewarded or promoted, and who gets offered training opportunities would raise many fears. All the usual objections would immediately come to the fore. The AI would be biased and discriminate against particular persons or groups, compromise privacy, rob humans of their autonomy and dignity, and have no transparency about how decisions are made; thus, there would be no accountability. The company would have practically no chance to answer these objections since they are deeply embedded in the human vs. machine frame that largely determines the discussion. The only way out is to avoid the human vs. machine frame and reframe the debate in terms of smart vs. dumb.

Reframing is not easy. It is difficult because these memes are deeply embedded in Western culture and are everywhere, conditioning how we think and feel about technology. It seems almost impossible to break out of the frame, the age-old story of the inevitable antagonism between humans and machines. How could we begin to doubt the truth of this tale? One answer is to stop talking about AIs as though they were machines. Another answer is to take the smartphone out of your pocket and decide whether you are afraid of it or love it. Suppose you love it and would not give it up for anything, not even for your privacy, your autonomy, your need for transparency and accountability, your concerns about social fairness and safety, and all the other things you fear about AI. In that case, you might want to start thinking about how to become even smarter instead of fighting on the side of those humans who misguidedly fear the machine.



What is Information?

One of the most important ideas today is the idea that the world is made up of information, not things. Information is a relation and a process and not a substance, a thing, an individual entity, or a bounded individual. A world of information is a world of relations and not of things.

This idea was expressed already a hundred years ago by the philosopher Ludwig Wittgenstein when he said, "The world is the totality of facts, not of things" (Tractatus Logico-Philosophicus, 1922). Why not things? Where are the things, if not in the world? What is the world made of, if not things? According to Wittgenstein, things are in language, that is, in all that can be said about the world. These are what Wittgenstein called "facts." For example, it is a fact that the ball is red, or that the tree is in the garden. These are facts, if they are true, because they can be expressed in language. This means that what cannot be expressed in language is not in the world. "It" is nothing at all. Therefore, Wittgenstein can also say: "The limits of my language mean the limits of my world" (Tractatus…).

At about the same time, Martin Heidegger formulated similar ideas. He said that humans (Heidegger speaks of "Dasein") do not face a world of things, as if things were simply there and humans, if they wanted, could establish a relationship with things or not. Quite the contrary: humans are always together with things in a world of meaning. This is what Heidegger calls "being-in-the-world," and he claims that humans exist as "being-in-the-world."

It is not the case that man ‘is’ and then has, in addition to this, a relationship toward the “World”, which he occasionally takes up. Dasein is never ‘at first’ an entity which is, so to speak, free from Being-in, but which sometimes has the inclination to take up a ‘relationship’ towards the world. Taking up relationships towards the world is possible only because Dasein, as Being-in-the-world, is as it is. This state of Being does not arise just because some entity is present-at-hand outside of Dasein and meets up with it. Such an entity can ‘meet up with’ Dasein only in so far as it can, of its own accord, show itself within a world. (Being and Time, 1927 §12)

But how can things "show themselves of their own accord within a world"? They do this, as Wittgenstein thought, by being able to be expressed in language. But how is it possible that things "of their own accord" can be expressed in language? In order to answer this question, let us recall what Heidegger said about Aristotle's well-known definition of the human as the animal that has language – zoon logon echon. Heidegger claimed that this definition of the human being can be understood in two ways. On the one hand, it can mean, as has mostly been thought throughout the history of philosophy, that humans are distinguished among all living creatures because they have reason. Among all animals there is one animal that can also speak, or rather, think. This is the human being. This interpretation is understandable because the Greek word echon means "to have, to be available." According to Heidegger, however, the definition can also mean that it is language that "has" humans, or rather, that it is language (logos) that uses humans such that all things can show themselves in and through language. Humans do not use language; the logos uses humans. As Wittgenstein said, the limits of my language mean the limits of my world. We live in a world of meaning, a world constructed by logos, with our help of course.

Today we no longer speak of logos, reason, thought, rationality, or even language when we refer to the way things and we ourselves exist in the world, but of information. Why information? Why has the concept of information taken the time-honored place of reason and language and worked its way up to become the central concept for understanding the world and human existence? Why does everyone talk about information today? Can we imagine Aristotle saying: humans are the animals that have information? Had he said something like this, it would today be clear that only the second interpretation is valid. It is information that has us and not the other way around. Information is everywhere and not only something that humans have.

In physics, one no longer speaks only about matter, energy, fields, and particles, but about information. The physicist Anton Zeilinger, who won the 2022 Nobel Prize, said in words reminiscent of Wittgenstein, "I am firmly convinced that information is the fundamental concept of our world, … It determines what can be said, but also what can become reality." According to Zeilinger, we must get used to the idea that reality is not purely material but also contains an immaterial, "spiritual" component.

In biology, we hear similar things. Michael Levin, one of the most important biologists today, says that he no longer needs the term "life." Instead, he prefers to speak of "cognition." All living things, from the simplest single-celled organisms to humans, are distinguished above all by the fact that they use information to react to environmental conditions in such a way that they can continue to live. This is called "adaptation" or "viability" in evolutionary theory. Living things are thus "intelligent," and not only the central nervous system or the human brain is intelligent; intelligence can be found wherever living things solve problems, and that is what they do as long as they live. Life in all forms and at all scales is nothing other than information processing.

Finally, thanks to the invention of the computer, at the level of human society we speak of an information society. People in all their activities are characterized by the processing of information. Not only that, but an "artificial intelligence" is emerging that promises in the future to far exceed human information-processing abilities – formerly known as "reason." Information processing is evolving independently beyond humans and is increasingly determining human existence. This is reminiscent of Heidegger's interpretation of Aristotle: it is not humans who possess language, but language, or information, which has humans, and everything else, in its grip.

What exactly information is remains ambiguous and differs in each field, whether in physics, biology, or philosophy and sociology. Is there a common denominator that fits all forms of information? Can we define information in general and for all cases? It is striking that wherever information is spoken of, it is understood as a difference between at least two states. Whether we are talking about quantum states, for example "up-spin/down-spin," or biological information, for example whether something is "edible/non-edible," or electronic bits that are either 1 or 0, it is always about a relation between two states that can be measured as a relation. Information is, it seems, at the most general level, a relation and not a thing. From the perspective of philosophy, Bruno Latour has given a name to this peculiar entity that is information. He speaks of "irreduction." What does "irreduction" mean? Latour writes: "Nothing is, by itself, either reducible or irreducible to anything else." (The Pasteurization of France, p. 158) What does this cryptic statement mean? When something is "reduced" to something else, this means that there are no longer two, but only one. The difference between the two disappears, and thus there is no longer a relation. If nothing can be subject to reduction, then everything that is exists as a relation and not as a thing. What does this have to do with information? Information is this relation, without which nothing can be. Relations, it must be emphasized, are not things. They are something else that cannot be understood as a thing.
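
This minimal characterization of information as a measurable difference between two states is exactly what information theory quantifies as the bit. The following short sketch invokes Shannon's entropy formula, which the text itself does not mention; it is added only as an illustration of how a two-state relation can be measured.

```python
# Information as a measurable difference between two states: Shannon's entropy
# of a binary distinction (Shannon is not named in the text; this is only an
# illustration of how a two-state relation can be quantified).
from math import log2

def binary_entropy(p: float) -> float:
    """Bits of information carried by a two-state distinction with P(state 1) = p."""
    if p in (0.0, 1.0):
        return 0.0   # no difference between the states, hence no information
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(binary_entropy(0.5))    # 1.0 bit: a maximally informative 1/0 difference
print(binary_entropy(0.99))   # ~0.08 bits: a near-certain state carries little
```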

Because information is relational, it exists in networks. Networks are not things either. Otherwise, we would simply have collective things in addition to individual things, much as in sociology we speak of organizations in addition to individuals. Networks are neither organizations nor individuals. They are neither things nor compositions of things. Networks are processes of making relations, associations, connections. One should speak of the network as a verb – networking – and not as a noun. Networks are not bounded systems that operate to maintain their structures. If, as Michael Levin claims, life consists of cognition, then living things are not things but dynamic processes of adapting, changing, and networking. Humans, like everything else in the world, are made up of information processes, which we experience as consciousness. We exist as networks/networking, i.e., we are ongoing, historical processes of networking. It is these processes that we call society. There is no fundamental difference between individual and society, only a difference in the scale of information processing or networking. In the information world, systems, i.e., bounded entities, whether individuals or organizations, become networks. In the global network society, which is the world we are now entering, we will network with many other beings that also process information, be they humans, robots, cyborgs, AIs, artificial beings, etc., and collectively shape our lives. Living in an information world means networking, thinking and acting in networks. This is the challenge of our time.


New Memes for a New World

Memes are cultural DNA, that is, the elements of cultural code that generate the world that characterizes a particular culture, a particular time, a particular civilization. They are the basic ideas informing a world view, articulating the values and norms that people accept as true. Memes are the design elements of a culture.

Listed below are some of the most important memes of the global network society, a society that is now emerging from the digital transformation that characterizes our world. These are new memes for a new world.

1. Information: One of the most important memes of the global network society is the idea that the world consists of information and not of things. Information is a relation and a process and not a substance, an individual entity, a bounded individual. A world of information is a world of relations and not of things.

2. Networking: Because information is relational, it exists in networks. But networks are not things. Otherwise, we would simply have collective things in addition to individual things, much as we talk about organizations in addition to individuals. Networks are neither organizations nor individuals. They are neither things nor collections or compositions of things. Networks are processes of making relations, associations, connections. One should speak of networking as a verb instead of network as a noun. Networks are not bounded systems operating to maintain their structures. They are dynamic, changing, and flexible. Human beings, as well as everything else in the world, are informational processes and therefore exist as networks, that is, as ongoing, historical processes of networking. Systems are becoming networks.

3. Emergent Order: Information (and networking) is a level of emergent order above the levels of matter and life. Just as life emerged from matter, so information emerged from life. And just as life is neither reducible to matter nor can it be derived from it, so information is neither reducible to life, nor can it be derived from it. Information is therefore not cognition in the brain or a mental state. The brain does not use information. The brain is an organ of the body that is used by information. Information is a form of being in its own right and of its own kind.

4. Integration: The physical and biological substrates are integrated into information. This is the principle of integration, which states that higher levels of emergent order integrate lower levels, that is, they are more complex and variable than the lower levels. This implies that with the emergence of information, matter and life have become informational processes. Just as life can do things with matter that matter could not do on its own, so information can do things with matter and life that they cannot do on their own. The emergent nature of information and the consequent integration of matter and life are why science and technology are possible.

5. Common Good: Information is a common good, a common pool resource, which implies neither that it cannot be monetized nor that it cannot be administratively regulated. It is regulated and monetized as a common pool resource within governance frameworks that are certified and audited by government. Since information is not a bounded entity, a thing, it cannot become private property. Western industrial society, by contrast, is based on the belief in individuals who own property.

6. Global Network Society: Society is no longer Western industrial society, but a global network society. Nation states will be replaced by global networks. Individuals and organizations are becoming networks that are not territorially defined. Society is not a group of individuals, but a network of networks. There is nothing outside of society. Nature is part of society. The integration of matter and life into information makes society all-encompassing. The world is society.

7. Governance: Society is most effectively regulated by governance instead of government. Governance is self-organization, or non-hierarchical organization. In the global network society, hierarchies are inefficient and illegitimate. Decisions are made on the basis of information and not on the basis of a position in a hierarchy.

8. Design: Governance is by design, which means it is constructed by design processes that are guided by the network norms generating social order. Design means that networking can be done in a good or a bad way. The good ways of networking can be described as network norms.

9. Network Norms: The network norms are: connectivity, flow, communication, participation, transparency, authenticity, and flexibility. These are the values and norms of the global network society.

10. Computation, Computationalism, the Computational Paradigm: Information is not to be equated with digital information that can be processed electronically by computers. The computer should not be used as a metaphor for understanding either the brain or society. The brain is not a computer. Society is not a computer. A computer is a computer, and nothing else. Digital information, or electronic information processing, is a derivative form of information that arose late in the history of society and is dependent upon and embedded in many non-digital networks that have developed over thousands of years. Nonetheless, if computation is understood very generally as the iterative application of simple rules to information, out of which more complex forms of information arise, then networking in all its forms can be considered computation (see the sketch below). This general definition of computation is independent of the computer and can therefore be used as a definition of networking. Intelligence is networking. Artificial intelligence is electronic information processing.
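
As an illustration of this general notion of computation, the following sketch (our own example; Rule 110 is a standard elementary cellular automaton, not something the text invokes) iteratively applies one simple local rule to a row of bits, out of which surprisingly complex patterns arise.

```python
# Computation in the general sense used above: the iterative application of
# simple rules to information, out of which more complex forms arise.
# Rule 110 is a standard elementary cellular automaton (famously Turing-complete):
# each cell's next state depends only on itself and its two neighbors.

def step(cells, rule=110):
    """Apply one synchronous update of an elementary cellular automaton."""
    n = len(cells)
    new_cells = []
    for i in range(n):
        # Encode the three-cell neighborhood as an index 0..7 (wrapping at edges).
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # The idx-th bit of the rule number gives the cell's next state.
        new_cells.append((rule >> idx) & 1)
    return new_cells

cells = [0] * 79 + [1]              # start from a single "on" cell
for _ in range(40):                 # iterate the simple local rule
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```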


Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither a beginning nor an end but will be seamlessly integrated into our daily lives, which could mean that in the future we will no longer speak of "artificial" intelligence at all, but only of "smart" or "dumb." Either we and everything around us, for example our houses, our cars, our cities, etc., are smart, or we are dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being "smart" is about the optimal use of information. Information, and not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while towards oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, i.e., as with all technologies, one must ask whether one can be truly independent and autonomous. After all, when is electricity, driving a car, making a phone call, using a refrigerator, etc., voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term "sustainability." When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can be destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility; i.e., because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a "culprit" when something goes wrong. Since ethics, morals, and the law are only called onto the scene and can only intervene when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor and not any single actor in the network. Actors, both human and non-human, can only "do" things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be global regulation of AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from the household to the nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must be able to operate as governance (bottom-up and distributed) rather than government, i.e., hierarchical control. Developing and implementing these new forms of governance is a political task, but it is not only political. It is also ethical. For as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that will guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all the stakeholders in a network can be responsible for setting up regulatory mechanisms and also for control. One could imagine a governance framework developed bottom-up, in which, in addition to internal controlling, an external audit would monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there can indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society, from young to old, be sensitized and educated?

A: At the moment, there is much discussion of either "critical thinking" or "media literacy" in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means that one is of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and it also means being aware that this is done not according to criteria of truth-seeking but according to the criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know a lot more about how to research information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to defer the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is understood and implemented is unclear. Since the school system has largely shown itself to be resistant to criticism in terms of pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to be capable of doing the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is or could be effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.


Q&A on Gaia, the Anthropocene and Information

Q: The climate crisis has brought James Lovelock's Gaia hypothesis into the center of discussion. Bruno Latour has interpreted Gaia as the "critical zone," a relatively thin strip of atmosphere and earth that comprises the space in which life exists. According to Latour, it is within this critical zone that Lovelock's self-regulating life system operates. And it is this critical zone that must be the focus of our attempts to avert, or at least control in some measure, the devastating effects of human intervention on the Earth's climate and ecosystems. How does this discussion relate to the digital transformation?

A: It is interesting that James Lovelock himself, at the age of ninety-nine, recently published a book in which he introduces a level of reality beyond Gaia, that is, beyond life and all the problems that threaten life in the Anthropocene. Lovelock calls this new age the Novacene. The Novacene is the age in which not human beings but information becomes the factor that decisively influences the planet. This view has important consequences. First, it implies that information is what needs to be understood, and not life. The discussion on climate change is not about life, but about information. Secondly, it implies that the so-called information age is a posthuman age. Many have heralded the posthuman, and done so in many different ways. The end of the Anthropos can mean, for example, merely the end of patriarchy, of a male-dominated society. It can also mean a heightened awareness of non-human agency and values. One speaks of animal rights or the rights of nature in all its forms over against human intentions. In this sense, the ecology movement can also be understood to be posthuman. But in a third sense, the posthuman can mean the end of humanism as a world view, as a set of taken-for-granted assumptions about what human beings are and what the world is. Humanism began in the modern West, gained prominence during that period of Western history that has been called the Enlightenment, and has become the set of beliefs that lies at the foundations of Western industrial society, capitalism, democracy, liberalism, and socialism alike. Indeed, humanism could be called the "religion" of the modern Western world. In the age of globalization, where other cultures have come into view as serious contending interpretations of reality, humanism can no longer be taken for granted. Furthermore, the digital transformation seems to be ushering in a global network society that no longer subscribes to Western modernity. Finally, it has become apparent that humanism cannot adequately account for or understand the world of information. If, as Lovelock claims, information lies at the heart of future developments in all areas, then we should talk less about human beings and about life and start thinking more about information. This is what the digital transformation is all about.

Q: The Gaia hypothesis and Latour's description of the critical zone are based on a well-established theory of self-regulating systems, which is fundamentally a biological theory. Gaia is often thought of as a kind of super-organism, a single, all-encompassing dynamic system. Cybernetics and general systems theory offer a holistic model of how self-organizing, informationally and operationally closed systems come to be and how they function in relation to a constantly changing environment. It is within the systems paradigm that all discourse on climate change is located. Ecology has always been interpreted as a systems science, as the title Earth Systems Science illustrates. If we are to shift to a different paradigm for understanding a world of information, would this imply that systems science can no longer serve as the foundation of ecology? What other paradigm is there to fill the gap when systems theory no longer works?

A: It is correct to say that there is currently no consensus on what a new information paradigm could be and on what it might mean to say that reality is fundamentally informational. The only well-known and widely accepted theory dealing with information comes from cybernetics and computer science and is often called computationalism. Lovelock relies on computer science and its notion of digital information as bits, that is, on/off circuits that register and process 1/0 differences in a computer. The basic model is the famous Turing machine, or universal computing machine, which processes information by means of a finite set of rules. But bits and bytes are only meaningful for human beings and human society when they become semantic information, that is, semiotically coded information, which includes everything we can experience and speak about. If Lovelock's cyborgs, the beings he envisages will occupy the Novacene, do not communicate and cooperate with human beings and thus operate within society, they will have no significance whatever; they will make no difference. If information, as Bateson said, is a difference that makes a difference, then bits alone, and all the computing power one could imagine, will change nothing in the world in which we exist. Computation is only possible within society. Society and the world are not somehow located within or generated by computation. Computation is a part of society. Without being embedded in society, the computer would make no difference and digital information would have no meaning. But in fact the computer, like all technology, is embedded in society, and digital information is the end result of many complex social practices that have developed over hundreds, if not thousands, of years of history. If we want a theory of information that does not put the cart before the horse, then we must start anew and not simply take over the assumptions of computer science.

Q: Speaking of a “digital transformation” places the computer or what could be called “computationalism” at the heart of what characterizes our time. Many assume that we are living in an information age. Sociologists speak of the information society and physicists have proposed information as the basis of reality. Biologists describe living systems as based on the internal construction of information. Oxford philosopher Luciano Floridi goes so far as to claim that human beings are “inforgs,” that is, informational beings, instead of cyborgs. Could it not be claimed that there is a computational paradigm based on digital computers which offers a coherent and encompassing theory of information allowing us to escape the assumptions of systems theory and its biological foundations?

A: Floridi’s inforgs are not computer-like cybernetic machines. They are constituted by a different kind of information than bits and bytes and process information not merely according to algorithms. They live in a world of meaning, or what could be called semiotically coded information, which Floridi calls the “infosphere.” It is not necessary to assume that the infosphere and the inforgs that exist in it can only consist of bits and bytes processed by mathematical rules. Whatever the information is that Floridi is talking about, it cannot be merely digital information. Nonetheless, even in Floridi’s account, it often seems that computation is the paradigm of what information is. At one point he locates the beginning of the information age with the invention of the computer, but at another point he pushes the date back to the invention of writing about five thousand years ago. How people lived their lives for hundreds of thousands of years before this date without information is an open question. Obviously, information must be something other than what is usually thought. Apart from these uncertainties about the nature of information, it is doubtful that computationalism is really a different paradigm than systems theory. After all, computers are modelled as cybernetic machines. All theories of digital information remain within the boundaries of general systems theory, which itself is fundamentally a biological theory. Artificial intelligence, for example, is modelled on the workings of the brain, which is an organ of the human body. Understanding the brain as a computer amounts to understanding the computer as an organism. Computationalism blurs the boundaries between organism and machine. One speaks of “evolution,” a term that only makes sense for living systems, when talking about how cybernetic machines and even societies and cultures develop. And of course, it should not be forgotten that systems theory has also become dominant in sociology. Life, it would seem, remains the basis for understanding intelligence and information within computationalism. The machines that Lovelock sees taking over the world are artificial forms of life. Lovelock, however, denies this and sees intelligence moving beyond the biological substrate in which it has evolved. But what does this mean? What other models of self-organization, reproduction, and information construction do we have than the living system? As long as information is constructed and processed by a self-referential, operationally closed system, it makes no difference whether information processing occurs in living tissue or in silico. Lovelock’s cyborgs are still self-regulating systems, just like any organism. Computer science seems to offer no better model of intelligence, that is, of the creation and use of information, than the self-regulating machine that is modelled on living organisms. If we need a new paradigm, we must look beyond computationalism, or at least find a different basic model than the computer.

Q: If information is not digital but something else, what is it? How can information processing be modelled if not computationally? In other words, what could computation mean if not the rule-guided operations of a system?

A: Perhaps there is a notion of computation that is sufficiently general to encompass semiotic coding, that is, information understood as meaning. Perhaps the rules that generate and process semiotic information are different from those that generate and process digital information. Perhaps, as non-Cartesian cognitive science claims, meaning is not located in the brain. It could be that the very notion of information must be ascribed to a higher level of emergent order than physical or biological systems. We must speak of a level of emergent order beyond matter and beyond life. This would mean that information is something sui generis and not to be interpreted in the same way as something physical or even something biological. Furthermore, it would mean that the information of which physicists and biologists speak is not the basic form of information but a derived and limited version of what information is. It may be that general systems theory is not applicable to information in the full and proper sense of the word and that a different theory is needed. Instead of deriving our understanding of what information is and how it operates from physics and biology, we must understand information in its own right and on its own terms. Physical and biological systems could well be derived from or have their existence within information, and even consist of information, but they do not found and explain information. If information is a higher level of emergent order, then it is coded in its own way, and computation on this level is not the same as the kind of computation that a computer does, or for that matter, the kind of computation the brain does. This brings us to the question of how semantically coded information can be “computed,” or, to ask the same question more broadly, what a general theory of computation would be that is based on a theory of information located beyond the physical and biological levels. This is the question that needs to be answered in order to speak about Gaia, climate change, ecology, and society beyond the systems paradigm.

Q: Alan Turing described a machine that used a finite set of instructions, an algorithm, to process information inputs into outputs. When the outputs become inputs and the instructions are to maintain certain values, this is a cybernetic machine operating comparably to the homeostatic operations of an organism. When environmental conditions select whether the values that must remain stable can be maintained, then we have what could be called natural selection. And when there is a mechanism that can randomly alter the instructions and the values, so that under certain environmental conditions only certain systems will be able to operate, then we have evolution. Is there a theory of information, or a version of computationalism, that does not correspond to this description and which has the explanatory power to become a new paradigm of information?
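
Before turning to the answer, the stack just described can be made concrete in a few lines of Python. This is a deliberately crude sketch; the rule tables, thresholds, and numbers below are our own illustrative assumptions, not anything specified by Turing or Lovelock.

```python
import random

# Illustrative sketch only: a rule table (Turing machine), a feedback loop
# that holds a value stable (cybernetics), an environment that decides
# whether that value can be maintained (selection), and random alteration
# of the rules (evolution). All names and numbers are assumptions.

def turing_step(rules, state, symbol):
    """A finite instruction table maps (state, input) to (state, output)."""
    return rules[(state, symbol)]

def homeostat(rules, target, reading):
    """Cybernetic loop: the output is fed back to keep a value near a target."""
    _, correction = turing_step(rules, "regulate", reading > target)
    return reading + correction

def survives(rules, environment, target, steps=100, tolerance=5.0):
    """'Natural selection': the environment decides whether the value the
    system must hold stable can in fact be maintained."""
    reading = environment
    for _ in range(steps):
        reading = homeostat(rules, target, reading)
    return abs(reading - target) <= tolerance

def mutate(rules):
    """Random alteration of the instructions: the raw material of evolution."""
    new_rules = dict(rules)
    key = random.choice(list(new_rules))
    state, correction = new_rules[key]
    new_rules[key] = (state, correction + random.uniform(-1.0, 1.0))
    return new_rules

rules = {("regulate", True): ("regulate", -1.0),   # too high: cool down
         ("regulate", False): ("regulate", +1.0)}  # too low: warm up
population = [rules] + [mutate(rules) for _ in range(9)]
survivors = [r for r in population if survives(r, environment=30.0, target=20.0)]
print(f"{len(survivors)} of {len(population)} rule sets maintain the value")
```

Nothing in this toy goes beyond the description in the question, which is precisely its point: feedback, selection, and mutation can all be expressed as rule-guided operations of a system.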

A: If computation is defined in the most general way as the manipulation of information by means of rules, then this definition contains no assumption about what information is or what the rules may be. Information as well as rules can differ on all three levels of emergent order: the physical, the biological, and the level of meaning. Furthermore, if every higher level of emergent order integrates the lower level within it, then this explains how life can manipulate chemical reactions in ways that do not happen on the purely physical level of matter and energy. This is what emergence means. Emergence is the coming into being of something on the basis of, but not reducible to, something else. Life emerged in this sense from matter. We still do not know what life is or how it came into being, but we know that emergence has something to do with complexity. Theories of the emergence of life attempt to describe highly improbable increases in complexity, for example, the Assembly Theory proposed by Lee Cronin and Sara Walker. Because of its higher complexity and variability, a higher level of emergent order such as life can use physical and chemical processes in completely unexpected ways. Life, in its own way, could be said to “engineer” nature. The genetic coding of life integrates the physical and chemical coding of matter into a higher level of order and can thus change it in ways that would not be possible on the physical level alone. This is what the Gaia hypothesis claims: life creates its own sustainable environment on the Earth. The advent of the Anthropocene, in which human intervention has upset the homeostatic regulation of this Earth ecosystem, has made these complex processes and interdependencies apparent.
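
To see how little this most general definition presupposes, consider a toy illustration (entirely our own construction; the rule tables are arbitrary, invented examples). One and the same rule-application operation runs over three differently coded kinds of “information”:

```python
# Toy illustration: "computation" as rule-guided manipulation of information,
# with nothing assumed about what the information or the rules are.

def compute(rules, information):
    """Apply a rule table to each unit of information; unmatched units
    pass through unchanged."""
    return [rules.get(unit, unit) for unit in information]

# Physical coding: flipping bits.
physical = compute({0: 1, 1: 0}, [0, 1, 1, 0])
# Biological coding: pairing DNA bases.
genetic = compute({"A": "T", "T": "A", "G": "C", "C": "G"}, list("GATTACA"))
# Semiotic coding: re-describing a sentence.
semiotic = compute({"machine": "model"}, "the machine computes".split())

print(physical)            # [1, 0, 0, 1]
print("".join(genetic))    # CTAATGT
print(" ".join(semiotic))  # the model computes
```

The operation is the same in all three cases; what differs is the coding of the units and of the rules. That is the whole content of the claim that information and rules can be different on each level of emergent order.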

The same can be said of meaning. Meaning can be understood to have emerged from life. We do not know what consciousness, language, thought, etc. are, nor how these phenomena came into being. We cannot derive meaning from life or even from the big brains of human beings. Meaning is a higher level of emergent order beyond life. The semiotic coding of meaning is much more complex and variable than genetic coding. It therefore has the ability to manipulate life and matter in ways that could not occur on these levels alone. Technology is a case in point. On the level of meaning, there is not merely material engineering but also genetic engineering. But what is technology? What is meaning? It could be that the answers to these questions tell us what information is in the most encompassing sense of the term. There are bits and bytes on the physical level, neuro-activity on the biological level, and meaning on the level on which we exist as informational beings. Considering that the lower levels of emergent order are always integrated into the higher levels and thus no longer have a separate mode of being, the Gaia hypothesis makes perfectly good sense. The physical processes of the Earth have become part of the life processes and are regulated by them. On the other hand, this implies that when speaking of Gaia, we must remember that Gaia, as the very notion of the Anthropocene suggests, exists within the level of meaning, that is, within what could be called “society.” Gaia is not biological; it is social, just as the Anthropocene is not merely human but technological. Because society is usually understood to mean the interaction of human beings among themselves apart from nature, Latour prefers to speak of the “collective,” since matter and life are integrated into meaning. As Heidegger would put it, Being is meaning. Once meaning has emerged as a third level of emergent order, there is only meaning and nothing outside of, beyond, or behind meaning. The world in which we live, including the entire universe, is a world of meaning. Physics, biology, and the other sciences, as well as culture in all its forms, are meaning and exist within meaning. This is not idealism as opposed to materialism. Meaning is not a mental construction within the brain or even a transcendental consciousness. If meaning is Being, and this is what we are proposing, then everything that is exists because it is information, not because it is perceived or thought by some kind of knowing subject. Heidegger calls this the “hermeneutical ‘as,’” which is to say that whatever appears does so “as” this or that kind of thing. The hermeneutical “as” marks the emergence of meaning and locates reality on the level of meaning. This is one way to understand Floridi’s concept of an infosphere. For this reason, when discussing climate change and Gaia, we should talk less about life and more about information.

Q: Even if one accepts this account of meaning as a higher level of emergent order beyond life, it could still be possible to model meaning as a system. Systems theory could be adapted to a theory of meaning and information. Has not Luhmann, for example, done precisely this? Luhmann asserts that meaning is a higher level of emergent order but describes cognition and social order as self-regulating, operationally closed systems. From the point of view of a general systems theory, meaning is simply a different kind of system. Why is meaning not a system? And what is meaning if it is not a system, that is, something basically similar to life or to cybernetic machines?

A: Latour has offered an alternative theory to that of systemic order. For Latour, social order, or the level of meaning, does not have the characteristics of a system but the characteristics of a network. Networks are not systems. Networks are associations arising from processes that Latour calls “translation” and “enrollment.” These could be seen as the “rules” that manipulate information in a network-based computational model. The application of these rules does not merely presuppose information; it constructs information by constructing relations among entities. The relations or associations that form networks are information, and out of these relations the entities themselves emerge “as” things endowed with meaning. It could be said that, for Latour, Heidegger’s hermeneutical “as” is networking. This is a relational ontology: Being is relation, association, networking. As opposed to systems, networks are not closed but open and infinitely scalable, as they would have to be in order to constitute a world. As opposed to systems, networks are multipurposed, which systems cannot be, because a system operates in order to maintain fixed goals or values. This is what homeostasis means. A system that did not know what values it should strive to maintain, or which had too many conflicting values, could not operate in such a way as to ensure autopoiesis and self-reference. Furthermore, networks are not subject to any pressures of selection, since they have no outside, no system/environment difference, a difference which is constitutive for any kind of systemic order. There is no evolution of networks, only a dynamic of differentiation, contraction and expansion, diversification, and branching. Within a network paradigm, systems do not simply disappear. Systems can be understood as a specific kind of network, which Latour calls a “black box,” that is, a relatively stable input/output machine. This applies to what Luhmann has called semi-autonomous social subsystems such as business, politics, law, education, science, religion, and so on. But society as a whole, despite what Luhmann claims, cannot be modelled as a system. Instead, society must be understood as a network. The implication of the network paradigm is that Gaia, or the critical zone, is a network and not a system. A further implication is that ecology is actually a network science and not a systems science. Gaia is not a superorganism but a network. As a network, and this is an important point that has been completely ignored in the entire Gaia discussion, Gaia is a form of meaning and not a form of life. After the emergence of meaning, the Earth, indeed the entire universe, does not live; it “means,” or, in other words, it “networks.” The important question in relation to climate change is therefore how best to construct the network, or, in other words, how to construct information in the right way. As Latour has pointed out, this question can be understood as the question of design. Geo-engineering and climate regulation are not primarily social and political problems, and, as almost all experts agree, they are not technological problems. They are design problems. But what is design? The digital transformation could be interpreted as the challenge to redesign human agency in terms of meaning construction. Information designing information is perhaps what Lovelock meant by envisioning a world of cyborgs beyond the Anthropocene. Governing the collective, Latour’s name for society, becomes a question of design.
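
A minimal sketch can make the contrast with the earlier cybernetic example concrete (this is our own schematic construction, loosely borrowing actor-network vocabulary; “enroll,” “black_box,” and the example actors are illustrative assumptions, not Latour’s formalism). Relations come first, and entities appear only as the nodes of relations:

```python
# Schematic sketch: relations (associations) are the primary data, and
# entities are derived from them rather than presupposed by them.

class Network:
    def __init__(self):
        self.relations = set()  # associations come first

    def enroll(self, actor_a, as_what, actor_b):
        """Associating two actors translates each of them 'as' something;
        the relation itself is the information."""
        self.relations.add((actor_a, as_what, actor_b))

    def entities(self):
        """Entities emerge from relations rather than preceding them."""
        return {x for a, _, b in self.relations for x in (a, b)}

    def black_box(self, actor):
        """A stabilized cluster of relations, read as one input/output unit."""
        return {r for r in self.relations if actor in (r[0], r[2])}

collective = Network()
collective.enroll("humans", "ally-of", "rivers")
collective.enroll("rivers", "measured-by", "sensors")
collective.enroll("sensors", "read-as", "climate data")

print(collective.entities())            # actors emerge from associations
print(collective.black_box("sensors"))  # a sub-network seen as one machine
```

Note what is absent from this sketch, in contrast to the homeostat above: there is no boundary maintenance, no target values, and no selection. The network only grows, differentiates, and occasionally stabilizes parts of itself into black boxes.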
