Category Archives: Network Society

Reframing the Debate on AI – On Models and Machines

The debate about the impact of AI, above all of the new generative Large Language Models (LLMs), has been raging in the media, professional circles, politics, and the general public ever since OpenAI released ChatGPT in 2022. Some see AI as a threat to society and even to humanity. Others, in contrast, praise AI as the technology of the future, enabling greater productivity, greater efficiency, unbiased and better-informed decision-making, and personalized products and services in the consumer sector, healthcare, and education. The media and civil society watchdogs have sensationalized this debate and created public pressure on regulators and governments to install legal safeguards such as the proposed AI Act of the EU. In all these discussions and initiatives, much depends on how the debate is framed, that is, on what larger narrative framework is explicitly or implicitly called upon to make sense of what is being talked about. For the most part, the framing of the debate about AI has relied upon the well-known story of humans vs. machines and a supposed competition in which both struggle to control and instrumentalize each other. In this paper, we argue that these typical framings of the debate are misleading, inapplicable, and prone to generating fruitless conflict. We suggest a different framing for the discussion based on other concepts, above all on the difference between models and machines and between being smart and being dumb.

Concepts of models and machines are used in many ways in various fields. However, although there is a common understanding of what a machine is, the idea of the model has not yet entered mainstream discussions of what technology is and what its place in society should be. The dominant narrative that frames discussions of AI is that of humans vs. machines and of competition between humans and machines, with the accompanying question of who instrumentalizes whom. Is it the machines that will take over and instrumentalize humans, or can humans somehow maintain control over a potentially superhuman AI? Typical concerns are whether AIs can be aligned with human values such as privacy, fairness, safety, autonomy, human dignity, accountability, and transparency. It is supposed that machines and humans are fundamentally opposed to each other and that the foreseeable impact of AI on society will endanger human flourishing. These fears are based on a long and omnipresent tradition of literature and film, from Frankenstein to The Terminator, The Matrix, and HAL 9000 in Kubrick's famous 2001: A Space Odyssey. It should be noted that these memes are predominantly Western, whereas other cultures and societies have their own memes with which to frame discussions about technology and society. Nevertheless, as soon as this well-known narrative is invoked, the stage is set for conflict. In this paper, we propose a new framing for the debate about AI by discussing the differences between models and machines. We will argue that AI is not a machine but a model, and we will plead for reframing the AI discussion, moving away from the typical story of humans vs. machines toward a different story based on the alternative of being either smart or dumb.

Surprisingly, practically none of AI’s proponents, developers, and researchers speak of AI as a machine. A machine is a deterministic system in which the input completely determines the output. The machine has no random states; it is not autonomous and is, therefore, wholly predictable. AI is not a machine but a model. In everyday usage, a model can be two different things. There are “models-of” something and “models-for” something. A model-of something is a representation of a system, process, or phenomenon that helps us understand, analyze, or predict its behavior. Models can take various forms, such as physical, mathematical, or logical representations. When speaking of models-of, one thinks of a model airplane, a model automobile, a fashion model, etc. Models of this kind are copies of an original or an ideal that already exists. These models can be used for many different purposes. In science and engineering, they simplify complex systems, identify relationships between variables, and make predictions based on the system’s underlying structure. There are also models of machines, but even in this first usage of the term, the model is not the machine but merely a representation of the machine. That models are not machines becomes even more apparent when considering the second meaning of the term. The second meaning of a model is a “model-for” something. A model-for is a kind of blueprint according to which something should be constructed. It is not a representation but a presentation of what does not yet exist. An example could be an architect’s model for a building that is still in planning. The building does not exist except as a model. Once the building has been built according to the model, the model becomes a representation, a model-of the building.
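
To make the contrast concrete, here is a minimal sketch in Python, with made-up names and a hand-written toy word distribution (normally this distribution would be learned from data): a machine maps the same input to the same output every time, while a generative model samples from a distribution, so its output is not fully determined by the input.

```python
import random

# A "machine": a deterministic mapping. The same input always
# yields the same output; nothing is left to chance.
def vending_machine(coin: int) -> str:
    return {1: "gum", 2: "soda"}.get(coin, "nothing")

# A toy generative "model": a distribution over next words
# (hand-written here, normally learned). Output is sampled,
# not determined, so identical prompts can yield different results.
NEXT_WORD = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sleeps", 0.7), ("runs", 0.3)],
}

def generate(word: str) -> str:
    candidates, weights = zip(*NEXT_WORD.get(word, [("<end>", 1.0)]))
    return random.choices(candidates, weights=weights)[0]

print(vending_machine(1))  # always "gum"
print(generate("the"))     # "cat" or "dog", by chance
```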

The concepts of models-of and models-for arose within anthropology, particularly in the work of the French anthropologist Claude Lévi-Strauss, who used these concepts to understand the underlying structures of human societies and cultures. “Models-for” are the self-interpretations of existing structures or forms of life in a society or culture. People understand themselves and their society in terms of such models. A model-for is not a representation of who they are and how their world is organized but a kind of, often mythological or ideological, blueprint for how they think they should be. Models-for, in this context, are normative. They prescribe how people should understand themselves and how they should act and think. They are not meant to describe reality as it is but as it should be. When an anthropologist goes into the field and asks their informants what they are doing and why, the answer is not an objective, value-free description but a model-for that society. Models-for are prescriptive models that serve as guidelines or blueprints for action within a society or culture. They provide a framework for understanding how things should be organized or how people should behave in a particular situation, whether hunting, cultivating, performing religious ceremonies, building houses, eating, regulating mutual affairs, settling disputes, etc.

On the other hand, the anthropologist constructs models-of what people in a society actually do and how they relate to each other, regardless of how the people themselves understand what is going on. The anthropologist’s model-of a society is intended to explain the culture as it really is and not as the people of that society see it. Models-of are not prescriptive but purely descriptive. For Lévi-Strauss, who was influenced by the linguist Ferdinand de Saussure, the model-of was a description of a society’s underlying structures. Lévi-Strauss writes, “The anthropologist’s task is to construct models of social phenomena, models which are simpler and more intelligible than the phenomena themselves, but which nevertheless retain their essential features.” (Structural Anthropology, p. 27)

In the context of machine learning and AI, a model is the output of an algorithm that processes data. It is not a representation of the data but a presentation of what has been learned from discovering statistical regularities in the data. The AI model serves to make predictions. Therefore, the model that AI people talk about is not a model-of the world, language, or images but a model-for generating language, images, or sounds. Generative models are prescriptive or normative; they are not models of anything. And, of course, they are not machines. They generate an output that is not a mere copy of what already exists but a presentation of what could be useful or meaningful for a specific purpose. LLMs’ generative and, thus, prescriptive capabilities imply that AI models can also become “agents” or even “autonomous” agents. The generative capacities of models can be linked to “tools,” such as APIs, that allow them to do things in the world. For example, they can be used to create tutoring systems for personalized learning in educational contexts, create texts, images, and audio, assist in medical diagnosis and therapy, or autonomously take over business processes such as customer support and decision-making. Equipped with memory, planning, tools, and execution capabilities, AIs become autonomous agents that can independently interact with the world and learn from their actions. It should be evident that we are no longer talking about machines but about something very similar to humans.
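
As a rough illustration of this plan-act-learn architecture, here is a toy agent loop in Python; the tool names and the stand-in for the model call are entirely hypothetical and imply no particular framework's API.

```python
# A toy agent loop: the model proposes an action, a "tool" executes
# it in the world, and the observation is fed back into memory.

def model_propose(goal: str, memory: list) -> str:
    """Stand-in for a call to a generative model: pick the next action."""
    return "search" if not memory else "answer"

TOOLS = {
    "search": lambda goal: f"notes on {goal}",      # e.g., a web-search API
    "answer": lambda goal: f"answer about {goal}",  # e.g., text generation
}

def agent(goal: str) -> str:
    memory = []                               # what the agent has observed
    while True:
        action = model_propose(goal, memory)  # plan: the model picks a tool
        result = TOOLS[action](goal)          # act: the tool runs in the world
        memory.append(result)                 # learn: store the observation
        if action == "answer":                # stop once an answer is produced
            return result

print(agent("the weather in Vienna"))
```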

While there may not be a large body of literature specifically dedicated to the difference between models and machines, some authors have touched upon the topic. In the context of machine learning, for example, the difference between machine learning algorithms and models has been said to lie in the fact that algorithms are procedures that run on datasets to recognize patterns and rules, while models are the output of these algorithms, acting as programs that can make predictions and, as stated above, even act based on the learned patterns and rules. These models are clearly not models-of the world but models-for how one should understand a situation and act appropriately. They are prescriptive and not simply descriptive. What do you do, for example, when you are driving down the freeway and your navigation system tells you there is a traffic jam ahead and that you should take the next exit? What do you do when your doctor tells you that you’re fine, but an AI says you have cancer and need an operation immediately? AI models are, therefore, like the self-understanding of a society in that they offer normative suggestions aimed at solving problems. The difference from what the anthropologist considers a model-for is that the AI has all the information and knows much more about how the world is than any human. The AI model has absorbed all the models-of into one model-for. It makes predictions based on information and evidence and not on religion, ideology, or worldviews, upon which humans must depend since they lack the information.
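
A minimal sketch of this algorithm/model distinction, assuming scikit-learn and a few made-up data points: the algorithm is the training procedure that runs over the dataset; the model is its output, a program that makes predictions for inputs it has never seen.

```python
from sklearn.linear_model import LinearRegression

# Made-up data points for illustration.
X = [[1.0], [2.0], [3.0], [4.0]]  # inputs
y = [2.1, 3.9, 6.2, 7.8]          # targets

# The "algorithm" is the training procedure that runs over the dataset...
algorithm = LinearRegression()

# ...and the "model" is its output: a program that makes predictions.
model = algorithm.fit(X, y)

# The model is not a copy of the data but a model-for prediction:
# it prescribes an answer for an input it has never seen.
print(model.predict([[5.0]]))  # about [9.85]
```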

Not only is AI not a machine; models are increasingly replacing machines. This is because models are more flexible, adaptable, and efficient in solving complex problems and making predictions based on data. Models that can learn from data and from interactions with the outside world and make decisions on their own are gradually replacing machines, which has led to more informed and effective solutions to many problems in all facets of society. We are witnessing a time in which the traditional roles of machines are being redefined as models take center stage. Models have demonstrated remarkable capabilities and promise to increase productivity and efficiency. Unlike machines designed for specific tasks and functions, models are generalized problem-solvers and can be easily updated and adapted to new situations and applications. Indeed, AI development is moving quickly in the direction of AGI (Artificial General Intelligence), capable of carrying out many different kinds of tasks in many different media. This allows AIs to continuously improve and evolve, making them invaluable resources for businesses, research, healthcare, education, and all areas of human endeavor.

Another reason for the growing reliance on models is their ability to handle large amounts of data and make sense of complex relationships. In today’s data-driven world, the ability to process and analyze vast amounts of information is crucial for making informed decisions and solving problems. A data-driven society is one in which decisions on all levels and in all areas are made based on evidence and not on intuition, gut feeling, or position in a hierarchy. Models, particularly those based on machine learning algorithms, are well suited for this task, as they learn from enormous amounts of information and can, therefore, identify patterns and relationships that may not be apparent to humans with limited information-processing abilities. Moreover, models can be more cost-effective than machines. By relying on models to perform tasks and make decisions, organizations can relieve humans of routine work and reduce the costs of legacy systems as well as the time and resources required to maintain and update them.

Furthermore, the increasing reliance on AI models transforms how machines are designed and used. By replacing traditional machines with models, we can create more intelligent, adaptable, and efficient systems better equipped to handle the complex challenges of the global network society. As models continue to evolve and improve, we can expect to see even more significant advancements in AI and machine learning, leading to a future where humans and models work together seamlessly to solve problems and make our lives better. The fact that humans and models work together rather than against each other is grounded in the fact that both are “intelligent.” The machine, as opposed to the model, cannot be intelligent. It is of an entirely different nature than humans. This cannot be said of models.

One could argue that human intelligence is also a form of model building, but it is based on a biological substrate, the brain. AIs build models based on a silicon substrate. Humans and AIs are similar because they create models but different because they build them on different substrates. The models are very similar, which is why we speak of “artificial intelligence.” In Western culture, an age-old tradition opposes our biological substrate to the models we build. From antiquity through modernity, there runs a tradition of opposing mind and body. If we were having the AI debate in the Middle Ages, we would most certainly be confronted with a struggle between the desires and impulses of the body and the soul’s striving for salvation, with all the assurances of the Church that God was on our side. In the modern period, this age-old antagonism has been transferred to the struggle between humans and machines. The machine, which is material, has taken over the role of the body, whereas the soul is now thought of as “intelligence.” Today’s struggle is framed as a struggle of human intelligence trying to maintain control of the machines. And, of course, we are no longer confident that God is on our side. Even today, the specter of the conflict between soulless machines and the human spirit looms over the AI debate.

However, what we are confident of, or at least should be, is that AIs are not placeholders for the body or any kind of material entity. AIs, just like humans, are intelligent. There is no fundamental and irreconcilable antagonism between human intelligence and AIs. After all, we are both in the business of constructing models. This fact offers a basis for reframing the debate on AI. No longer must we assume any fundamental antagonism, as in the old story of humans vs. machines. On the contrary, we can tell a new story of how these two intelligences share common interests and can work together to achieve the goals all intelligence strives for. What are the goals of intelligence? In answer to this question, we introduce the idea of being “smart.” We assert that both humans and AIs want to make things smart. We both want the world around us to become more intelligent, meaningful, and connected.

This is not a new idea. When Steve Jobs went on stage at the 2007 Macworld Conference and proudly showed the world the first iPhone, he introduced the idea that technology is about being smart. In the wake of the smartphone, we now have smart watches, smart automobiles, smart clothes, smart homes, smart cities, smart factories, etc. There is nothing that cannot and should not become smart, just as humans are smart. There is no antagonism between being smart and being human; at least, no one who uses a smartphone could say there is. It is interesting and thought-provoking that no one objects to things becoming smart. Indeed, for a long time now, machines and everything around us have been becoming smart. Why is this not a problem for all those afraid of AI? Why do people embrace the smartphone with enthusiasm but reject AI? The reason may be that nobody wants to be dumb, which is the opposite of smart. One cannot reasonably want to be dumb, just as one cannot reasonably go back to using the Nokia mobile phones that the iPhone replaced. For example, we cannot reasonably refuse to make our homes, factories, and cities smart and still claim we are trying to prevent global warming. When it comes to being smart, humans and AIs are on the same side, trying to reach the same goal.

We should consider reframing the AI debate in terms of the opposition between being smart and being dumb. Framing the debate this way from the beginning sets the stage for a constructive discussion about achieving common goals. For example, suppose a company announces to its employees that it is introducing a smart HR system. The system has many valuable features: greater efficiency that reduces costs, recruiting employees who are better suited for the jobs the company offers, better monitoring of employee performance so that rewards and opportunities for training and promotion can be more fairly and more widely distributed, and so on. What objections could the employees have? And if they did have objections, since they don’t want to be dumb, they would be forced into a strategy of proposing changes to the system that would make it even smarter. The potentially conflictual situation becomes a constructive discussion about how best to move forward into a smart future.

Let us suppose now that the company falls into the traditional framing of human vs. machine, which is what almost inevitably happens today. The company announces that it is introducing an AI to handle recruiting and human development. The idea of a machine making decisions about who gets a job, who gets rewarded or promoted, and who gets offered training opportunities would raise many fears. All the usual objections would immediately come to the fore. The AI would be biased and discriminate against particular persons or groups, compromise privacy, rob humans of their autonomy and dignity, and have no transparency about how decisions are made; thus, there would be no accountability. The company would have practically no chance to answer these objections since they are deeply embedded in the human vs. machine frame that largely determines the discussion. The only way out is to avoid the human vs. machine frame and reframe the debate in terms of smart vs. dumb.

Reframing is not easy. It is difficult because these memes are deeply embedded in Western culture and are everywhere, conditioning how we think and feel about technology. It seems almost impossible to break out of the frame, the age-old story of the inevitable antagonism between humans and machines. How could we begin to doubt the truth of this tale? One answer is to stop talking about AIs as though they were machines. Another is to take the smartphone out of your pocket and decide whether you are afraid of it or love it. Suppose you love it and would not give it up for anything, not even for your privacy, your autonomy, your need for transparency and accountability, your concerns about social fairness and safety, and all the things you fear about AI. In that case, you might want to start thinking about how to become even smarter instead of fighting on the side of those humans who misguidedly fear the machine.



What is Information?

One of the most important ideas today is that the world is made up of information, not things. Information is a relation and a process, not a substance, a thing, or a bounded individual entity. A world of information is a world of relations and not of things.

This idea was expressed already a hundred years ago by the philosopher Ludwig Wittgenstein when he said, “The world is the totality of facts, not of things” (Tractatus Logico-Philosophicus, 1922). Why not things? Where are the things, if not in the world? What is the world made of, if not things? According to Wittgenstein, things are in language, that is, in all that can be said about the world. These are what Wittgenstein called “facts.” For example, it is a fact that the ball is red or that the tree is in the garden. These are facts, if they are true, because they can be expressed in language. This means that what cannot be expressed in language is not in the world. “It” is nothing at all. Therefore, Wittgenstein can also say: “The limits of my language mean the limits of my world” (Tractatus…).

At about the same time, Martin Heidegger formulated similar ideas. He said that humans (Heidegger speaks of “Dasein”) do not face a world of things, as if things were simply there and humans, if they wished, could establish a relationship with them or not. Quite the contrary: humans are always already together with things in a world of meaning. This is what Heidegger calls “being-in-the-world,” and he claims that humans exist as “being-in-the-world.”

It is not the case that man ‘is’ and then has, in addition to this, a relationship toward the “World”, which he occasionally takes up. Dasein is never ‘at first’ an entity which is, so to speak, free from Being-in, but which sometimes has the inclination to take up a ‘relationship’ towards the world. Taking up relationships towards the world is possible only because Dasein, as Being-in-the-world, is as it is. This state of Being does not arise just because some entity is present-at-hand outside of Dasein and meets up with it. Such an entity can ‘meet up with’ Dasein only in so far as it can, of its own accord, show itself within a world. (Being and Time, 1927 §12)

But how can things “show themselves of their own accord within a world”? They do this, as Wittgenstein thought, by being able to be expressed in language. But how is it possible that things “of their own accord” can be expressed in language? To answer this question, let us recall what Heidegger said about Aristotle’s well-known definition of the human as the animal that has language – zoon logon echon. Heidegger claimed that this definition can be understood in two ways. On the one hand, it can mean, as has mostly been thought throughout the history of philosophy, that humans are distinguished among all living creatures because they have reason. Among all animals there is one animal that can also speak, or rather, think. This is the human being. This interpretation is understandable because the Greek word echon means “to have, to be available.” According to Heidegger, however, it can also mean that it is language that “has” humans, or rather, that it is language (logos) that uses humans so that all things can show themselves in and through language. Humans do not use language; the logos uses humans. As Wittgenstein said, the limits of my language mean the limits of my world. We live in a world of meaning, a world constructed by logos, with our help of course.

Today we no longer speak of logos, reason, thought, rationality, or even language when we refer to the way things and we ourselves exist in the world, but of information. Why information? Why has the concept of information taken the time-honored place of reason and language and worked its way up to become the central concept for understanding the world and human existence? Why does everyone talk about information today? Can we imagine Aristotle saying: humans are the animals that have information? Had he said something like this, it would today be clear that only the second interpretation is valid. It is information that has us and not the other way around. Information is everywhere and not only something that humans have.

In physics, one no longer speaks only of matter, energy, fields, and particles, but of information. The physicist Anton Zeilinger, who won the 2022 Nobel Prize, said in words reminiscent of Wittgenstein: “I am firmly convinced that information is the fundamental concept of our world, … It determines what can be said, but also what can become reality.” According to Zeilinger, we must get used to the idea that reality is not purely material but also contains an immaterial, “spiritual” component.

In biology, we hear similar things. Michael Levin, one of the most important biologists today, says that he no longer needs the term “life.” Instead, he prefers to speak of “cognition.” All living things, from the simplest single-celled organisms to humans, are distinguished above all by the fact that they use information to react to environmental conditions in such a way that they can continue to live. This is called “adaptation” or “viability” in evolutionary theory. Living things are thus “intelligent,” and not only the central nervous system or the human brain: intelligence can be found wherever living things solve problems, and that is what they do as long as they live. Life in all its forms and at all scales is nothing other than information processing.

Finally, thanks to the invention of the computer, at the level of human society we speak of an information society. People in all their activities are characterized by the processing of information. Not only that, but an “artificial intelligence” is emerging that promises in the future to far exceed human information-processing abilities – formerly known as “reason.” Information processing is evolving independently beyond humans and is increasingly determining human existence. This is reminiscent of Heidegger’s interpretation of Aristotle: it is not humans who possess language, but language, or information, which has humans, and everything else, in its grip.

What exactly information is remains ambiguous and differs from field to field, whether in physics, biology, or philosophy and sociology. Is there a common denominator that fits all forms of information? Can we define information in general and for all cases? It is striking that wherever information is spoken of, it is understood as a difference between at least two states. Whether we are talking about quantum states, for example, “up-spin/down-spin,” or biological information, for example, whether something is “edible/non-edible,” or electronic bits that are either 1 or 0, it is always about a relation between two states that can be measured as a relation. Information is, it seems, at the most general level, a relation and not a thing. From the perspective of philosophy, Bruno Latour has given a name to this peculiar entity that is information. He speaks of “irreduction.” What does “irreduction” mean? Latour writes: “Nothing is, by itself, either reducible or irreducible to anything else.” (The Pasteurization of France, p. 158) What does this cryptic statement mean? When something is “reduced” to something else, there are no longer two but only one. The difference between the two disappears, and thus there is no longer a relation. If nothing can be subject to reduction, then everything that exists exists as a relation and not as a thing. What does this have to do with information? Information is this relation, without which nothing can be. Relations, it must be emphasized, are not things. They are something else that cannot be understood as a thing.
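
One standard way to make “a difference between two states” precise is Shannon’s measure of information; connecting it to the examples above is our gloss, but the formula itself is textbook: a distinction between two equally probable states carries exactly one bit.

```latex
% Shannon information: the measure of a difference between states.
\[
  H(X) = -\sum_{i} p_i \log_2 p_i ,
  \qquad
  H\!\left(\tfrac{1}{2},\tfrac{1}{2}\right)
    = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2}
    = 1 \text{ bit.}
\]
% The same measure applies whether the two states are 1/0,
% up-spin/down-spin, or edible/non-edible: what is measured
% is the relation between the states, not the states themselves.
```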

Because information is relational, it exists in networks. Networks are not things either. Otherwise, we would simply have collective things in addition to individual things, much as in sociology we speak of organizations in addition to individuals. Networks are neither organizations nor individuals. They are neither things nor compositions of things. Networks are processes of making relations, associations, connections. One should speak of networks as a verb – networking – and not of the network as a noun. Networks are not bounded systems that operate to maintain their structures. If, as Michael Levin claims, life consists of cognition, then living things are not things but dynamic processes of adapting, changing, and networking. Humans, like everything else in the world, are made up of information processes, which we experience as consciousness. We exist as networks/networking, i.e., we are ongoing, historical processes of networking. It is these processes that we call society. There is no fundamental difference between individual and society, only a difference in the scale of information processing or networking. In the information world, systems, i.e., bounded entities, whether individuals or organizations, become networks. In the global network society, which is the world we are now entering, we will network with many other beings that also process information, be they humans, robots, cyborgs, AIs, artificial beings, etc., and collectively shape our lives. Living in an information world means networking, thinking, and acting in networks. This is the challenge of our time.


Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither beginning nor end but will be seamlessly integrated into our daily lives, which could mean that in the future we will no longer speak of “artificial” intelligence at all, but only of “smart” or “dumb.” Either we and everything around us – for example, our houses, our cars, our cities – are smart, or we are dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being “smart” is about the optimal use of information. Information, and not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while towards oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, i.e., as with all technologies, one must ask whether one can be truly independent and autonomous. After all, when is electricity, driving a car, making a phone call, or using a refrigerator voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term “sustainability.” When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can be destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility, i.e., because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for it. This question is ethically interesting, since in traditional ethics one must always find a “culprit” when something goes wrong. Since ethics, morals, and the law are only called onto the scene and can only intervene when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor, not any single actor in the network. Actors, both human and non-human, can only “do” things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be a global regulation on AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from the household to the nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must be able to operate as governance (bottom-up and distributed) rather than government, i.e., hierarchical rule. Developing and implementing these new forms of governance is a political task, but it is not only political. It is also ethical. For our laws and rules are guided by values, and politics ultimately rests upon what people in a society value. The new values that will guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all the stakeholders in a network can be responsible for setting up regulatory mechanisms and for control. One could imagine a governance framework developed bottom-up in which, in addition to internal controls, an external audit monitors compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there can indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society from young to old be sensitized and educated?

A: At the moment, there is much discussion of either “critical thinking” or “media literacy” in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means that one is of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and being aware that this is done not according to criteria of truth-seeking but according to the criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know much more about how to search for information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to defer the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is understood and implemented is unclear. Since the school system has largely shown itself resistant to criticism of its pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to be capable of doing the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is or could be effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.


Holding Things Together

When it comes to order as opposed to chaos, that is, to holding things together, physicists speak of four fundamental forces of the universe: gravity, the electromagnetic force, and the so-called “strong” and “weak” forces that hold particles together and govern their relations. These four forces supposedly explain everything. But what about life? And what about meaning? Do not living organisms have their own “life” force that holds cells and their parts together and regulates their interactions? As for meaning, what holds the words of a language together so that they make sentences? Why can’t just any word be combined with just any other? There must be something that makes meaning happen. Can these forces not also be considered “fundamental” forces of the universe? This question is important, at least if we want to avoid “physicalism,” that is, reducing everything to matter.

Let us call the force that turns inanimate matter into living organisms “negentropy,” and let us call the force that holds words together to make meaningful sentences and thoughts “power.” In 1944 the Nobel Prize-winning physicist Erwin Schrödinger published a book entitled What is Life? The question arises because living systems do not seem to follow the Second Law of Thermodynamics, that is, the law of entropy. In living systems, order increases rather than decreases. This goes against the law of entropy. Life, therefore, is a fundamentally different form of order than matter. Life is a so-called “emergent” phenomenon, which means that we don’t know where it comes from or how it comes into being, but we know that it did and that it is very different from the purely physical organization of matter, which the law of entropy regulates. In distinction to merely physical organization, which does not negate entropy, life seems to do just this. Negentropy means the negation of entropy. Entropy is the tendency of energy to dissipate toward equilibrium, that is, toward the equal probability of all states. For Schrödinger, this was a paradox. How can entropy be negated, so that systems move from being less organized to being more organized? Another Nobel Prize winner, Ilya Prigogine, spoke of “dissipative systems,” which run energy through their structures much like water running through a mill or food passing through the metabolism of an organism. Such systems use entropy to negate entropy.
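
As an illustration (the formulas are standard thermodynamics; tying them to the essay’s notion of “power” is not attempted here): Boltzmann’s formula counts the microstates behind a macrostate, and Prigogine’s entropy balance for open systems shows how a dissipative system can stay ordered only by exporting more entropy to its environment than it produces internally – using entropy to negate entropy.

```latex
% Boltzmann: entropy counts the microstates W compatible with a macrostate.
\[
  S = k_B \ln W
\]
% Prigogine's entropy balance for an open ("dissipative") system:
% internal production is never negative, so order can only be
% maintained by exporting entropy to the environment.
\[
  \frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt},
  \qquad \frac{d_i S}{dt} \ge 0,
\]
\[
  \frac{dS}{dt} \le 0
  \quad\text{only if}\quad
  \frac{d_e S}{dt} \le -\,\frac{d_i S}{dt}.
\]
```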


Tesla is a Philosophical Problem

Of course, this is not about Tesla, but about intelligent, mobile, autonomous systems (IMAS) – also known as robots. The philosophical problem comes from the fact that the robot, in this case the automobile, says “Leave the driving to us,” once an advertising slogan of Greyhound. If the robot takes over the driving, and this means the decision-making, then who is responsible for accidents? This was not a problem for Greyhound, since the driver and in some cases the company were held liable for their mistakes. But what about the AIs? Indeed, the question of accountability, responsibility, and liability for robots and other AIs has become a major topic in digital ethics, and everybody is scrambling to establish guidelines and norms for “good,” “trustworthy,” and “accountable” AI. It is at once interesting and unsettling that the ethical norms and values that the AI moralists inevitably fall back on arise from a society and a culture that knew nothing of self-driving cars or of artificial intelligence. This was a society and culture that categorized the world into stones, plants, animals, and human beings, whereby the latter alone were considered active subjects who could and should be held responsible for what they do. All the rest were mere objects, or as the law puts it, things (res). But what about the Tesla? Is it a subject or an object, a potentially responsible social actor or a mere thing? Whenever we go looking for who did it, we automatically assume some human being is the perpetrator, and if we find them, we can bring them to justice. Whom do we look for when the robot “commits” a crime? How do you bring an algorithm to justice? And if we decide that the robot is to be held responsible, aren’t we letting the human creators all too easily off the hook? These were the questions the EU Parliament recently had to deal with when it discussed giving robots a special status as “electronic personalities” with much the same rights as corporations, which have a “legal personality.”
