
The AI Disruption – Coping Strategies for Individuals and Societies

The typical framing of AI disruption discourse as a technical problem—how to make AI systems safe, controllable, or value-compliant—overlooks the fact that AI is primarily a societal and cultural challenge that requires new forms of social organization, governance, responsibility, and human self-understanding.

It is important to emphasize that AI is not a bounded, individual “system” that can be dealt with in isolation from society. Instead, AI must be understood as a socio-technical network—a dynamic constellation of humans, non-humans, institutions, regulations, economic incentives, data infrastructures, algorithms, and much more. This conceptual shift has profound implications for how individuals and societies can respond to AI-induced disruption.

For societies, the most important coping strategy is abandoning the illusion of technical containment. Just as automobiles cannot be blamed in isolation for traffic deaths, pollution, or urban sprawl, AI cannot be held solely responsible for social harm or benefit. Responsibility is distributed across designers, deployers, users, regulators, markets, and diverse cultural expectations.

This implies that societies must:

  • Develop collective responsibility frameworks rather than scapegoating AI.
  • Treat AI governance as an ongoing institutional practice, not a one-time regulatory fix.
  • Accept that AI disruption reflects pre-existing social conflicts, inequalities, and power asymmetries rather than creating them ex nihilo.

For individuals, this means:

  • Admitting that AI is not a mere tool, or an object opposed to human subjectivity, but a social partner.
  • Recognizing that AI is not an external force acting upon society but something in which both humans and non-humans are already entangled—as users, data sources, workers, citizens, and decision-makers.
  • Realizing that coping thus involves understanding one’s own role in AI networks rather than imagining oneself as a passive victim or sovereign controller.

In light of the above assumptions, there are three levels of coping, each requiring different strategies.

(1) Technical safety and robustness

At this level, AI is still treated as a tool, as one technology among others. Societal coping involves engineering safeguards, testing, verification, and reliability standards. While necessary, this level is insufficient on its own. Safety measures cannot address misuse, power concentration, or unintended systemic effects, nor can they address cultural transformation.

(2) Prevention of misuse

The assumption at this level is that disruption arises from human actors using AI for harmful purposes—economic exploitation, surveillance, manipulation, crime, or terrorism. Coping requires institutional oversight, legal accountability, and political coordination, especially at transnational levels. Individuals cannot shoulder this burden alone; democratic societies must not only strengthen but also reconceptualize regulatory measures.

(3) Social integration of AI

Once AI becomes an autonomous or semi-autonomous actor, societies face not a tool problem but a coexistence problem. Disruption now affects foundational concepts: responsibility, agency, accountability, labor, autonomy, self-determination, and even the meaning of intelligence itself. Coping means that societies must prepare for a post-human world not by attempting to retain humanist values and asserting human dominance over AI, but by learning how to integrate non-human actors into a new form of social order. It must be admitted that traditional concepts such as fairness, justice, dignity, or freedom are vague and context-dependent, culturally pluralistic, historically and socially contested, and inapplicable to a post-humanist, global network society.  

On the other hand, moral consensus cannot be outsourced to AI and encoded in algorithms.  Attempts to encode “the good” risk freezing contested norms, amplifying dominant interests, or creating brittle systems that fail under novel conditions. Rather than demanding that AI embody final moral truths, societies must develop procedural mechanisms that allow norms to be negotiated, revised, and contested over time. Not substantive values but procedural values should guide coping strategies. Instead of attempting to define what AI should aim for, societies should define how socio-technical networks ought to operate.

This approach mirrors democratic constitutionalism in that the legitimacy of socio-technical networks derives not from outcomes but from processes. Such procedural values could be:

  • Taking account of all affected actors, prioritizing risk analysis, and preventing tunnel vision and catastrophic oversimplification.
  • Producing stakeholders rather than victims or perpetrators, thus enabling participation rather than passive subjection.
  • Prioritizing and instituting bottom-up governance frameworks in transparent, revisable ways rather than through top-down, inflexible government regulation.
  • Balancing local and global concerns, acknowledging scalability without erasing contextual specificity.
  • Separating powers, preventing concentrations or asymmetries of informational, economic, or political control.

For societies, this translates into governance architectures that are adaptive, pluralistic, and reflexive. For individuals, it implies participation, contestation, and literacy rather than blind trust or rejection.

Given the impending post-labor economy, it is to be expected that AI will initially exacerbate existing power asymmetries, purchasing productivity gains at the cost of mass unemployment, weakened labor bargaining power, and extreme capital concentration. Coping strategies in this domain could be:

  • Rethinking the idea of the market as the fundamental mechanism of the material reproduction of society and designing new productive and distributive mechanisms.
  • Rethinking the relationship between labor, income, social participation, and identity. Human existence and self-understanding need not be defined by labor, as they have been for most people over the last 5,000 years.
  • Institutional experimentation beyond closed systems to open networks in organizations in all areas of society, as well as in politics.

We do not need a new “enlightenment” to regain human autonomy from the dominance of functional systems, as the European Enlightenment once freed the individual from feudal and clerical domination.

We need to shift from fantasies of control to situated agency and cooperative integration in complex socio-technical networks. Coping with AI disruption does not mean understanding every algorithm, but recognizing one’s role as a network participant, demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures, and resisting anthropomorphic myths that obscure the constructive relations among humans and non-humans.

AI disruption cannot be “solved” in the traditional sense. Coping with AI means learning to live with non-humans as social partners, distributed agency, and post-human, network norms. Societies must replace the dream of control, autonomy, and individuality with social practices of ongoing integration grounded in procedural governance and collective responsibility. In this view, the AI future becomes less of a technical issue than a continuous social process, mirroring the open-ended nature of society itself.


What is Information?

One of the most important ideas today is the idea that the world is made up of information, not things. Information is a relation and a process and not a substance, a thing, an individual entity, or a bounded individual. A world of information is a world of relations and not of things.

This idea was already expressed a hundred years ago by the philosopher Ludwig Wittgenstein when he said, “The world is the totality of facts, not of things” (Tractatus Logico-Philosophicus, 1922). Why not things? Where are the things, if not in the world? What is the world made of, if not things? According to Wittgenstein, things are in language, that is, in all that can be said about the world. These are what Wittgenstein called “facts.” For example, it is a fact that the ball is red, or that the tree is in the garden. These are facts, if they are true, because they can be expressed in language. This means that what cannot be expressed in language is not in the world. “It” is nothing at all. Therefore, Wittgenstein can also say: “The limits of my language mean the limits of my world” (Tractatus…).

At about the same time, Martin Heidegger formulated similar ideas. He said that humans (Heidegger speaks of “Dasein”) do not face a world of things, as if things were simply there and humans, if they wanted, could establish a relationship with things or not. Quite the contrary, humans are always together with things in a world of meaning. This is what Heidegger calls “being-in-the-world,” and he claims that humans exist as “being-in-the-world.”

It is not the case that man ‘is’ and then has, in addition to this, a relationship toward the “World”, which he occasionally takes up. Dasein is never ‘at first’ an entity which is, so to speak, free from Being-in, but which sometimes has the inclination to take up a ‘relationship’ towards the world. Taking up relationships towards the world is possible only because Dasein, as Being-in-the-world, is as it is. This state of Being does not arise just because some entity is present-at-hand outside of Dasein and meets up with it. Such an entity can ‘meet up with’ Dasein only in so far as it can, of its own accord, show itself within a world. (Being and Time, 1927 §12)

But how can things “show themselves of their own accord within a world?” They do this, as Wittgenstein thought, by being able to be expressed in language. But how is it possible that things “of their own accord” can be expressed in language? In order to answer this question, let us recall what Heidegger said about Aristotle’s well-known definition of humans as that animal which has language – zoon logon echon. Heidegger claimed that this definition of the human being can be understood in two ways. On the one hand, it can mean, as has mostly been thought throughout the history of philosophy, that humans are distinguished among all living creatures because they have reason. Among all animals there is one animal that can also speak, or rather, think. This is the human being. This interpretation is understandable because the Greek word echon means “to have, to be available.” According to Heidegger, it can also mean that it is language that “has” humans, or rather, that it is language (logos) that uses humans such that all things can show themselves in and through language. Humans do not use language; the logos uses humans. As Wittgenstein said, the limits of my language mean the limits of my world. We live in a world of meaning, a world constructed by logos, with our help, of course.

Today we no longer speak of logos, reason, thought, rationality, or even language when we refer to the way things and we ourselves exist in the world, but of information. Why information? Why has the concept of information taken the time-honored place of reason and language and worked its way up to become the central concept for understanding the world and human existence? Why does everyone talk about information today? Can we imagine that Aristotle could have said: Humans are the animals that have information? Had he said something like this, it would be clear today that only the second interpretation is valid. It is information that has us and not the other way around. Information is everywhere and not only something that humans have.

In physics, one no longer speaks only about matter, energy, fields, and particles, but about information. The physicist Anton Zeilinger, who won the 2022 Nobel Prize in Physics, said, in words reminiscent of Wittgenstein: “I am firmly convinced that information is the fundamental concept of our world, … It determines what can be said, but also what can become reality.” According to Zeilinger, we must get used to the idea that reality is not purely material, but also contains an immaterial, “spiritual” component.

In biology, we hear similar things. Michael Levin, one of the most important biologists working today, says that he no longer needs the term “life.” Instead, he prefers to speak of “cognition.” All living things, from the simplest single-celled organisms to humans, are distinguished above all by the fact that they use information to react to environmental conditions in such a way that they can continue to live. This is called “adaptation” or “viability” in evolutionary theory. Living things are thus “intelligent”; intelligence is found not only in the central nervous system or the human brain, but wherever living things solve problems, and that is what they do as long as they live. Life in all its forms and at all scales is nothing other than information processing.

Finally, thanks to the invention of the computer, we speak at the level of human society of an information society. People in all their activities are characterized by the processing of information. Not only that, but an “artificial intelligence” is emerging that promises in the future to far exceed human information-processing abilities – formerly known as “reason.” Information processing is evolving independently beyond humans and is increasingly determining human existence. This is reminiscent of Heidegger’s interpretation of Aristotle: it is not humans who possess language, but language, or information, that has humans, and everything else, in its grip.

What exactly information is remains ambiguous and differs from field to field, whether in physics, biology, or philosophy and sociology. Is there a common denominator that fits all forms of information? Can we define information in general and for all cases? It is striking that wherever information is spoken of, it is understood as a difference between at least two states. Whether we are talking about quantum states, for example, “up-spin/down-spin,” or biological information, for example, whether something is “edible/non-edible,” or electronic bits that are either 1/0, it is always about a relation between two states that can be measured as a relation. Information is, it seems, at the most general level, a relation and not a thing. From the perspective of philosophy, Bruno Latour has given a name to this peculiar entity that is information. He speaks of “irreduction.” What does “irreduction” mean? Latour writes: “Nothing is, by itself, either reducible or irreducible to anything else” (The Pasteurization of France, 158). What does this cryptic statement mean? When something is “reduced” to something else, this means that there are no longer two, but only one. The difference between the two disappears and thus there is no longer a relation. If nothing can be subject to reduction, then everything that is exists as a relation and not as a thing. What does this have to do with information? Information is this relation, without which nothing can be. Relations, it must be emphasized, are not things. They are something else that cannot be understood as a thing.
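The common denominator proposed here – information as a measurable relation between at least two states – can be illustrated with a brief sketch using Shannon's classic entropy measure. This is an illustration of the general point, not part of the original argument; the function name and the probabilities are my own assumptions. A difference between two equally likely states carries exactly one bit, and when the two states are "reduced" to one, no information remains:

```python
import math

def shannon_entropy(probabilities):
    """Information, in bits, carried by a set of distinguishable
    states occurring with the given probabilities."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

# A relation between two equally likely states -- up-spin/down-spin,
# edible/non-edible, 1/0 -- carries exactly one bit of information.
print(shannon_entropy([0.5, 0.5]))  # 1.0

# "Reducing" the two states to one makes the difference, and with it
# the information, disappear.
print(shannon_entropy([1.0]))  # 0.0
```

Four equally likely states yield two bits, and so on: the bit, the minimal difference between 1 and 0, serves as the natural unit of this relational quantity.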

Because information is relational, it exists in networks. Networks are not things either. Otherwise, we would simply have collective things in addition to individual things, much as in sociology we speak of organizations in addition to individuals. Networks are neither organizations nor individuals. They are neither things nor compositions of things. Networks are processes of making relations, associations, connections. One should speak of networks as a verb – networking – and not of the network as a noun. Networks are not bounded systems which operate to maintain their structures. If, as Michael Levin claims, life consists of cognition, then living things are not things, but dynamic processes of adapting, changing, and networking. Humans, like everything else in the world, are made up of information processes, which we experience as consciousness. We exist as networks/networking, i.e., we are ongoing, historical processes of networking. It is these processes that we call society. There is no fundamental difference between individual and society, but only a difference in the scale of information processing or networking. In the information world, systems, i.e., bounded entities, whether individuals or organizations, become networks. In the global network society, which is the world we are now entering, we will network with many other beings that also process information, be it humans, robots, cyborgs, AIs, artificial beings, etc., and collectively shape our lives. Living in an information world means networking, thinking, and acting in networks. This is the challenge of our time.


Can Networks be Virtuous?

This is about ethics. Ethics tells us what we ought to do. It is based on the distinction between what we really do, the “is,” and what we should do, the “ought.” If everybody did what they should, then we wouldn’t need ethics. But let’s face it, people don’t do what they ought to do. Why not? Has ethics failed? Are people inherently immoral? And if so, what good does it do to keep telling them that they should do otherwise? Despite enormous efforts over centuries, ethics seems to be a futile enterprise divorced from reality. One answer to the apparent futility of ethics is to say that people do not do what they ought to do, but what they are. If people do the right thing, that’s not because of ethics, or because of being told what they ought to do. It’s because that is simply what they are. There is no “ought.” There is only what “is.” In other words, you shall know them by their actions – and not by their proclaimed or hidden motives. But what are people? What should we be reading from their actions?
