Tesla is a Philosophical Problem

Of course, this is not about Tesla, but about intelligent mobile autonomous systems (IMAS), also known as robots. The philosophical problem comes from the fact that the robot, in this case the automobile, says "Leave the driving to us," which was once an advertising slogan for Greyhound Bus. If the robot takes over the driving, and with it the decision making, then who is responsible for accidents? This was not a problem for Greyhound, since the driver, and in some cases the company, was held liable for mistakes. But what about the AIs? Indeed, the question of accountability, responsibility, and liability for robots and other AIs has become a major topic in digital ethics, and everybody is scrambling to establish guidelines and norms for "good," "trustworthy," and "accountable" AI. It is at once interesting and unsettling that the ethical norms and values that the AI moralists inevitably fall back on arise from a society and a culture that knew nothing of self-driving cars or of artificial intelligence. This was a society and culture that categorized the world into stones, plants, animals, and human beings, whereby the latter alone were considered active subjects who could and should be held responsible for what they do. All the rest were mere objects, or, as the law puts it, things (res).

But what about the Tesla? Is it a subject or an object, a potentially responsible social actor or a mere thing? Whenever we go looking for who did it, we automatically assume that some human being is the perpetrator, and if we find them, we can bring them to justice. Who do we look for when the robot "commits" a crime? How do you bring an algorithm to justice? And if we decide that the robot is to be held responsible, aren't we letting the human creators all too easily off the hook?
These were the questions that the EU Parliament recently had to deal with when it discussed giving robots a special status as "electronic personalities," with much the same rights as corporations, which have a "legal personality."

There is another side to this coin. Not only are machines becoming more human, but humans are becoming more integrated with technology. Instead of speaking of robots, we speak of "cyborgs." Cyborgs are technologically enhanced humans, enhanced to the point that many no longer speak of human beings at all, but of "transhumans." Here again, the question arises of whether such transhumans are to be treated as normal humans or as something entirely different, with different rights and responsibilities, and perhaps even subject to different regulations so that their superhuman abilities do not create social inequalities and new forms of discrimination.

When we flip the coin, no matter which side comes up, whether robot or cyborg, we are dealing with entities that do not neatly fit into the traditional ways we categorize the world. Instead of attempting to adjust these categories and create new entities alongside or in between the old ones, it might be useful to call our entire world view into question. This is the philosophical problem. The digital transformation reaches deep into our ontology and our understanding of what the world is made of. What would the world consist of, if not types of beings, where it doesn't really matter if there are only four or a few more added on? What would the world look like if it were not a sack full of different kinds of beings, but something entirely different? And what would this be? We could imagine a world that does not consist of individual entities, whether these be stones, plants, animals, humans, robots, or cyborgs, but of relations. A relation is not a thing and it is not dependent on things; that is, it must not be thought of as something that qualifies things or is an attribute of things. Of course, we normally speak of relations as something secondary to things. First there are things, and then, secondly, things enter into relations with each other. This implies that there can be things without relations. Indeed, the philosophical concept of substance implies exactly this: that things exist as individual entities, as things-in-themselves that may or may not enter into relations with other things. But what if it were the other way round, and first there are relations, and then out of relations come things?

The old philosophy of substance also determines our understanding of what it means to be a human being. Humans are primarily individuals pursuing their own benefit. When it seems beneficial to make a social contract to end the war of all against all, these individuals enter into society. Deciding upon what is beneficial is a characteristic that humans possess, called reason. As Descartes put it, humans are "thinking things" (res cogitans). Humans are not only substance but also subject. Everything else in the world is an object of knowledge; it is what humans think about.

Recently this deeply entrenched and honorable philosophical tradition has come into question. First of all, it turns out that everything cognitive is relational and not substantial. Thinking and knowledge consist of relations. A thought is like a word in a language. It is what it is and means what it means because of its relations to many other thoughts. We know what a cat is because we know how it relates to many other things such as dogs, trees, mice, people, etc. Indeed, meaning is nothing but relations. We do not stumble over ideas or meanings as we do over a stone on the path. Where do these relations come from? Non-Cartesian cognitive science shows that cognition is not something that thinking things, that is, brains, do, but is embodied, enacted, extended, and embedded beyond the brain into the environment. In other words, cognition arises on the basis of relations among the brain and many other things in the world. We can call this "distributed cognition." But if thinking is not the isolated ability of an individual human being, then why should decision-making and responsibility be ascribed to individuals alone? Free will has traditionally been considered a faculty of the individual mind. If cognition is distributed, then why not agency as well? The network is the actor, not any individual in the network. Distributed cognition and distributed agency raise the question of identity.
Who am I, after all, when "my" thinking and "my" actions are only possible on the basis of many complex relations that go far beyond my individual self? In our imagined world that consists of relations instead of things, the answer is that identity is also distributed among many different "actors." I like to think of myself, for example, as a teacher. This is not only my professional, but also my personal identity. But how can I be a teacher without students, without classrooms or schools, without textbooks, exams, grades, certifications, and so on? Take away all these things and the relations that constitute them, and I am no longer a teacher. So, what is our new world made of? What we have in our new world, instead of different kinds of things, are networks of relations that constitute actors, who together make up reality. The world consists of actor-networks and not things, not even thinking things.

Let us say that these networks, which are the result of distributed cognition, distributed agency, and distributed identity, are what is real, and that things, whether stones, plants, animals, humans, robots, or cyborgs, are conventions that we have come to agree upon in order to regulate our lives. Let us further claim that robots and cyborgs don't fit into the old world of things and can no longer be regulated by legal and moral rules that were designed to apply to supposedly free human subjects and what they do to passive objects. The arrival of robots and cyborgs in our world forces, or invites, us to imagine a different way of ordering the world. Who is responsible for my Tesla? Well, the entire network in which autonomous mobility functions and makes sense. What is "good" AI? Well, the entire network in which AI operates. This network includes not only the manufacturer, the users, and the possible victims, but many social, legal, cultural, and technological actors who together build a network that can be considered the actor both when things go well and when things go wrong. Instead of looking for the human or semi-human culprit when a crime has been committed or harm has been done, one should pay attention to the regulation, or better, the "governance" of the network. Instead of creating e-personalities to make machines responsible like humans, or guidelines and ethics to make humans more responsible than they usually are, why not drop Western individualism, the ontology of substance, and anthropocentrism altogether? Perhaps what the digital world needs is not more ethics, but new forms of governance based upon a networked reality.