Category Archives: Actor-Network Theory


Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither beginning nor end but will be seamlessly integrated into our daily lives, which could mean that in the future we will no longer speak of “artificial” intelligence at all, but only of “smart” and “dumb”: we and everything around us, for example our houses, our cars, and our cities, will either be smart or dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being “smart” is about the optimal use of information. Information, and not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while towards oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, that is, as with all technologies, one must ask whether one can ever be truly independent and autonomous. After all, when is electricity, driving a car, making a phone call, or using a refrigerator voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term “sustainability”. When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can become destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different one. The best candidate could be something like flexibility: because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a “culprit” when something goes wrong. Since ethics, morality, and the law are only called onto the scene, and can only intervene, when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor, not any single actor in the network. Actors, both human and non-human, can only “do” things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be a global regulation on AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from the household to the nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must be able to operate as governance (bottom-up and distributed) rather than government (hierarchical). To develop and implement these new forms of governance is a political task, but it is not only political. It is also ethical. For as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that will guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all the stakeholders in a network can be responsible for setting up regulatory mechanisms, and also for control. One could imagine a governance framework developed bottom-up, in which, in addition to internal controls, an external audit would monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there will indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like the one for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society from young to old be sensitized and educated?

A: At the moment, there is much discussion of either “critical thinking” or “media literacy” in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means that one is of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and being aware that this is done not according to the criteria of truth-seeking, but according to the criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know a lot more about how to search for information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to defer the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is understood and implemented is unclear. Since the school system has largely shown itself to be resistant to criticism in terms of pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to be capable of doing the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is or could be effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.


Tesla is a Philosophical Problem

Of course, this is not about Tesla, but about intelligent-mobile-autonomous systems (IMAS), also known as robots. The philosophical problem comes from the fact that the robot, in this case the automobile, says “Leave the driving to us,” which was once an advertising slogan for Greyhound Bus. If the robot takes over the driving, and this means the decision making, then who is responsible for accidents? This was not a problem for Greyhound, since the driver, and in some cases the company, was held liable for mistakes. But what about the AIs? Indeed, the question of accountability, responsibility, and liability for robots and other AIs has become a major topic in digital ethics, and everybody is scrambling to establish guidelines and norms for “good,” “trustworthy,” and “accountable” AI. It is at once interesting and unsettling that the ethical norms and values that the AI moralists inevitably fall back on arise from a society and a culture that knew nothing of self-driving cars or of artificial intelligence. This was a society and culture that categorized the world into stones, plants, animals, and human beings, whereby the latter alone were considered active subjects who could and should be held responsible for what they do. All the rest were mere objects, or as the law puts it, things (res). But what about the Tesla? Is it a subject or an object, a potentially responsible social actor or a mere thing? Whenever we go looking for who did it, we automatically assume some human being is the perpetrator, and if we find them, we can bring them to justice. Who do we look for when the robot “commits” a crime? How do you bring an algorithm to justice? And if we decide that the robot is to be held responsible, aren’t we letting the human creators all too easily off the hook? These were the questions the EU Parliament recently had to deal with when it discussed giving robots a special status as “electronic personalities” with much the same rights as corporations, which have a “legal personality.”



Being smart or what I can learn from my iPhone

Everyone is talking about smart. Everything is, or is becoming, smart. It started with smart phones. Suddenly, a familiar object that everyone used became not only functional, as all technologies in some way are, but also smart. After the smart phones came the smart watches, smart jewelry, and even smart clothes. The trend did not stop at apparel and accessories: appliances such as smart refrigerators, smart cooking stoves, and smart vacuum cleaners invaded the home. Indeed, the entire house is becoming smart. And if entire houses can be smart, why not entire cities? Finally, the Internet of Things is ushering in a 4th industrial revolution, extending smartness to everything, including not only cities, but smart factories, smart logistics, smart energy, and so on. It would seem that being smart is becoming an important qualification for being itself. Existence today, and probably even more so in the future, depends on being smart, and what is not smart or cannot become smart will have no place in the world. This trend should not only raise hopes for a better future, but also raise some basic questions about what it is that we are calling smart. What does “smart” mean?

The adjective smart is usually applied to people who are considered clever, bright, intelligent, sharp-witted, shrewd, able, etc. It is interesting that we would hardly think of things in this way. This implies that smart technologies are changing the definition of what it means to be smart. If everything around us is becoming smart, then these things are smart in a different way than the one we traditionally ascribe to human beings. My iPhone is not quick-witted, shrewd, or astute, but it does have qualities that demand to be called smart. What makes smart technologies smart?

This is not an idle question, because when our homes, our places of work, our communication and transportation networks, and much more are all smart in a certain way, we humans will find that we are not the ones defining what it means to be smart. We will find ourselves needing to adapt to how the world around us is smart in order to become and remain smart ourselves. Floridi (The Fourth Revolution) speaks of a 4th revolution in which humans must learn to share the attribute of intelligence with machines, recognize themselves as “inforgs,” informational beings, and acknowledge that the world has become an “infosphere.” In a smart world, humans are no longer the only ones in possession of intelligence, and they are not the only ones who can say what intelligence means. Instead, we are part of an all-encompassing “socio-technical ensemble” that as a whole determines what it means to be smart. If we want to find out what smart means, then we have to take a step back from the mirror of Cartesian reflection and look at the whole socio-technical network. As actor-network theory puts it, the network is the actor.



Being Meaning

Let’s face it, being is meaning. That is, of course, unless you know about something that has no meaning. If you do, please tell us. Remember, however, as Frank Ramsey once told Wittgenstein, you can’t whistle it either. So as long as we are talking we are in the realm of meaning, and that’s it. There is nothing else. And if there were, it would be inside the realm of meaning. The outside is paradoxically inside. We draw the borders, we make the exclusions, it is we who put things outside. The problem here is the “we”. Who are we who make meaning? Are we those Homo sapiens with the big brains, the heroes of radical constructivism? If so, then why would our otherwise selfish, inconsiderate, and destructive species be so generous as to make everything else in the world? Not to mention the sheer unimaginable diversity and creativity of things. Do we really think we make meaning? If not, then who? God has been the best answer to this question for ages. But here again there are so many Gods that it is difficult to understand how they manage to cooperate, especially since they seem not to want to have anything to do with each other. So the God answer is not very satisfying if you look at the big picture and not merely your own garden.

The next best answer would appear to be that meaning makes itself. After all, nothing comes from nothing. Self-organization, autocatalysis, spontaneous emergence; take your pick. These seem to be the best answers we have. But then we should admit that “we” is no longer our humble species, but “everything.” Everything has a “voice” of its own and contributes to the “collective.” Everything is involved in making meaning. This is what it “means” to exist. We human beings may play an important role in this process, but maybe not as important as we think. In any case, we have a certain “responsibility.” We are obliged to “respond” to the many voices, claims, interventions, and disturbances that things produce in their efforts to come to be. This could be thought of as the “moral responsibility” of being human: to respond not only to other people, but to all things as well. If language were indeed a gift, then responsibility in this sense would amount to acknowledging the gift and showing some kind of thankfulness.

Heidegger pointed out the close associations between “thinking,” “thing,” and “thanking.” He referred to this interdependency as “gathering”: gathering things together into one world, that is, allowing (and helping) everything to have its voice, its “say” in what the world means. Latour has formulated this moral responsibility in terms of the institution of a “parliament of things” and proposed a new constitution for the Anthropocene in which humans and nonhumans together share responsibility for gathering the “collective.” In an age when the anthropos is seen as the dominating factor, it is perhaps appropriate that humans accept responsibility and no longer push it off onto God. This seems to make ecology into the most atheistic of all sciences. But no one said the Gods have to be excluded; after all, they also have something to say, each in their own way. So let’s face it, being is meaning, but meaning is not yours or mine or anybody’s; meaning belongs to everything. Indeed, it is the expression of belonging, the belonging together of one world.


Personal Informatics and Design

Design discourse is admittedly mostly technical in the sense of focusing on product development, marketing, and business planning. Nonetheless, there is a deeper and, for the social scientist, more interesting background for questions relating to design. At stake is fundamentally a techné of the self in the sense of Foucault’s ethics and Heidegger’s interpretation of technology as poiesis. In a well-known book entitled Sciences of the Artificial, Herbert Simon developed a concept of design that can be traced back to Greek techné and applied to Foucault’s technology of the self as ethics. For Simon (1996):

“Engineers are not the only professional designers. Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. The intellectual activity that produces material artifacts is no different fundamentally from the one that prescribes remedies for a sick patient or the one that devises a new sales plan for a company or a social welfare policy for a state. Design, so construed, is the core of all professional training…. Schools of engineering, as well as schools of architecture, business, education, law, and medicine, are all centrally concerned with the process of design.” (111)

Bruno Latour would agree with this and add that the concept of design today “has been extended from the details of daily objects to cities, landscapes, nations, cultures, bodies, genes, and … to nature itself…” (Latour 2008: 2). Furthermore, this extension of the idea of design to all aspects of reality means that the concept of “design” has become “a clear substitute for revolution and modernization” (5), those two ideals that have led Modernity into an inescapable responsibility for planetary ecology. Finally, for Latour “the decisive advantage of the concept of design is that it necessarily involves an ethical dimension which is tied into the obvious question of good versus bad design” (5). The ethical dimension that Latour finds at the heart of design joins Foucault’s idea of an ethical technology of the self, for “humans have to be artificially made and remade” (10). Understanding self-knowledge as an ethical and technical (in the sense of techné) task of design should not lead us into post-humanist speculations and the discussion of cyborgs. Instead, that which makes design both ethically good and aesthetically beautiful is its ability to take into account as many different aspects of what something is and can become, to respect all the different claims that can be made on someone or something, to ensure that nothing important is overlooked, and to allow for surprises and the unexpected. To design something well, including oneself, in the functional, ethical, and aesthetic dimensions is to take account of as much information as one can in the process of constructing. Latour proposes that networking, that is, the techné of constructing actor-networks, should be understood as design. This means that design is a “means for drawing things together – gods, non-humans, and mortals included” (13).

