Category Archives: Network Norms

the rules of networking

Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither beginning nor end but will be seamlessly integrated into our daily lives. This could mean that in the future we will no longer speak of “artificial” intelligence at all, but only of “smart” or “dumb”: we and everything around us, for example our houses, our cars, and our cities, will be either smart or dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being “smart” is about the optimal use of information. Information, and not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while towards oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, i.e., as with all technologies, one must ask whether one can be truly independent and autonomous. After all, when is electricity, driving a car, making a phone call, or using a refrigerator voluntary, and when is it mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term “sustainability”. When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can be destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility: because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a “culprit” when something goes wrong. Since ethics, morals, and the law are only called onto the scene and can only intervene when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor, not any single actor in the network. Actors, both human and non-human, can only “do” things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be a global regulation on AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from the household to the nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must operate as governance, i.e., bottom up and distributed, rather than as government, i.e., hierarchically. Developing and implementing these new forms of governance is a political task, but not only a political one. It is also ethical. For as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all stakeholders in a network can be responsible for setting up regulatory mechanisms and for controlling them. One could imagine a governance framework developed bottom up, in which, in addition to internal controls, an external audit would monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there can indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society from young to old be sensitized and educated?

A: At the moment, there is much discussion of either “critical thinking” or “media literacy” in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means being of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and being aware that this is done not according to criteria of truth-seeking, but according to the criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know much more about how to research information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to delegate the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is understood and implemented is unclear. Since the school system has largely shown itself to be resistant to criticism of its pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to be capable of doing the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is, or could be, effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.


Holding Things Together

When it comes to order as opposed to chaos, that is, to holding things together, physicists speak of four fundamental forces of the universe. There are gravity, the electromagnetic force, and the so-called “strong” and “weak” forces that hold particles together and govern their relations. These four forces supposedly explain everything. But what about life? And what about meaning? Do not living organisms have their own “life” force that holds cells and the parts of cells together and regulates their interactions? As for meaning, what holds the words of a language together so that they make sentences? Why can’t just any word be combined with just any other? There must be something that makes meaning happen. Can these forces not also be considered “fundamental” forces of the universe? This question is important, at least if we want to avoid “physicalism,” that is, reducing everything to matter.

Let us call the force that turns inanimate matter into living organisms “negentropy,” and let us call the force that holds words together to make meaningful sentences and thoughts “power.” In 1944 the Nobel Prize-winning physicist Erwin Schrödinger published a book entitled What is Life?. The question arises because living systems do not follow the Second Law of Thermodynamics, that is, the law of entropy. In living systems, order increases rather than decreases. This goes against the law of entropy. Life, therefore, is a fundamentally different form of order than matter. Life is a so-called “emergent” phenomenon, which means that we don’t know where it comes from or how it comes into being, but we know that it happened and that it is very different from the purely physical organization of matter which the law of entropy regulates. In distinction to merely physical organization, which does not negate entropy, life seems to do just that. Negentropy means the negation of entropy. Entropy is the tendency of energy to dissipate towards equilibrium, that is, towards the equal probability of all states. For Schrödinger, this was a paradox. How can entropy be negated, so that systems move from being less organized to being more organized? Another Nobel Prize winner, Ilya Prigogine, spoke of “dissipative systems,” which run energy through their structures much like water running through a mill or food going through the metabolism of organisms. Such systems use entropy to negate entropy.
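Prigogine’s idea can be stated compactly with the standard entropy balance for open systems. What follows is a minimal textbook-style sketch of that balance, not a formula taken from Schrödinger’s or Prigogine’s own texts:

$$ dS = d_iS + d_eS, \qquad d_iS \geq 0 $$

Here d_iS is the entropy produced inside the system, which the Second Law requires to be non-negative, and d_eS is the entropy exchanged with the environment. A dissipative system keeps d_eS negative by importing low-entropy energy (food, sunlight) and exporting high-entropy waste (heat). Whenever this export outweighs the internal production, the total dS is negative and the system becomes more ordered, while the entropy of system plus environment still increases. The Second Law is obeyed globally even as it appears to be negated locally.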


Tesla is a Philosophical Problem

Of course, this is not about Tesla, but about intelligent-mobile-autonomous systems (IMAS), also known as robots. The philosophical problem comes from the fact that the robot, in this case the automobile, says “Leave the driving to us,” which was once an advertising slogan of Greyhound Bus. If the robot takes over the driving, and this means the decision making, then who is responsible for accidents? This was not a problem for Greyhound, since the driver, and in some cases the company, was held liable for mistakes. But what about the AIs? Indeed, the question of accountability, responsibility, and liability for robots and other AIs has become a major topic in digital ethics, and everybody is scrambling to establish guidelines and norms for “good,” “trustworthy,” and “accountable” AI. It is at once interesting and unsettling that the ethical norms and values that the AI moralists inevitably fall back on arise from a society and a culture that knew nothing of self-driving cars or of artificial intelligence. This was a society and culture that categorized the world into stones, plants, animals, and human beings, whereby the latter alone were considered active subjects who could and should be held responsible for what they do. All the rest were mere objects, or as the law puts it, things (res). But what about the Tesla? Is it a subject or an object, a potentially responsible social actor or a mere thing? Whenever we go looking for who did it, we automatically assume that some human being is the perpetrator, and if we find them, we can bring them to justice. Whom do we look for when the robot “commits” a crime? How do you bring an algorithm to justice? And if we decide that the robot is to be held responsible, aren’t we letting the human creators all too easily off the hook? These were the questions the EU Parliament recently had to deal with when it discussed giving robots a special status as “electronic personalities” with much the same rights as corporations, which have a “legal personality.”


AI Now or AI as it Could Be

The 2018 symposium organized by the AI Now Institute (https://symposium.ainowinstitute.org/) under the title “Ethics, Organizing, and Accountability” is interesting for a number of reasons. The AI Now Institute is an interdisciplinary research institute dedicated to exploring the social implications of artificial intelligence. It was founded in 2017 by Kate Crawford (https://en.wikipedia.org/wiki/Kate_Crawford) and Meredith Whittaker (https://en.wikipedia.org/wiki/Meredith_Whittaker) and is housed at New York University.

The name is significant. AI “now” is indeed about AI as it is now, that is, not as it could and should be. Emphasizing the “now” has a critical edge. The focus is on what AI is actually doing, or more accurately, not doing right in the various areas of concern to the Institute, namely law enforcement, security, and social services. The AI Now Institute’s explicit concern with the “social implications” of AI translates into a rather one-sided civil rights perspective. What the Institute explores are primarily the dangers of AI with regard to civil rights issues. This is well and good. It is necessary and of great use for preventing misuse or even abuse of the technology. But is it enough to claim that simply dropping AI as it is now into a social, economic, and political reality riddled with discrimination and inequality will not necessarily enhance civil rights, and that the technology should therefore either not be used at all or, if it is used, then only under strict regulative control? Should one not be willing and able to consider the potential of AI to address civil rights issues and correct past failings, and perhaps even to start constructively dealing with the long-standing injustices the Institute is primarily concerned with? Finally, quite apart from the fact that the social implications of AI go way beyond civil rights issues, should not the positive results of AI in the areas of law enforcement, crime prevention, security, and social services also be thrown onto the scale before deciding to stop the deployment of AI solutions? One cannot escape the impression that the general tenor of the participants at the symposium is to throw the baby out with the bathwater.


Being smart or what I can learn from my iPhone

Everyone is talking about smart. Everything is, or is becoming, smart. It started with smart phones. Suddenly, a familiar object that everyone used became not only functional, as all technologies in some way are, but also smart. After the smart phones came smart watches, smart jewelry, and even smart clothes. The trend did not stop at apparel and accessories; appliances such as smart refrigerators, smart cooking stoves, and smart vacuum cleaners invaded the home. Indeed, the entire house is becoming smart. And if entire houses can be smart, why not entire cities? Finally, the Internet of Things is ushering in a 4th industrial revolution, extending smartness to everything: not only cities, but smart factories, smart logistics, smart energy, and so on. It would seem that being smart is becoming an important qualification for being itself. It would seem that existence today, and probably even more so in the future, depends on being smart, and that what is not smart or cannot become smart will have no place in the world. This trend should not only raise hopes for a better future, but also some basic questions about what it is that we are calling smart. What does “smart” mean?

The adjective smart is usually applied to people who are considered clever, bright, intelligent, sharp-witted, shrewd, able, etc. It is interesting that we would hardly think of things in this way. This implies that smart technologies are changing the definition of what it means to be smart. If everything around us is becoming smart, then these things are smart in a different way than we traditionally ascribe to human beings. My iPhone is not quick-witted, shrewd, or astute, but it does have qualities that demand to be called smart. What makes smart technologies smart?

This is not an idle question because when our homes, our places of work, our communication and transportation networks, and much more are all smart in a certain way, we humans will find that we are not the ones defining what it means to be smart. We will find ourselves in need of adapting to how the world around us is smart in order to become and remain smart ourselves. Floridi (The Fourth Revolution) speaks of a 4th revolution in which humans must learn to share the attribute of intelligence with machines, recognize themselves as “inforgs,” informational beings, and acknowledge that the world has become an “infosphere.” In a smart world humans are no longer the only ones in possession of intelligence and they are not the only ones who can say what intelligence means. Instead, we are part of an all-encompassing “socio-technical ensemble” that as a whole determines what it means to be smart. If we want to find out what smart means, then we have to take a step back from the mirror of Cartesian reflection and look at the whole socio-technical network. As actor-network theory puts it, the network is the actor.
