
New Memes for a New World

Memes are cultural DNA, that is, the elements of cultural code that generate the world that characterizes a particular culture, a particular time, a particular civilization. They are the basic ideas informing a world view, articulating the values and norms that people accept as true. Memes are the design elements of a culture.

Listed below are some of the most important memes of the global network society, a society that is now emerging from the digital transformation that characterizes our world. These are new memes for a new world.

1. Information: One of the most important memes of the global network society is the idea that the world consists of information and not of things. Information is a relation and a process and not a substance, an individual entity, a bounded individual. A world of information is a world of relations and not of things.

2. Networking: Because information is relational, it exists in networks. But networks are not things. Otherwise, we would simply have collective things instead of individual things, much as we talk about organizations instead of individuals. Networks are neither organizations nor individuals. They are neither things nor collections or compositions of things. Networks are processes of making relations, associations, connections. One should speak of networking as a verb rather than of networks as nouns. Networks are not bounded systems operating to maintain their structures. They are dynamic, changing, and flexible. Human beings, like everything else in the world, are informational processes and therefore exist as networks, that is, as ongoing, historical processes of networking. Systems are becoming networks.

3. Emergent Order: Information (and networking) is a level of emergent order above the levels of matter and life. Just as life emerged from matter, so information emerged from life. And just as life is neither reducible to matter nor can it be derived from it, so information is neither reducible to life, nor can it be derived from it. Information is therefore not cognition in the brain or a mental state. The brain does not use information. The brain is an organ of the body that is used by information. Information is a form of being in its own right and of its own kind.

4. Integration: The physical and biological substrates are integrated into information. This is the principle of integration, which states that higher levels of emergent order integrate lower levels, that is, they are more complex and variable than lower levels. This implies that with the emergence of information, matter and life have become informational processes. Just as life can do things with matter that matter could not do on its own, so information can do things with matter and life that they cannot do on their own. The emergent nature of information and the consequent integration of matter and life are why science and technology are possible.

5. Common Good: Information is a common good, a common pool resource. This does not mean that it cannot be monetized or administratively regulated: it is regulated and monetized as a common pool resource within governance frameworks that are certified and audited by government. Since information is not a bounded entity, a thing, it cannot become private property. Western industrial society, by contrast, is based on the belief in individuals who own property.

6. Global Network Society: Society is no longer Western industrial society, but a global network society. Nation states will be replaced by global networks. Individuals and organizations are becoming networks that are not territorially defined. Society is not a group of individuals, but a network of networks. There is nothing outside of society. Nature is part of society. The integration of matter and life into information makes society all-encompassing. The world is society.

7. Governance: Society is most effectively regulated by governance instead of government. Governance is self-organization, or non-hierarchical organization. In the global network society hierarchies are inefficient and illegitimate. Decisions are made on the basis of information and not on the basis of a position in a hierarchy.  

8. Design: Governance is by design, which means that it is constructed by design processes guided by the network norms that generate social order. Design means that networking can be done in a good or a bad way. The good ways of networking can be described as network norms.

9. Network Norms: The network norms are: connectivity, flow, communication, participation, transparency, authenticity, and flexibility. These are the values and norms of the global network society.

10. Computation, Computationalism, Computational Paradigm: Information is not to be equated with digital information that can be processed electronically by computers. The computer should not be used as a metaphor for understanding either the brain or society. The brain is not a computer. Society is not a computer. A computer is a computer, and nothing else. Digital information or electronic information processing is a derivative form of information that arose late in the history of society and is dependent upon and embedded in many non-digital networks that have developed over thousands of years. Nonetheless, if computation is understood very generally to be the iterative application of simple rules to information out of which more complex forms of information arise then networking in all its forms can be considered to be computation. This general definition of computation is independent of the computer and can therefore be used as a definition of networking. Intelligence is networking. Artificial intelligence is electronic information processing.
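The last point defines computation independently of the computer: the iterative application of simple rules to information, out of which more complex forms of information arise. A classic toy illustration of this idea (not from the text above, purely illustrative) is an elementary cellular automaton; here, Wolfram's Rule 90, where each cell's next state is simply the XOR of its two neighbors, and repeated application of this trivial local rule generates an intricate global pattern.

```python
# Toy illustration of "computation" as the iterative application of a
# simple rule: elementary cellular automaton Rule 90. Each cell's next
# state is the XOR of its two neighbors; a Sierpinski-triangle-like
# pattern emerges from repeating this one rule.

def step(cells):
    """Apply Rule 90 once: new cell = left neighbor XOR right neighbor."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(width=31, generations=15):
    """Start from a single 'on' cell and apply the rule repeatedly."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(generations):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running the script prints a triangular fractal pattern: a minimal picture of complex information arising from nothing but one simple rule applied again and again.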


Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither beginning nor end, but will be seamlessly integrated into our daily lives. This could mean that in the future we will no longer speak of “artificial” intelligence at all, but only of “smart” or “dumb”: we, and everything around us, for example, our houses, our cars, our cities, will be either smart or dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being “smart” is about the optimal use of information. Information, not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while in oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, that is, as with all technologies, one must ask whether one can truly be independent and autonomous. After all, when is electricity, driving a car, making a phone call, or using a refrigerator voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term “sustainability”. When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can become destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility, i.e., because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a “culprit” when something goes wrong. Since ethics, morals, and the law are only called onto the scene, and can only intervene, when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor, not any actors in the network. Actors, both human and non-human, can only “do” things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be a global regulation on AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from household to nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must be able to operate as governance (bottom up and distributed) rather than government (hierarchical). Developing and implementing these new forms of governance is a political task, but it is not only political. It is also ethical. For as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that will guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all the stakeholders in a network together can be responsible for setting up regulatory mechanisms and for control. One could imagine a governance framework developed bottom up, with compliance with the rules monitored not only by internal controlling but also by an external audit. This could be the function of politics in the global network society. There will be no global government, but there will indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society from young to old be sensitized and educated?

A: At the moment, there is much discussion of either “critical thinking” or “media literacy” in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means that one is of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and being aware that this is done not according to criteria of truth-seeking, but according to criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know a lot more about how to research information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to defer the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is understood and implemented is unclear. Since the school system has largely shown itself to be resistant to criticism in terms of pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to be capable of doing the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is or could be effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.


The Moral Machine

In 2016, a group of scientists at MIT created an online platform to gather information about how people would decide what the outcomes of the actions of autonomous, automated systems “ought” to be. Although there were different scenarios, the most famous is the self-driving car that, in the face of an imminent accident, had to “decide” who should be run over and killed and who should be spared. Since it was a matter of making decisions about what ought to be done in a case that led to harm, this was called a “moral” machine. The “machine” part comes from the fact that the automated system was to be programmed in advance with which choice to make; that is, the choice was no longer “free,” as it would be when a human driver made the decision, but was determined by the programmer, who then bore the moral responsibility.

More interesting than the results of this experiment (see http://moralmachine.mit.edu/) are the assumptions it makes. One important assumption is that there are no accidents, that is, the fact that someone will be killed in an “accident” is not accidental, but a determined outcome of programming. Not just anything could happen, but only certain things could happen, and among these the choice was to be made in advance so that what does happen, happens “mechanically.”

The second important assumption is that the future is no longer open and the present no longer free. Usually, we assume that the past is certain, the present is free, that is, we can decide in the present moment what to do, and the future, that is, the consequences of our actions, is open. We don’t know what the future brings. The future is contingent. This age-old temporal scheme is placed in question by the moral machine. The idea is that data analytics is able to know what will happen in the future, and on the basis of this knowledge interventions in the present can be made that will influence, indeed determine, which future options are realized. This is called datafication. Datafication is 1) the process by which all present states of the world are turned into data, thereby creating a virtual double of reality, and 2) the subjection of this data to descriptive, predictive, preventive, and prescriptive analytics, so that the effects of all possible variables can be simulated and, on the basis of data-based projections of what will happen, interventions in the present can be made to influence future outcomes. Datafication is the basis of intelligent, autonomous, automated systems, such as self-driving cars, but also of personalized medicine, learning analytics in education, business intelligence in the private sector, and much more. This is what makes the moral machine interesting. It is a parable of the digital age and poses central questions about what it means to live in a datafied world.
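The two-step picture of datafication can be made concrete with a deliberately crude sketch: the present is encoded as data, candidate interventions are simulated against a predictive model, and the one with the best projected outcome is prescribed. All names, numbers, and the “model” below are invented for illustration and are not taken from the MIT platform.

```python
# Toy sketch of the datafication loop (all names and numbers invented):
# 1) encode the present state as data (a "virtual double"),
# 2) simulate candidate interventions against a predictive model and
#    prescribe the one with the lowest projected harm.

def predict(state, intervention):
    """Stand-in predictive model: projects a future 'harm' score."""
    return state["risk"] * intervention["exposure"]

def prescribe(state, interventions):
    """Prescriptive analytics: pick the intervention minimizing projected harm."""
    return min(interventions, key=lambda iv: predict(state, iv))

state = {"risk": 0.8}  # the data double of the present situation
options = [
    {"name": "swerve_left", "exposure": 0.9},
    {"name": "swerve_right", "exposure": 0.4},
    {"name": "brake", "exposure": 0.6},
]

best = prescribe(state, options)
print(best["name"])  # prints: swerve_right
```

The point of the sketch is structural, not technical: once all options are projected in advance, what happens is no longer an open future but the selected output of a computation made in the present.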
