Author Archives: David J. Krieger

About David J. Krieger

PhD, University of Chicago. Habilitation 1: Science of Religions, University of Lucerne, Switzerland. Habilitation 2: Communication Science, University of Lucerne, Switzerland. Co-Director, Institute for Communication & Leadership IKF, Lucerne. Focus: Hermeneutics, Systems Theory, Network Theory, Semiotics, Intercultural Communication, New Media, eSociety.

Can Networks be Virtuous?

This is about ethics. Ethics tells us what we ought to do. It is based on the distinction between what we really do, the “is” and what we should do, the “ought.” If everybody did what they should, then we wouldn’t need ethics. But let’s face it, people don’t do what they ought to do. Why not? Has ethics failed? Are people inherently immoral? And if so, what good does it do to keep telling them that they should do otherwise? Despite enormous efforts for centuries, ethics seems to be a futile enterprise divorced from reality. One answer to the apparent futility of ethics is to say that people do not do what they ought to do, but what they are. If people do the right thing, that’s not because of ethics, or because of being told what they ought to do. It’s because that is simply what they are. There is no “ought.” There is only what “is.” In other words, you shall know them by their actions – and not by their proclaimed or hidden motives. But what are people? What should we be reading from their actions?


Tesla is a Philosophical Problem

Of course, this is not about Tesla, but about intelligent-mobile-autonomous-systems (IMAS) – also known as robots. The philosophical problem comes from the fact that the robot, in this case the automobile, says “Leave the driving to us,” which was an advertising slogan for Greyhound Bus. If the robot takes over the driving, and this means the decision making, then who is responsible for accidents? This was not a problem for Greyhound, since the driver and in some cases the company were held liable for their mistakes. But what about the AIs? Indeed, the question of accountability, responsibility, and liability for robots and other AIs has become a major topic in digital ethics, and everybody is scrambling to establish guidelines and norms for “good,” “trustworthy,” and “accountable” AI. It is at once interesting and unsettling that the ethical norms and values that the AI moralists inevitably fall back on arise from a society and a culture that knew nothing of self-driving cars or of artificial intelligence. This was a society and culture that categorized the world into stones, plants, animals, and human beings, whereby the latter alone were considered active subjects who could and should be held responsible for what they do. All the rest were mere objects, or as the law puts it, things (res). But what about the Tesla? Is it a subject or an object, a potentially responsible social actor or a mere thing? Whenever we go looking for who did it, we automatically assume some human being is the perpetrator, and if we find them, we can bring them to justice. Who do we look for when the robot “commits” a crime? How do you bring an algorithm to justice? And if we decide that the robot is to be held responsible, aren’t we letting the human creators all too easily off the hook?
These were the questions the EU Parliament recently had to deal with when it discussed giving robots a special status as “electronic personalities” with much the same rights as corporations, which have a “legal personality.”


Data – Information – Knowledge

Who doesn’t know the classic distinction between data, information, and knowledge? And who hasn’t seen at least one version of the famous pyramid with data on the bottom, information making up the next level, and knowledge making up the layer above that, with the peak consisting of wisdom? There are several assumptions and implications of this model of data, information, and knowledge. First, it is assumed that they are qualitatively and quantitatively different. Data, for example, is different from information, and there is more data than information, just as there is more information than knowledge, with wisdom being the rarest sort of knowledge. Second, it is assumed that they are hierarchically interdependent; that is, you cannot have information without data or knowledge without information, but you could have data without information and information without knowledge. Third, the hierarchy implies a value judgement: data is not as valuable as information, information is not as valuable as knowledge, and of course, wisdom is the most valuable of all. Finally, the hierarchy also implies a kind of temporal or ontological priority. Since information depends on data, data must come first, and since knowledge depends on information, information comes before knowledge, at least temporally. This means that first we have data, then we somehow construct information out of data, and then we can go on to construct knowledge out of information. Data is something like the raw material out of which information is constructed, and information is the raw material out of which knowledge is constructed. There is nothing in the model that explains how this construction process works. The model itself does not tell us where data comes from or how exactly information is constructed out of data or knowledge out of information. To answer these questions, we are left to speculation.

There is of course a kind of consensus among interpreters that there are also different kinds of construction. In short, these may be termed “transcription,” “cognition,” and “praxis.” Data are said to be constructed by means of some kind of transcription; that is, something is preserved, fixed in some material form, in some medium, whether it be sound, text, or pictures. Today, data is above all transcribed into bits and bytes, that is, into digital media, which, as the dominant media in today’s world, also determine what is usually meant by the term “data.” Data are just bits and bytes, 1s and 0s, electronically fixed upon some memory medium. Information is usually thought to be constructed out of data. When the otherwise meaningless bits and bytes are combined into signs in a language and given meaning, then data becomes information. This is above all a cognitive process. Somebody makes “sense” of the images or marks on paper or the bits and bytes on the chip via a cognitive process of “reading.” This is information. But it is not yet knowledge. Knowledge is what information becomes when it is used practically to solve some specific problem. The practical use of information in problem-solving activities is called “praxis.” It is in praxis that mere information, for example, mere theory or mere textbook knowledge, becomes situated in a particular context in the real world. It is through praxis that we know what information is good for, what it can do, and how it can be used in complex situations. Knowledge is knowing by doing. This separates the apprentice from the master, the inexperienced from the experienced. It is the experienced master who alone can be said to possess knowledge.
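The three modes of construction named above (transcription, cognition, praxis) can be made concrete with a deliberately simple sketch. Everything here is hypothetical and chosen only for illustration: a sensor reading fixed as raw bytes (transcription), decoded and given meaning as a temperature (cognition), and then put to use in a concrete decision (praxis).

```python
# Toy illustration of the data -> information -> knowledge chain.
# All names, values, and the cooling scenario are hypothetical examples.

# Transcription: something is fixed in a material medium -- here,
# a reading preserved as raw bytes on some storage (the "data").
raw = b"23.5"

# Cognition: the otherwise meaningless bytes are read as signs in a
# language and given meaning -- "degrees Celsius" (the "information").
temperature_celsius = float(raw.decode("ascii"))

# Praxis: the information is used to solve a specific problem in a
# particular context -- deciding whether cooling is needed
# (information becoming "knowledge" through use).
def cooling_needed(temp_c: float, threshold: float = 25.0) -> bool:
    """Return True if the measured temperature exceeds the threshold."""
    return temp_c > threshold

print(cooling_needed(temperature_celsius))  # prints False: 23.5 <= 25.0
```

The point of the sketch is only that each layer presupposes the one below it: the decision function is useless without a decoded value, and decoding is impossible without the transcribed bytes, which mirrors the hierarchical interdependence described above.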


AI Now or AI as it Could Be

The 2018 Symposium organized by the AI Now Institute (https://symposium.ainowinstitute.org/) under the title “Ethics, Organizing, and Accountability” is interesting for a number of reasons. The AI Now Institute is an interdisciplinary research institute dedicated to exploring the social implications of artificial intelligence. It was founded in 2017 by Kate Crawford (https://en.wikipedia.org/wiki/Kate_Crawford) and Meredith Whittaker (https://en.wikipedia.org/wiki/Meredith_Whittaker) and is housed at New York University.

The name is significant. AI “now” is indeed about AI as it is now, that is, not as it could and should be. Emphasizing the “now” has a critical edge. The focus is on what AI is actually doing, or more accurately, not doing right, in the various areas of concern to the Institute, namely law enforcement, security, and social services. The AI Now Institute’s explicit concern with the “social implications” of AI translates into a rather one-sided civil rights perspective. What the Institute explores are primarily the dangers of AI with regard to civil rights issues. This is well and good. It is necessary and of great use for preventing misuse or even abuse of the technology. But is it enough to claim that simply dropping AI as it is now into a social, economic, and political reality riddled with discrimination and inequality will not necessarily enhance civil rights, and that the technology should therefore either not be used at all or, if it is used, then only under strict regulative control? Should one not be willing and able to consider the potential of AI to address civil rights issues and correct past failings, and perhaps even to start constructively dealing with the long-standing injustices the Institute is primarily concerned with? Finally, quite apart from the fact that the social implications of AI go way beyond civil rights issues, should not the positive results of AI in the areas of law enforcement, crime prevention, security, and social services also be thrown onto the scale before deciding to stop deployment of AI solutions? One cannot escape the impression that the general tenor of the participants at the symposium is to throw the baby out with the bathwater.


Being Smart, or What I Can Learn from My iPhone

Everyone is talking about smart. Everything is – or is becoming – smart. It started with smart phones. Suddenly, a familiar object that everyone used became not only functional, as all technologies in some way are, but also smart. After the smart phones came smart watches, smart jewelry, and even smart clothes. The trend toward smart did not stop at apparel and accessories: appliances such as smart refrigerators, smart cooking stoves, and smart vacuum cleaners invaded the home. Indeed, the entire house is becoming smart. And if entire houses can be smart, why not entire cities? Finally, the Internet of Things is ushering in a 4th industrial revolution, extending smartness to everything, including not only cities, but smart factories, smart logistics, smart energy, and so on. It would seem that being smart is becoming an important qualification for being itself. It would seem that existence today, and probably even more so in the future, depends on being smart, and that what is not smart or cannot become smart will have no place in the world. This trend should not only raise hope for a better future, but also raise some basic questions about what it is that we are calling smart. What does “smart” mean?

The adjective smart is usually applied to people who are considered clever, bright, intelligent, sharp-witted, shrewd, able, etc. It is interesting that we would hardly think of things in this way. This implies that smart technologies are changing the definition of what it means to be smart. If everything around us is becoming smart then these things are smart in a different way than we traditionally ascribe to human beings. My iPhone is not quick-witted, shrewd, or astute, but it does have qualities that demand to be called smart. What makes smart technologies smart?

This is not an idle question because when our homes, our places of work, our communication and transportation networks, and much more are all smart in a certain way, we humans will find that we are not the ones defining what it means to be smart. We will find ourselves in need of adapting to how the world around us is smart in order to become and remain smart ourselves. Floridi (The Fourth Revolution) speaks of a 4th revolution in which humans must learn to share the attribute of intelligence with machines, recognize themselves as “inforgs,” informational beings, and acknowledge that the world has become an “infosphere.” In a smart world humans are no longer the only ones in possession of intelligence and they are not the only ones who can say what intelligence means. Instead, we are part of an all-encompassing “socio-technical ensemble” that as a whole determines what it means to be smart. If we want to find out what smart means, then we have to take a step back from the mirror of Cartesian reflection and look at the whole socio-technical network. As actor-network theory puts it, the network is the actor.
