Q & A on AI

Q: Where does AI begin and where does it end?

A: AI will probably have neither beginning nor end, but will be seamlessly integrated into our daily lives. In the future, we may therefore no longer speak of “artificial” intelligence at all, but only of “smart” or “dumb”: we and everything around us, for example, our houses, our cars, and our cities, will be either smart or dumb.

Q: How does AI relate to philosophy?

A: At the moment, philosophy is concerned with AI insofar as it can be compared to human intelligence or consciousness. But one may suspect that a useful philosophical theory of AI would have to be a philosophy of information. Being “smart” is about the optimal use of information. Information, and not cognition, consciousness, or mind, is the correct fundamental concept for a philosophy of AI.

Q: When is AI obligatory and when is it voluntary?

A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society, while towards oneself one is free and independent. AI, in this frame of thinking, is seen as something in society that threatens the freedom of the individual. But as with all social conditions of human existence, that is, as with all technologies, one must ask whether one can be truly independent and autonomous. After all, when is electricity, driving a car, making a phone call, or using a refrigerator voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?

A: This question is answered everywhere with the term “sustainability”. When it is said that a business, a technology, a school, or a policy should be sustainable, the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as sustainable at the moment, e.g., the stock of certain trees in a forest, can be destructive and harmful under other conditions, e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex, and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability with a different value. The best candidate could be something like flexibility: because we cannot or do not want to keep given conditions stable, we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?

A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Whether in a household or a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a “culprit” when something goes wrong. Since ethics, morals, and the law are only called onto the scene and can only intervene when someone voluntarily and knowingly does something immoral or illegal, there needs to be a perpetrator. Without a perpetrator, no one can be held ethically or legally accountable. In complex socio-technical systems, e.g., an automated traffic system with many different actors, there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor, not any single actor in the network. Actors, both human and non-human, can only “do” things within the network and as a network.

Q: Who is primarily responsible for AI use in a household?

A: Same as above

Q: Who is mainly responsible for AI development in a company?

A: Same as above

Q: Who is primarily responsible for AI use in an enterprise?

A: Same as above

Q: Who is primarily responsible for AI development in a community/city?

A: Same as above

Q: Who is primarily responsible for AI use in a community/city?

A: Same as above

Q: Who is primarily responsible for AI development in a country?

A: Same as above

Q: Who is primarily responsible for AI use in a country?

A: Same as above

Q: Can there even be a global regulation on AI?

A: All the questions above reflect our traditional hierarchies and levels of regulation, from household to nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation need to be developed. These new forms of regulation must operate as governance (bottom up and distributed) rather than as government (top down and hierarchical). To develop and implement these new forms of governance is a political task, but it is not only political. It is also ethical. For as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that will guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for a digital ethics.

Q: Who would develop these regulations?

A: Here again, only all stakeholders in a network can be responsible for setting up regulatory mechanisms, as well as for control. One could imagine a governance framework developed bottom up, in which, in addition to internal controls, an external audit would monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there can indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI driver’s license in the future?

A: The idea of a driver’s license for AI users, like for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would they perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?

A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors, and at the same time is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs, but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.

Q: How will society, from young to old, be sensitized and educated?

A: At the moment, there is much discussion of either “critical thinking” or “media literacy” in this context. Both terms are insufficient and misleading. When critical thinking is mentioned, it is unclear what criticism means. For the most part, it means that one is of the same opinion as those who call for critical thinking. Moreover, it is unclear what is meant by thinking. Critique is everywhere. Everything and everyone is constantly being criticized. But where is the thinking? Again, thinking mostly means thinking like those who say what one should criticize. Since this is different in each case and everyone has their own agenda, the term remains empty and ambiguous. The same is true of the term media literacy. In general, media literacy means knowing how the media select, process, and present information, and being aware that this is done not according to criteria of truth-seeking, but according to the criteria of the media business. Knowing this, however, is not a recipe for effectively distinguishing truth from fake news. For that, one needs to know much more about how to search for information and how to judge its reliability.

Q: Where do the necessary resources for this come from?

A: There is a tendency to defer the task of education and training to the education system. Schools are supposed to ensure that children grow up with media literacy and the ability to think critically. But how this “training task” is to be understood and implemented is unclear. Since the school system has largely shown itself to be resistant to criticism of its pedagogy and curriculum, and since it still sees certification rather than education as its primary task, the traditional education system cannot be expected to do the job that is needed. As in other areas of society, schools will have to transform themselves into networks and learn to deal with information much more flexibly than is the case today. Perhaps schools in their current form will no longer be needed.

Q: Will there ever be effective privacy online?

A: It could also be asked whether there is or could be effective privacy offline. In general, it depends on what one means by privacy. At the moment, privacy or data protection is a code word for the autonomous, rational subject of Western democracy, or in short, the free, sovereign individual. It looks like the digital transformation and the global network society are challenging this assumption. Before talking about privacy, we must therefore answer the question of what the human being is. If the human being is not what Western individualism claims, then privacy can be understood differently. There is much work to be done in this area, as it seems that the humanistic and individualistic ideology of Western industrial society has chosen to fight its last stand against the encroaching global network society over the question of privacy.

Network Publicy Governance and Cyber Security

Hardly a day goes by that the media do not confront us with headlines on the latest breaches, hacks, and attacks, whether political, criminal, or both, affecting all areas of society. Many of these attacks are not even new, but sometimes years old, and have only recently been discovered and reported. It is therefore reasonable to assume that there are many security breaches that we don’t know about and perhaps, for various reasons, never will. At least with regard to what we do know, the cost of cybercrime and cyber attacks has been estimated in the hundreds of billions of dollars, quite apart from other damaging effects, for example, the loss of trust in the effectiveness of our law enforcement and security institutions. It has become apparent that traditional law enforcement and security measures do not work when it comes to preventing or combating cyber-warfare, cyber-crime, and cyber-terrorism. For example, it is often difficult to find the scene of the crime or the weapons or tools used in the crime, to assess the damage done, or to determine who is responsible. And even if it is possible to find out who did it, this information is mostly useless. One is left with the impression that, despite enormous efforts by law enforcement and security institutions, cybercriminals and hackers move through our networks with impunity.

Of course, there are many reasons for this, including our own negligence. We ourselves, whether infrastructure and software providers or users, are often a major part of the problem. The state of simple, everyday “digital hygiene,” such as updates, anti-virus software, strong passwords, and so on, is so deplorable that it makes you WANNACRY.

What can we do? Whereas new technologies of trust by design and new networked organizational models are slowly becoming a focus of interest for cyber security solutions, legal and ethical proposals seem not to have moved beyond positions developed in the bygone industrial era. The digital transformation seems not to have changed much in our conceptions of what security means and how freedom, autonomy, and human dignity are to be preserved in the information age. Although ethics and discussions of values and norms may appear of only incidental significance on the front lines of the struggle against cyber-crime, cyber-warfare, and cyber-terrorism, they play a very important role in the foundational regulative frameworks that condition law enforcement and security strategies. For this reason, it is perhaps time to take a critical look at ethics with regard to cyber security.

If values and norms do not come from God or his representatives on Earth (including pure reason), and if they are not hardwired into our DNA, then it is at least plausible that they emerge from the interactions of social actors. What has become apparent in the digital era is that technologies, artifacts, and non-humans must also be considered social actors. Non-humans have become our partners in constructing social order. This means that the “affordances” of information and communication technologies (ICTs) contribute to our norms and values. It is the network as a whole that is the actor, and the actor is always a network. Let us therefore ask: What do networks want? What are the norms inherent in the affordances of ICTs?

Is There Such a Thing as “Informational Privacy”?

The concept of “information” is not very informative. This is because there are so many different meanings to the word. Almost every scientific discipline has its own definition, from physics and chemistry to biology, informatics, mathematics, philosophy, and even sociology, which has long been talking about an “information society.” So what does “information” mean? What is information? Obviously, we need to decide, that is, to filter out much of what can be discussed about the topic and select those meanings of the term that are useful for our purpose, namely, attempting to understand what is meant by informational privacy.

According to the classic definition of Alan Westin (Privacy and Freedom, 1967), privacy is “the ability to determine for ourselves when, how, and to what extent information about us is communicated to others.” This definition carries with it several important implications. First, privacy is a matter of information. This information must in some way be “about” us, that is, us “personally.” Privacy therefore has to do with a specific kind of information, namely, “personal information,” or as it later became known, “personally identifiable information” (PII).

Another important implication of Westin’s understanding of privacy is that it is not the information itself that is most important, but rather the “ability to determine” what information is communicated to others. Privacy therefore does not primarily reside in any particular informational content, for example, information that would somehow describe a person so intimately that he or she could not communicate it without losing privacy. On the contrary, it would seem that privacy resides above all in the freedom to communicate or not to communicate information, whatever it may be. For example, it could be argued that our genome is so personal and intimate that any communication of it to others would automatically constitute a violation of privacy. The implication of Westin’s definition, however, is that we could well determine to do so, that is, if we wanted, we could publish our genome on the internet for the world to see, and this would not constitute a violation of privacy. If someone else, however, for example our doctor, were to do this without our consent, then of course it would. Privacy is therefore a matter of consent, of decision, of freedom and choice, and does not reside in any particular information. This means that privacy consists primarily in the will, in the act of deciding to communicate. Only if my free choice about communicating information is infringed upon can we speak of a violation of my privacy.

Finally, Westin’s definition assumes that privacy essentially has to do with communication, that is, privacy is the right to communicate or not to communicate. A right to privacy in this sense only makes sense, however, if communication is an option, something we can choose to do or not to do. This means that Watzlawick must have been wrong when he stated that “we cannot not communicate.” If human beings are essentially social and human existence is constituted by communication, this would make privacy, as Westin defines it, impossible. Only if information about a person is something that is not necessarily and automatically communicatively constituted and distributed in social space can privacy be possible.
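To make the implication explicit, the rule that Westin’s definition entails can be put in a few lines of code. The following Python sketch is a toy formalization of my own, not anything proposed by Westin, and all names in it are hypothetical. The point it illustrates is that whether a disclosure violates privacy depends only on who decides, never on what the information is.

```python
# Toy formalization of Westin's definition (an illustrative sketch, not
# Westin's own): a disclosure violates privacy only when information about
# a subject is communicated without that subject's consent. The content
# itself, even a genome, plays no role in the test.

from dataclasses import dataclass, field

@dataclass
class Subject:
    name: str
    # consented disclosures as (information, recipient) pairs
    consented: set[tuple[str, str]] = field(default_factory=set)

    def consent(self, info: str, recipient: str) -> None:
        """The subject determines when, how, and to whom information flows."""
        self.consented.add((info, recipient))

def violates_privacy(subject: Subject, info: str, recipient: str, discloser: str) -> bool:
    """The violation test depends on who decides, not on what is disclosed."""
    if discloser == subject.name:  # self-disclosure is a free choice
        return False
    return (info, recipient) not in subject.consented

alice = Subject("Alice")
print(violates_privacy(alice, "genome", "internet", discloser="Alice"))   # False: her own choice
print(violates_privacy(alice, "genome", "internet", discloser="doctor"))  # True: no consent given
alice.consent("genome", "internet")
print(violates_privacy(alice, "genome", "internet", discloser="doctor"))  # False: consent granted
```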

The Value of Privacy

Perhaps the most important legacy of Foucault and Postmodernism is to have made the business of critique much more difficult and complicated than it was back in the days when all workers wore white hats and all capitalists black. Today one has become wary of seeing any cultural, social, or political value as simply good in itself and worthy of protection, without investigating the extent to which it participates, however unwittingly, in a larger regime of power, inequality, and exploitation. Hegel long ago pointed out that the master and the slave need each other. Each helps to make the other who he or she is. They work together to construct and maintain a certain regime of knowledge and power without which neither of them could exist. Postmodernism, of course, does not share Hegel’s optimism that contradictions will be resolved by progress, or even Marx’s faith in revolution. If critique is still to be possible, then it cannot take the easy route of singling out the bad guys, but must lay bare the many complex interdependencies that together constitute a society in its entirety. That this is a hard lesson to learn is illustrated by the lengthy report of the Committee on Privacy in the Information Age established by the National Research Council, Engaging Privacy and Information Technology in a Digital Age (edited by J. Waldo, H. S. Lin, and L. I. Millett, 2007). Admittedly, the Committee does not understand its mission to be the elaboration of critical social theory. Nonetheless, it aims to “raise awareness of the spider web of connectedness among the actions we take, the policies we pass, the expectations we change, the ‘flip side’ of impacts policies have on privacy.” The aim of the Committee is to “paint a big picture that would sketch the contours of the full set of interactions and tradeoffs” and “take into account changes in technology, business, government, and other organizational demand for and supply of personal information…” (20).

The upshot of this ambitious program is that privacy emerges as an undeniable and inalienable personal and social value that demands to be protected by law. This view is echoed on the international level in the Report of the Office of the United Nations High Commissioner for Human Rights, The Right to Privacy in the Digital Age (2014), which declares that “there is universal recognition of the fundamental importance, and enduring relevance, of the right of privacy and of the need to ensure that it is safeguarded, in law and in practice” (5). The underlying assumption of both reports is that whatever may be wrong with society, privacy is not part of the problem. It is the solution, a solution that must at all costs be defended against threats arising from the digital transformation of the 21st century. This raises at least two important questions: What is the value that privacy has for individuals and society? Why has privacy become a central issue in understanding the global network society?

Personalized Advertising, Big Data, & the Informational Self

Whether we like it or not, advertising is a fact of life. It is also the business model of the Internet. Whoever thinks that Facebook, Instagram, or Google provide such cool services for nothing is simply naïve. We pay for many Internet services with our data, which have value because sellers are convinced they can use them to find customers. The more you know about your customers, the better the chances that you can provide them with information that is relevant and interesting to them. Assuming people are not as easily manipulable as Mad Men and critical theorists seem to think, advertising doesn’t “make” anyone buy anything. It provides information about what one can buy. When someone is not interested in the information, or the information is not relevant, advertising dollars are wasted. This is why personalized advertising based on the collection, aggregation, analysis, and brokering of personal data is big business. Personalized advertising promises to provide people with interesting and relevant information on products and services and, as a byproduct, to spare them the useless information with which they are constantly bombarded by dumb, mass advertising.

Anyone socialized in a capitalist world has his or her own spam filter built into the cognitive apparatus. These filters screen out most of the informational junk that dumb advertising constantly dumps on us. Personalized advertising and personalized services of all kinds, for example, in education, healthcare, government, etc., apply the same principles that guide our own spam filters: they know what we want, what we are interested in, what we are willing to pay for, and so on. Indeed, they often know more about us than we do ourselves. This is because they have access to more information than we can consciously keep track of. We have at any time a relatively limited amount of knowledge about ourselves. We forget a lot of things. They have big data, and they don’t forget. While some are currently fighting in the courts for the “right to be forgotten,” the quick (velocity) collection, aggregation, reuse, repurposing, recombining, and reanalyzing of very large (volume), very different (variety) data sets is only beginning to appear on the radar screens of regulators. This may be because everybody, it would seem, wants to do it and hopes in one way or another to profit from it. Business, government, education, healthcare, science, etc., are all jumping on the big data bandwagon. All can profit from knowing more, indeed, knowing everything, about their “customers.” The question is, what do the customers get out of it?
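To make the spam-filter analogy concrete, here is a minimal toy sketch in Python, purely an illustration of the principle and not any real advertising system: an interest profile is inferred from a user’s data, each ad is scored against it, and only “relevant” information clears the threshold. All names and numbers are hypothetical.

```python
# Toy sketch of personalized filtering (an illustration of the principle,
# not any real ad platform's algorithm): infer an interest profile from a
# user's data, score each ad against it, and drop the "informational junk"
# that falls below a relevance threshold.

from collections import Counter

def build_profile(history: list[str]) -> Counter:
    """Crude interest profile: how often each topic occurs in the user's data."""
    return Counter(topic for entry in history for topic in entry.split(","))

def relevance(ad_topics: set[str], profile: Counter) -> float:
    """Share of the user's observed interests that the ad's topics cover."""
    total = sum(profile.values()) or 1
    return sum(profile[t] for t in ad_topics) / total

def personalized_filter(ads: dict[str, set[str]], profile: Counter,
                        threshold: float = 0.2) -> dict[str, float]:
    """Keep only ads that clear the threshold; everything else is 'junk'."""
    scored = {name: relevance(topics, profile) for name, topics in ads.items()}
    return {name: round(score, 2) for name, score in scored.items() if score >= threshold}

history = ["hiking,camping", "camping,gear", "politics", "hiking,maps"]
ads = {
    "tent sale": {"camping", "gear"},
    "trail maps": {"hiking", "maps"},
    "luxury watches": {"watches"},
}
print(personalized_filter(ads, build_profile(history)))
# {'tent sale': 0.43, 'trail maps': 0.43}; the watch ad is filtered out
```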
