Category Archives: Connectivity

a network norm

AI Now or AI as it Could Be

The 2018 Symposium organized by the AI Now Institute (https://symposium.ainowinstitute.org/) under the title “Ethics, Organizing, and Accountability” is interesting for a number of reasons. The AI Now Institute is an interdisciplinary research institute dedicated to exploring the social implications of artificial intelligence. It was founded in 2017 by Kate Crawford (https://en.wikipedia.org/wiki/Kate_Crawford) and Meredith Whittaker (https://en.wikipedia.org/wiki/Meredith_Whittaker) and is housed at New York University.

The name is significant. AI “now” is indeed about AI as it is now, that is, not as it could and should be. Emphasizing the “now” has a critical edge. The focus is on what AI is actually doing, or more accurately, not doing right in the various areas of concern to the Institute, namely law enforcement, security, and social services. The AI Now Institute’s explicit concern with the “social implications” of AI translates into a rather one-sided civil rights perspective. What the Institute explores are primarily the dangers of AI with regard to civil rights issues. This is well and good. It is necessary and of great use for preventing misuse or even abuse of the technology. But is it enough to claim that simply dropping AI as it is now into a social, economic, and political reality riddled with discrimination and inequality will not necessarily enhance civil rights, and that the technology should therefore either not be used at all or, if it is used, then only under strict regulative control? Should one not be willing and able to consider the potential of AI to address civil rights issues and correct past failings, and perhaps even to start constructively dealing with the long-standing injustices the Institute is primarily concerned with? Finally, quite apart from the fact that the social implications of AI go way beyond civil rights issues, should not the positive results of AI in the areas of law enforcement, crime prevention, security, and social services also be thrown onto the scale before deciding to stop deployment of AI solutions? One cannot escape the impression that the general tenor of the participants at the symposium is to throw the baby out with the bathwater.


Network Publicy Governance and Cyber Security

Hardly a day goes by that the media do not confront us with headlines on the latest breaches, hacks, and attacks, whether political, criminal, or both, which affect all areas of society. Many of these attacks are not even new, but sometimes years old, and have only recently been discovered and reported. It is therefore reasonable to assume that there are many security breaches that we don’t know about and perhaps, for various reasons, never will. At least with regard to what we do know, the cost of cybercrime and cyber attacks has been estimated in the hundreds of billions of dollars, quite apart from other damaging effects, for example, the loss of trust in the effectiveness of our law enforcement and security institutions. It has become apparent that traditional law enforcement and security measures do not work when it comes to preventing or combating cyber-warfare, cyber-crime, and cyber-terrorism. For example, it is often difficult to find the scene of the crime or the weapons or tools used in the crime, to assess the damage done, or to determine who is responsible. And even if it is possible to find out who did it, this information is mostly useless. One is left with the impression that, despite enormous efforts by law enforcement and security institutions, cybercriminals and hackers move through our networks with impunity.

Of course, there are many reasons for this, including our own negligence. We ourselves, whether infrastructure and software providers or users, are often a major part of the problem. The state of simple, normal “digital hygiene,” such as updates, anti-virus software, strong passwords, and so on, is so deplorable that it makes you WANNACRY.

What can we do? Whereas new technologies of trust by design and new networked organizational models are slowly becoming focal points for cyber security solutions, legal and ethical proposals seem not to have moved beyond positions developed in the bygone industrial era. The digital transformation seems not to have changed much in our conceptions of what security means and how freedom, autonomy, and human dignity are to be preserved in the information age. Although ethics and discussions of values and norms may appear of only incidental significance when standing on the front lines of the struggle against cyber-crime, cyber-warfare, and cyber-terrorism, they play a very important role in the foundational regulative frameworks that condition law enforcement and security strategies. For this reason, it is perhaps time to take a critical look at ethics with regard to cyber security.

If values and norms do not come from God or his representatives on Earth – including pure reason – and if they are not hardwired into our DNA, then it is at least plausible that they emerge from the interactions of social actors. What has become apparent in the digital era is that technologies, artifacts, and non-humans must also be considered social actors. Non-humans have become our partners in constructing social order. This means that the “affordances” of information and communication technologies (ICTs) contribute to our norms and values. It is the network as a whole that is the actor, and the actor is always a network. Let us therefore ask: What do networks want? What are the norms inherent in the affordances of ICTs?


Personalized Advertising, Big Data, & the Informational Self

Whether we like it or not, advertising is a fact of life. It is also the business model of the Internet. Whoever thinks that Facebook, Instagram, or Google really provide such cool services for nothing is simply naïve. We pay for many Internet services with our data, which have value because sellers are convinced they can use these data to find customers. The more you know about your customers, the better the chances you can provide them with information that is relevant and interesting for them. Assuming people are not as easily manipulable as Mad Men and critical theorists seem to think, advertising doesn’t “make” anyone buy anything. It provides information about what one can buy. When someone is not interested in the information, or the information is not relevant, advertising dollars are wasted. This is why personalized advertising based on the collection, aggregation, analysis, and brokering of personal data is big business. Personalized advertising promises to provide people with interesting and relevant information on products and services, and as a byproduct, to spare them the useless information they are constantly being bombarded with by dumb, mass advertising.

Anyone socialized in a capitalist world has his or her own spam filter built into their cognitive apparatus. It filters out most of the informational junk that dumb advertising constantly dumps on us. Personalized advertising and personalized services of all kinds, for example, in education, healthcare, government, etc., apply the same principles guiding our own spam filters; they know what we want, what we are interested in, what we are willing to pay for, etc. Indeed, they often know more than we do about ourselves. This is because they have access to more information than we can consciously keep track of. We have at any time a relatively limited amount of knowledge about ourselves. We forget a lot of things. They have big data, and they don’t forget. While some are currently fighting in the courts for the “right to forget,” the quick (velocity) collection, aggregation, reuse, repurposing, recombining, and reanalyzing of very large (volume), very different (variety) data sets is only beginning to appear on the radar screens of regulators. This may be because everybody, it would seem, wants to do it and hopes in one way or another to profit from it. Business, government, education, healthcare, science, etc., all are jumping on the big data bandwagon. All can profit from knowing more, indeed, knowing everything, about their “customers.” The question is, what do the customers get out of it?


Floridi’s Fourth Revolution

With The Fourth Revolution (The Fourth Revolution. How the Infosphere is Reshaping Human Reality. Oxford University Press, Oxford 2014) Oxford philosopher of information Luciano Floridi (https://de.wikipedia.org/wiki/Luciano_Floridi) enters into the mainstream debate on net culture and new media. Indeed, as the title suggests, digital media are “revolutionary” and not merely an extension of broadcast media. Floridi likens the revolutionary significance of digital media to that of Copernicus’ dislocation of humankind from the center of the universe. This was the first revolution. Similarly, the second revolution, which Darwin initiated, dislocated humans from their privileged place in the animal kingdom. The third revolution was Freud’s psychoanalysis, which dislocated human consciousness from its sovereignty within the realm of mind. The fourth revolution, the age of information and communication technologies (ICT), has finally dislocated human intelligence from its claim to be the only “intelligent” form of being. What is left? Floridi’s answer is that humans have become “inforgs” (not cyborgs, which Floridi considers science fiction). Inforgs are beings who are their information. Inforgs, however, are more than a bundle of bits and bytes. They also process information. This quality they admittedly share with their algorithmic neighbors in the “infosphere” (the digital domain of reality). In distinction to ICTs, however, inforgs are semantic information processors (“semantic engines”), whereas the algorithms are only syntactic information processors (“syntactic engines”). Inforgs make meaning, whereas algorithms make calculations. This has implications for many important issues in current discussions of the digital revolution. One example is the issue of privacy.


Personal Informatics and Design

Design discourse is admittedly mostly technical in the sense of focusing on product development, marketing, and business planning. Nonetheless, there is a deeper and, for the social scientist, more interesting background for questions relating to design. At stake is fundamentally a techné of the self in the sense of Foucault’s ethics and Heidegger’s interpretation of technology as poiesis. In a well-known book entitled The Sciences of the Artificial, Herbert Simon developed a concept of design that can be traced from Greek techné and applied to Foucault’s technology of self as ethics. For Simon (1996):

“Engineers are not the only professional designers. Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. The intellectual activity that produces material artifacts is no different fundamentally from the one that prescribes remedies for a sick patient or the one that devises a new sales plan for a company or a social welfare policy for a state. Design, so construed, is the core of all professional training…. Schools of engineering, as well as schools of architecture, business, education, law, and medicine, are all centrally concerned with the process of design.” (111)

Bruno Latour would agree with this and add that the concept of design today “has been extended from the details of daily objects to cities, landscapes, nations, cultures, bodies, genes, and … to nature itself” (Latour 2008: 2). Furthermore, this extension of the idea of design to all aspects of reality means that the concept of “design” has become “a clear substitute for revolution and modernization” (5); those two ideals that have led Modernity into an inescapable responsibility for planetary ecology. Finally, for Latour “the decisive advantage of the concept of design is that it necessarily involves an ethical dimension which is tied into the obvious question of good versus bad design” (5). The ethical dimension that Latour finds at the heart of design joins Foucault’s idea of an ethical technology of self, for “humans have to be artificially made and remade” (10). Understanding self-knowledge as an ethical and technical (in the sense of techné) task of design should not lead us into post-humanist speculations and the discussion of cyborgs. Instead, that which makes design both ethically good and aesthetically beautiful is its ability to take into account as many different aspects of what something is and can become, to respect all the different claims that can be made on someone or something, to ensure that nothing important is overlooked, and to allow for surprises and the unexpected. To design something well, including oneself, in the functional, ethical, and aesthetic dimensions, is to take account of as much information as one can in the process of constructing. Latour proposes that networking, that is, the techné of constructing actor-networks, should be understood as design. This means that design is a “means for drawing things together – gods, non-humans, and mortals included” (13).
