Personalized Advertising, Big Data, & the Informational Self

Whether we like it or not, advertising is a fact of life. It is also the business model of the Internet. Anyone who thinks that Facebook, Instagram, or Google really provide such cool services for nothing is simply naïve. We pay for many Internet services with our data, which has value because sellers are convinced they can use it to find customers. The more you know about your customers, the better the chances you can provide them with information that is relevant and interesting to them. Assuming people are not as easily manipulable as Mad Men and critical theorists seem to think, advertising doesn’t “make” anyone buy anything. It provides information about what one can buy. When someone is not interested in the information, or the information is not relevant, advertising dollars are wasted. This is why personalized advertising based on the collection, aggregation, analysis, and brokering of personal data is big business. Personalized advertising promises to provide people with interesting and relevant information on products and services and, as a byproduct, to spare them the useless information with which dumb, mass advertising constantly bombards them.

Anyone socialized in a capitalist world has their own spam filter built into their cognitive apparatus. It filters out most of the informational junk that dumb advertising constantly dumps on us. Personalized advertising and personalized services of all kinds, for example in education, healthcare, government, etc., apply the same principles that guide our own spam filters: they know what we want, what we are interested in, what we are willing to pay for, etc. Indeed, they often know more about us than we do ourselves. This is because they have access to more information than we can consciously keep track of. At any given time we have a relatively limited amount of knowledge about ourselves. We forget a lot of things. They have big data, and they don’t forget. While some are currently fighting in the courts for the “right to be forgotten,” the quick (velocity) collection, aggregation, reuse, repurposing, recombining, and reanalyzing of very large (volume), very different (variety) data sets is only beginning to appear on the radar screens of regulators. This may be because everybody, it would seem, wants to do it and hopes in one way or another to profit from it. Business, government, education, healthcare, science, etc., are all jumping on the big data bandwagon. All can profit from knowing more, indeed, knowing everything, about their “customers.” The question is: what do the customers get out of it?

We already know the answer. We get interesting, relevant information about products and services. The market is no longer dumb; it has become intelligent, smart. So why does big data have such a bad reputation? Is big data the same as Big Brother (which, let us not forget, has become a very successful TV format)? Why are we afraid of big data if we are no longer afraid of Big Brother? Again, let us not overlook the fact that not everyone thinks Snowden is a hero and the NSA the bad guy. If they were bad guys, what about the data brokers that know at least as much as the NSA and sell it to anyone willing to pay (Marwick, “How Your Data Are Being Deeply Mined,” The New York Review of Books, Jan. 9, 2014, http://www.nybooks.com/articles/2014/01/09/how-your-data-are-being-deeply-mined/)? As Scott McNealy, co-founder and former CEO of Sun Microsystems, put it, “You have zero privacy anyway. Get over it” (Wired, Jan. 1999). The privacy issue raised by big data poses a deeper and more difficult question: what or who are we trying to protect? The question becomes one of knowing who we are in a world in which almost everyone other than ourselves can answer this question better than we can.

Privacy usually means that, whoever we are, we have a right to be left alone. The digital revolution, the emergence of the global network society, and humanity’s migration into the “infosphere” (Floridi) have consequences for what it means to be left alone. Typical solutions claim, even if they are unable to implement, restricted access to data and/or informed consent on data use. Control and consent are based on the assumption that information is a kind of thing that can be effectively locked away in a safe place and/or submitted to regulated usage. However, if information is not a thing but, let us say, a “network effect,” then perhaps it cannot be enclosed in safe places or regulated in its movements. The imperatives of networked reality are connectivity, participation, and flow. This means that, at least in principle and in tendency, everything is connected to everything, everything generates information, and information flows uncontrollably throughout the network. In big data terms, this means that large data sets are available to be used, to be arbitrarily combined and recombined, and to be reused for unforeseeable purposes. How, then, can control and consent effectively protect anything? If we cannot – and for many reasons do not want to – restrict the collection, aggregation, analysis, and brokerage of information, and we cannot know in advance for what purposes large data sets could be used, then control and consent are empty words echoing a bygone era.

Another approach to problems of privacy is to focus on the consequences of big data. What consequences does big data have in terms of values such as doing no harm, promoting well-being, justice, autonomy, and trust (Collmann/Matei, Ethical Reasoning in Big Data, Springer, 2016)? If privacy matters at all, it is because we need it in order to uphold these values. If personalized advertising and other personalized services in healthcare, education, government, and science do not cause harm, inequality and social disadvantage, loss of self-determination, or mistrust and opacity, then we cannot claim that big data violates privacy in any morally or legally relevant way. Of course, big data can be used for things other than personalized services, for example, police work, credit assessment, targeted pricing, political advantage, etc. In every case of supposed misuse, one would have to evaluate the consequences on a contextual basis. This is where the contextual theory of privacy is helpful (Nissenbaum, “A Contextual Approach to Privacy Online,” Daedalus, 2011). Whether someone’s privacy has been violated in a harmful way depends on the context. Using a social network site is a different context than using a credit card, surfing the internet, or being geo-tracked via mobile phone. There is no such thing as privacy per se that can be identified, described, and protected across all situations. Being left alone can mean a lot of different things to a lot of different people in a lot of different situations. Privacy and its violation may well be context dependent, but it is obvious that contexts are also not a given. In today’s global network society, contexts are always contested, disrupted, quickly changing, open to many interpretations; in short, negotiable. One cannot say in advance and with any certainty what the context is. This means that the principle of contextual integrity is not a very useful guideline for determining privacy violations. It is rather a reformulation of the problem. If there are no clear and reliable definitions of privacy in the world, then perhaps we are looking in the wrong place. Perhaps we must look within ourselves.

Oxford philosopher of information Luciano Floridi (The Fourth Revolution, Oxford U. Press, 2014) offers a solution along this line. After the digital revolution, humans have migrated into the infosphere and exist as information. Data, or information, is not something we have; it is what we are. Humans are “inforgs” rather than cyborgs. Inforgs are their information, and therefore a violation of privacy is more like kidnapping than robbery. Misuse of the information that constitutes my being is clearly a violation of my autonomy, freedom, intimacy, and confidentiality. For the inforg, the important question with regard to privacy is what information makes up my being and what information is merely something that I have. Floridi proposes answering this question by assuming that all data/information that is “self-constituted” makes up the identity/being of the inforg. The problem with this answer is that inforgs exist within the infosphere, which is a networked reality based on connectivity, flow, participation, and other network norms. Information never stands alone, but is always connected, linked, and associated with other information. Furthermore, information is not the product of any one entity, human or non-human, but a product of networked actors. This is why big data is useful and important. Big data is only possible and interesting because information knows no boundaries, not even that of self and other. This means that information is never “self-constituted,” but always constituted by the network, that is, by many different actors working together. The mobile devices, digital infrastructures, operating systems, apps, and algorithms that work together to produce my Facebook profile all play their part. I did not create my Facebook profile – or any other information in the infosphere – all by myself, with my native abilities, without any help from my friends. So if information cannot be “self-constituted,” neither can inforgs. The network is the actor, as Bruno Latour would say. Cognition, agency, and identity are distributed. This leads us to ask: what kind of privacy, if any, does the network have? Is privacy a concept that has meaning in the infosphere at all?

The legal scholar Julie Cohen (Configuring the Networked Self, Yale U. Press, 2012) argues that current legal definitions of privacy are based on the possessive individualism of the European Enlightenment heritage coupled with modern liberalism. The autonomous rational subject of the liberal tradition must be removed from the foundations of legal thought and replaced by a “networked self.” The networked self is not merely information, but is embodied in material reality and involved (for Cohen, “networked”) in historically dynamic social contexts. Floridi would not deny this. The inforg is not made up merely of bits and bytes. The physical world is also a world of meaning and therefore of information. Nonetheless, Cohen makes it clear that the networked self is not “self-constituted,” but part of mutually constituting networks of embodied, historical, cultural reality. In this view, privacy is defined as the right to perform “socially situated processes and practices of boundary management.” The important boundary is not, as one would expect from traditional communication privacy management theory, that between self and other, or individual and society. It is the boundary between innovation, contingency, and change on the one side, and conformism, heteronomy, and domination on the other. For Cohen, the interest in and right to privacy is based on the individual and collective concern for an open future, for change, development, innovation, and creativity.

This view is promising, but Cohen’s supposition that surveillance and big data semi-automatically lead to the suppression of creativity is unwarranted. Of course, there are many cases where complete surveillance – for example, radar traffic monitoring – does lead to conformism. If I know that every slight violation of the speed limit will automatically cost me money, I don’t make any free decisions about how fast I drive. But in the case of personalized services derived from big data, it is precisely what makes me different from all others that is being taken account of and encouraged. The more “they” know about me, the more my individuality, preferences, and idiosyncrasies are acknowledged and supported. Paradoxically, total surveillance leads to total acceptance and encouragement of diversity. Big data is at once post-modernism’s worst nightmare and its most beautiful dream: the absolute panopticon that supports diversity, innovation, and creativity. Of course, this is a best-case scenario, and critical theorists, including Cohen, who openly acknowledges her debt to post-modern theory, will quickly point to abuse and assume the worst case. Even in the worst case, however, big data violations of privacy would have to be assessed on a contextual basis. Cohen’s networked self exists in social contexts. As Nissenbaum argued, expectations – and thus rights – to privacy can only be established within concrete contexts of social and communal action. There are no general, universal expectations of and rights to privacy. The boundaries whose management privacy should protect are therefore the always negotiable frames that contextualize social action and subjectivity. This means that a theory of privacy needs to ask how networked selves – actors who are networks, whose identities are distributed throughout the network – negotiate boundaries, frames, contexts, and associations. Perhaps what we need is a “network publicy management theory.” In terms of information theory, this means that a difference has the right to expect to be able to make a difference. Being a difference that makes a difference is what constitutes the informational self. This is what informational privacy is all about, and it is what the legal system should protect.