AI Now or AI as it Could Be

The 2018 Symposium organized by the AI Now Institute (https://symposium.ainowinstitute.org/) under the title “Ethics, Organizing, and Accountability” is interesting for a number of reasons. The AI Now Institute is an interdisciplinary research institute dedicated to exploring the social implications of artificial intelligence. It was founded in 2017 by Kate Crawford (https://en.wikipedia.org/wiki/Kate_Crawford) and Meredith Whittaker (https://en.wikipedia.org/wiki/Meredith_Whittaker) and is housed at New York University.

The name is significant. AI “now” is indeed about AI as it is now, not as it could and should be. Emphasizing the “now” has a critical edge. The focus is on what AI is actually doing, or more accurately, failing to do right in the various areas of concern to the Institute, namely law enforcement, security, and social services. The AI Now Institute’s explicit concern with the “social implications” of AI translates into a rather one-sided civil rights perspective. What the Institute explores are primarily the dangers of AI with regard to civil rights issues. This is well and good. It is necessary and of great use for preventing misuse or even abuse of the technology. But is it enough to claim that simply dropping AI as it is now into a social, economic, and political reality riddled with discrimination and inequality will not necessarily enhance civil rights, and that the technology should therefore either not be used at all or, if it is used, be placed under strict regulatory control? Should one not be willing and able to consider the potential of AI to address civil rights issues, correct past failings, and perhaps even begin to deal constructively with the long-standing injustices the Institute is primarily concerned with? Finally, quite apart from the fact that the social implications of AI go far beyond civil rights issues, should not the positive results of AI in the areas of law enforcement, crime prevention, security, and social services also be thrown onto the scale before deciding to stop deployment of AI solutions? One cannot escape the impression that the general tenor of the symposium’s participants is to throw the baby out with the bathwater.

There are, however, some important take-aways from the symposium. It is important to point out that AI as it is “now” often seems to be based primarily on the short-term values of efficiency and optimization instead of on the long-term values enshrined in civil and human rights. It is also important not to use technology in general, and AI specifically, as an excuse for failing to address deep-rooted discrimination and the injustice of social reality. There is no quick technological fix for centuries-old social inequality. For example, racism and brutality in law enforcement are not going to go away simply because AI systems make profiling, tracking, and predictive policing possible. What might change, and this is the argument of those who favor AI solutions, are high crime rates, insecurity, and ineffective law enforcement. The civil rights perspective judges AI on the basis of whether or not it enhances the dignity and self-determination of minorities and the poor, whereas law enforcement agencies and public services tend to judge technologies on the basis of whether they enhance the overall security and well-being of the community. The tenor of the contributions to the AI Now symposium seems to be that anyone who is white and above the poverty line has no legitimate claims on government and that governments have obligations only to minorities and the poor. This overlooks the positive effects of AI in these areas as well as the fact that AI systems are deployed in many areas other than law enforcement and social services. AI plays important roles in many areas of society, such as education, business, healthcare, and science, and has widespread social, cultural, and economic implications that have nothing to do with issues of discrimination and inequality. AI, even as it is “now,” is, does, and means much more than the problems addressed by the symposium. And it is questionable to limit the understanding of AI to what is “now.” AI should also be seen against the horizon of what it could and should be, even when focusing on civil rights issues. The important point is not whether AI is “ethical” or not, nor whether AI “now” is discriminatory, but what sort of socio-technical ensemble we are designing. This is a question that cannot be asked or answered from the perspective of civil rights alone. There are many different stakeholders in society, and government regulation may not be the best instrument for dealing with complex problems. Instead of relying on government regulation alone, governance frameworks must be set up that allow all concerned voices to be heard.

Symposium Program: https://ainowinstitute.org/ainow-program-2018.pdf