I Warn Against Technological Utopianism – conversation with Sarah Brayne

9 April 2021, 20:19
My book serves as a cautionary tale, a warning against technological utopianism that suggests we can solve intractable social problems with technological solutions, Sarah Brayne, Assistant Professor at The University of Texas at Austin, pointed out in a conversation with Lénárd Sándor, researcher at the National University of Public Service.

Sarah BRAYNE is an Assistant Professor of Sociology at The University of Texas at Austin. In her research, Brayne uses qualitative and quantitative methods to examine the social consequences of data-intensive surveillance practices. Her book, Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford University Press), draws on ethnographic research with the Los Angeles Police Department to understand how law enforcement uses predictive analytics and new surveillance technologies. Prior to joining the faculty at UT-Austin, Brayne was a Postdoctoral Researcher at Microsoft Research. She received her Ph.D. in Sociology and Social Policy from Princeton University. Brayne has volunteer-taught college-credit sociology classes in prisons since 2012. In 2017, she founded the Texas Prison Education Initiative.

 

Karel Čapek, the famous Czech playwright, invented the word “robot” a hundred years ago while he was working on the famous science fiction play entitled “R.U.R.”. Over the last century, this science fiction has gradually become an everyday reality, and the Digital Revolution increasingly permeates every walk of life in the 21st century. What, in your view, are the major societal impacts of this phenomenon?

In the digital age, we leave millions of digital traces in the course of our everyday lives. Every time we send an email, make a phone call, drive past an automatic license plate reader, use social media, or buy something with a credit card, we leave a digital trace.

These digital traces can be scooped up by law enforcement (or by other institutions who then sell these data to law enforcement) and used in their daily operations.

The proliferation of digital traces we leave—coupled with advances in the technological tools available for storing and analyzing these data—make surveillance possible at an unprecedented scale. That means that police surveillance today is wider and deeper than ever before—it includes a broader swath of people and can follow any single individual across a greater range of institutional settings, which has important implications for social inequality, privacy, and the rule of law.

In your newly published book, Predict and Surveil: Data, Discretion, and the Future of Policing, you examined the use of data-intensive surveillance practices by law enforcement. Can you shed light on how Big Data has been transforming the criminal justice system along with policing in the 21st century? How do law enforcement and police make use of automated decision-making in America and around the world?

Police use of big data has implications for almost every part of policing, from patrol to investigations, risk management, staffing, and crime analysis.

In the book, I write about two different kinds of technologically mediated surveillance, dragnet and directed. Dragnet surveillance refers to surveillance tools that gather information on everyone, rather than just on people under suspicion. An example of a dragnet surveillance tool is the automatic license plate reader, or ALPR. ALPRs can be mounted at static locations such as intersections, or on moving platforms such as patrol cars. They take two photos of every car that passes through their line of vision and record the time, date, and coordinates. Just this one, relatively simple tool makes everyday mass surveillance possible on an unprecedented scale.

The other type of surveillance big data makes possible at a greater scale is directed surveillance, or the surveillance of people and places deemed suspicious. Predictive policing is the classic example of this. Law enforcement agencies typically use place-based predictive policing algorithms to predict property crime and person-based predictive policing strategies to predict violent crime. Both approaches use historical crime data to predict where future crime is more likely to occur and who is more likely to commit it, in order to allocate patrol resources.

How widespread is the use of such surveillance tools in law enforcement in the United States? What have the early experiences been?

The department that was the focus of my case study, the Los Angeles Police Department (LAPD), is not a “typical” or representative police department. It is much larger, has more funding, and is more technologically advanced than almost every other police department. I selected it as my research site because it was one of the first police departments to use predictive policing algorithms and remains at the forefront of police use of data analytics. Therefore, it serves as a strategic case that might forecast broader trends shaping other law enforcement agencies in the coming years. Unfortunately, we do not know exactly how many police departments use predictive policing in the United States or across the world, but according to a 2014 survey of 200 police departments by the Police Executive Research Forum, 38% of responding departments were using predictive policing, and 70% of departments indicated they planned to use it by 2017.

What are the major advantages and shortcomings of using such technologies? What are the social consequences of data-intensive surveillance practices and how do they affect fundamental rights such as the right to privacy or fair trial?

In theory, big data can be used to improve the administration and accountability of justice. For example, digital policing leaves digital trails. If those trails can be externally audited, they can be used to “police the police.” However, in practice, big data technologies can have disparate impact. They are used in socially patterned ways with profoundly unequal, though often elusive, implications. Data-intensive surveillance practices simultaneously obscure and amplify existing inequalities.

For example, the premise behind predictive policing is that we can learn about the future from the past. The “past,” for predictive policing algorithms, is historical crime data, which are a function of both actual crime rates and enforcement practices. When we hold up a mirror to the past, any inequalities in those data are reflected into the future.
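This feedback dynamic can be illustrated with a deliberately simplified sketch (a toy model, not any real predictive-policing system): two areas with identical true crime rates produce different recorded crime counts when one has historically been patrolled more heavily, and a naive predictor trained on those records then flags the over-policed area as the “hotspot.”

```python
# Toy illustration of how enforcement-skewed data shapes predictions.
# Assumed (hypothetical) setup: two areas, "A" and "B", with equal true
# crime rates, but area "A" was historically patrolled twice as heavily.
true_crime_rate = {"A": 10, "B": 10}     # actual offences per month
patrol_intensity = {"A": 2.0, "B": 1.0}  # historical enforcement levels

# Recorded crime reflects not just crime, but how hard police were looking.
recorded = {area: true_crime_rate[area] * patrol_intensity[area]
            for area in true_crime_rate}

# A naive place-based predictor ranks areas by recorded incidents
# and directs future patrols to the top-ranked "hotspot".
hotspot = max(recorded, key=recorded.get)

print(recorded)  # {'A': 20.0, 'B': 10.0}
print(hotspot)   # 'A' — the over-policed area, despite equal true rates
```

The extra patrols sent to “A” would then record still more incidents there, feeding the next round of predictions and reinforcing the original skew.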

Predictive policing can serve to legitimize the biased police practices that produced the original data, putting individuals and places already under suspicion under new and deeper forms of quantified surveillance, while appearing to be objective, or, in the words of one captain I interviewed, “just math.”

There are also downstream consequences later in the criminal legal process. For example, there is little transparency around what surveillance tools and data sources police departments are using; proprietary predictive policing algorithms are often shrouded from public scrutiny through corporate secrecy laws. Such opacity is a barrier to due process, in that it makes it difficult if not impossible to assess bias, contest decisions, or remedy errors.

The Digital Age also allows non-state actors, especially large tech companies and private data centers, to become centers of power. What are the advantages and dangers of the collaboration between law enforcement and these types of large companies?

Yes, the growing role of the private sector in public policing is part of a fundamental transformation in the practice of policing and the institution of “the police.” Historically, the police themselves collected most of the information they used in the course of their daily operations. But the police are increasingly relying on private companies for data collection and the provision of analytic platforms.

Police departments do not typically have the in-house technical expertise needed to build analytic platforms to analyze complex and diverse corpuses of data. The problem is that private vendors can hide behind trade secrecy and nondisclosure agreements, ultimately circumventing typical public-sector transparency requirements and lowering the accountability of the police by making it harder for scholars to study, regulators to regulate, and activists to mobilize for or against specific practices.

One of the differences between the ongoing Digital Revolution and previous industrial revolutions is that this one poses a philosophical dilemma between improving and replacing human capabilities. Where, in your view, can the line be drawn between “improvement” and “replacement” in terms of using algorithmic decision-making in judicial and other law-enforcement decisions?

Although algorithmic decision-making carries with it an air of objectivity and promises to replace problematic human discretion with “unbiased” data, I think it doesn’t so much replace discretion as it does displace discretion to earlier, less visible (and therefore often less accountable) phases of the decision-making process. It displaces it to questions about what data to collect on whom and for what purpose.

What are the main conclusions you came to in your book? What future do you envision for data-intensive surveillance and predictive practices, and does technology make us safer? Should we trust, or should we be afraid of, a future that widely uses modern technology in law enforcement?

Ironically, the evidence base is relatively weak for big data policing. We do not yet know whether or how many of the technologies deployed by police departments today make us safer. What we do know is that rather than being simply a mechanical reflection of the world, big data is fundamentally social. It both shapes and is shaped by the social world in which it is created and used. I think big data is therefore better understood as a form of capital, both a social product and a social resource used by actors within organizations. As such, big data may work to simultaneously obscure and amplify existing inequalities. In that way, I think the book serves as a cautionary tale, a warning against technological utopianism that suggests we can solve intractable social problems with technological solutions.

However, just as the social side of big data can make it perilous, it is also what gives it promise. If big data is social, we can ask, how can we change it? How can we change the social world that underpins big data and its use? I think that requires rethinking not just policing, but also attending to the structural inequalities, institutional priorities, and organizational dynamics shaping big data, in policing and beyond.
