Modern technologies facilitate the genesis of the “surveillance state” - conversation with Professor David Gray

2 September 2019, 15:57

Rather than privacy, the fundamental question posed by modern technology is how to strike a balance between the freedom and liberty of the people, collectively, and the power of the government to conduct surveillance, points out David Gray, Professor of Law at the Francis King Carey School of Law at the University of Maryland, in the interview series of Lénárd Sándor, researcher at the American Studies Research Institute of the National University of Public Service in Budapest.

Dr. Sándor Lénárd

Thank you for accepting the invitation to this interview series. Professor, you are the author of the superb book, The Fourth Amendment in the Age of Surveillance. In this book you foresee the emergence of what you call the “surveillance state”. Could you please shed some light on the nature of the surveillance state: where does it come from, and what threats does it pose to the people?

The contemporary “surveillance state” is a twenty-first century iteration of governance challenges that recur every so often at the intersection of two phenomena: the teleological tendency of executive agencies to expand their powers and the emergence of new technologies that facilitate the expansion of government power. This does not mean they are necessarily evil; it is just part of their nature. The internal logic of the executive branch of government leads executive agencies to aggregate to themselves increasing degrees of power and to claim broader discretion in order to facilitate their projects of security and social control. These efforts are often rationalized as essential to deal with emergencies, real and imagined. As the Canadian Freeholder, an eighteenth-century commentator, put the point, they are “fond of doctrines of reason of state, and state necessity, and the impossibility of providing for great emergencies and extraordinary cases, without a discretionary power . . . to proceed sometimes by uncommon methods not agreeable to the known forms of law.” Left unchecked, this leads to the arbitrary use of power and, eventually, autocracy. One of the fundamental challenges of governmental design is, therefore, to establish limits on executive power. This is what led to the creation of the Athenian senate, and it is certainly central to understanding the system of checks and balances that is a central feature of the United States Constitution.

Of course, that system is not static. Every generation or so there are significant changes in the technologies available to executive agencies as they pursue their projects of security and social control. In the late 18th and early 19th century it was the uniformed paramilitary police force. There was a tremendous amount of controversy in England about whether to follow the then-emerging continental model. The English rejected the idea, and we in the United States inherited that skepticism. That began to change with Sir Robert Peel’s creation of the Metropolitan Police Force in 1829. Notably, those first “bobbies” did not have authority to conduct investigations, for fear that those powers would be abused. Over the following decades, police forces with full investigative powers were established in municipalities all over England and the United States. The inevitable happened, of course, and we saw dramatic expansions in police powers, often driven by claims of necessity and emergency, and the increasing abuse and arbitrary use of those powers. In the United States, our Supreme Court responded by imposing a series of constitutional constraints on law enforcement using the Fourth, Fifth, Sixth, and Fourteenth Amendments to the Constitution.

What are the current challenges of the “surveillance state”?

This same pattern recurred in the twentieth century. New and emerging eavesdropping technologies, including wiretapping, provided government agents with new means and methods. The illegal production and distribution of alcohol during Prohibition, followed by the Cold War, retrenchment in the face of the Civil Rights movement, and then the War on Drugs, provided a series of emergencies that executive agencies could point to as grounds for asserting broader discretionary powers to use these technologies to facilitate programs of broad and indiscriminate surveillance, the results of which were elaborated in the Church Committee reports. Here again, the Supreme Court responded by imposing new constitutional constraints, and Congress passed a series of regulatory and reform measures, including the Wiretap Act, the Foreign Intelligence Surveillance Act, and the Electronic Communications Privacy Act.

What we are seeing now in the 21st century is the emergence of a new set of technologies that have made it much easier for law enforcement and intelligence agencies to track and surveil individuals and to monitor their activities and behaviors directly and by aggregating and analyzing large amounts of data.  Executive agencies have asserted discretionary authority to deploy and use these technologies in order to contend with contemporary emergencies, including the War on Terror, the War on Drugs, and, most recently, efforts to combat illegal immigration.  As a consequence,

we are now facing new but familiar threats of expanding government powers and the ability of executive agents to watch us

wherever we go and to monitor our associations and activities in the real world and online. This threatens to shift the balance of power between the people and the government in ways that are, to quote Justice Sonia Sotomayor, “inimical to a democratic society.” The challenge for the legislative and judicial branches of government is to adopt prescriptive measures that can preserve our democratic order by checking the arbitrary and autocratic use of executive power, achieving the right balance between the security and the freedom of the people.

However, one must also add that Justice Samuel Alito wrote in United States v. Jones in 2012 that in the pre-computer age the greatest protections of privacy were neither constitutional nor statutory, but practical. He reasoned that carrying out traditional surveillance for an extended period of time used to be cumbersome and costly. Can we still expect privacy amid the technological developments of the digital era?

Yes, we can and ought to expect privacy, even in a digital age. It is sometimes said that nobody cares about privacy anymore. Just look at social media. In addition, as Justice Alito says later in that same opinion, we are often willing to trade privacy for convenience, and increasingly do, to the point that some would argue that privacy is dead. However, I think that this is a false narrative, one that is not borne out by any of the empirical work done over the last five or ten years. For example, people who are active participants on social media have a robust sense of privacy, and those forums are regulated by unwritten, but widely accepted and culturally enforced, privacy norms.

Another thing I found interesting about Justice Alito’s points about the impracticality of widespread human tracking and the new threats posed by technological tracking is that they parallel pretty precisely the rise of eavesdropping technologies in the twentieth century. Absent the recruitment of large numbers of informants—such as we saw in the German Democratic Republic and other Soviet Bloc countries—the practical challenges of conducting human eavesdropping on person-to-person conversations limit the threat of broad and indiscriminate surveillance. By the middle of the 20th century, the telephone had become a ubiquitous feature of life in the United States. At the same time, wiretapping technologies allowed for the widespread interception and recording of telephone conversations. So, while the Supreme Court could afford to grant executive agents unfettered discretion to deploy and use wiretaps in 1928, which it did in a case called Olmstead v. United States, we were facing the possibility of a “surveillance state” facilitated by wiretaps and other eavesdropping technologies by the time the Supreme Court decided Katz v. United States in 1967. In that case, the Court shifted the focus of the Fourth Amendment toward privacy by adopting a new definition of search based on reasonable expectations of privacy, opening the door to regulating wiretaps. Congress got there first, adopting the Wiretap Act in 1968, which set limits on the use of wiretapping technology that are probably more robust than the Fourth Amendment requires.

So, Justice Alito’s observation about tracking is true, but he is wrong to the extent he suggests a sighing abdication to the inevitable. Our history points us in a different direction. One of the critical contributions of my book is to emphasize the importance of focusing on technologies and the collective rights of the people to be secure against threats posed by the arbitrary use of searches and seizures. Those threats are particularly significant when we are dealing with technologies capable of facilitating programs of broad and indiscriminate surveillance, which is what we mean when we talk about a surveillance state. Tracking technologies certainly fall in that category. These new technologies mean that law enforcement officers no longer have to go out and follow people around. Instead, they can deploy GPS tracking devices or they can use cell site location information to monitor precisely the movements of anybody and everybody. That capacity is not exclusively a threat to privacy. It is a threat to our democratic order. So, the challenge for a constitutional court is not just to think about the preservation of privacy. Rather, the fundamental question is how to strike a balance between the freedom and liberty of the people, collectively, and the power of the government to deploy and use tracking technologies in the name of security and social control.

How do emerging social media platforms alter the protection of privacy when law enforcement does not even need modern tracking technology to gain access to information about many people’s lives? How can we protect privacy against ourselves?

This is part of a very interesting public conversation about how we should think about these new spaces—Twitter, Facebook, and so forth. Are they like the public square? Are they like our homes? Are they like salons, union halls, and churches—places central to civil society? Or are they—as some digital exceptionalists would have it—something completely new and unprecedented? Our answers to these questions matter a lot from a Fourth Amendment perspective. The Fourth Amendment as it was originally understood protected homes and papers because these were the social media of the age. There was a clear understanding of the linkage between security from unreasonable searches and seizures and other political rights guaranteed to “the people”—such as the right to elect legislators and the right to assemble. After all, the plaintiffs in the famous General Warrants cases—Entick, Wilkes, etc.—were pamphleteers targeted because they were publishing their criticisms of the King and his ministers. The courts struck down general warrants in those cases because they granted unfettered licenses for executive agents to search homes and seize papers, threatening “the person and property of every man in this kingdom” in a way that was “totally subversive of the liberty of the subject.” So, from the very beginning, limits on searches and seizures have been linked to the protection of civil society and political liberty. The twentieth century saw the emergence of new technologies like the telephone, radio, and television that provided us with new spaces for civil society to flourish and new opportunities for government agents to monitor activities in those spaces. Congress and the courts responded, setting limits on surveillance in some of these new spaces and granting government agents broad discretion to conduct surveillance in others.

But again, I want to emphasize that these decisions were not exclusively about privacy. The more fundamental question was about where to strike the balance between liberty and security, government power and freedom, with the overall goal of preserving our constitutional order. If you make the mistake of thinking about these questions exclusively through the lens of personal privacy rather than focusing on the security and liberty of the people, then you end up with dramatic distortions. Consider, as examples, the third-party doctrine and the public observation doctrine. The third-party doctrine is grounded in the idea that it is unreasonable for individuals to maintain expectations of privacy in information disclosed to third parties. The public observation doctrine holds that it is unreasonable for individuals to maintain expectations of privacy in anything exposed to “public” view. Together, these doctrines granted government agents broad discretion to conduct all manner of intrusive surveillance, including tracking, visual monitoring, and data surveillance. The emergence of new technologies over the last fifteen years has revealed that these doctrines grant far too much power to executive agencies, raising the specter of a surveillance state. While Congress has remained largely silent, the courts have begun to respond in cases like United States v. Jones, Riley v. California, and, just last term, Carpenter v. United States. In each of these cases, the animating threat has been against the collective interests of the people.

We should be thinking about social media platforms in this same vein.  Specifically, we should be asking about the role of these emerging spaces as loci for ethical self-development, civil society, and political discourse.  We should be affording them the protections necessary to secure those benefits.  If we think about these new spaces as centers for the exercise of our democratic rights, then granting law enforcement unfettered authority to conduct surveillance is, to borrow again from Justice Sotomayor, “inimical to a democratic society.” 

After all, in a democratic society it is the people who are surveilling, judging, and disciplining the government,

not the other way around.  In autocratic regimes it is the government who watches, judges, and disciplines the people.  That is why there is so much discussion these days about panopticons.  Like the panopticon, the surveillance state uses the constant threat of observation to exercise social control.  So, the question you raise, which is a critical one, is whether and how we are going to secure these new sites of democracy against unreasonable intrusions by the state. If a social media platform is critical to civil society and the exercise of democratic rights, then it must be secured against unreasonable intrusion by the state.  I think the Supreme Court is starting to appreciate these issues, which is why the justices are writing about political association, religion, and democratic norms in cases dealing with GPS tracking and cell site location information.

Beyond, and often together with, the new technologies of the digital era, we are witnessing the emergence of algorithmic decision-making. Its use is not limited to businesses; in many countries, law enforcement and the criminal justice system have also begun to introduce it. Estonia, for example, has announced plans to use algorithmic decision-making in the form of “robot judges” to increase efficiency. What are the constitutional threats of algorithmic decision-making and the use of artificial intelligence by law enforcement officers and by the criminal justice system?

This supports what I pointed out before: law enforcement agencies rely on new technologies to achieve more perfect security and more perfect social control as they aggregate greater degrees of power to themselves. There are a number of intersecting conversations around algorithmic decision-making. One of them is the Fourth Amendment problem, which asks whether Big Data, algorithmic decision-making, and algorithmic analyses should be subject to Fourth Amendment constraints. As I explain in the book, I have no doubt that they should be. In any familiar sense of the word, these technologies conduct searches. Moreover, granting government agencies unfettered discretion to deploy and use the technologies would threaten the security of the people in ways akin to granting general warrants. So, they should be subject to Fourth Amendment regulation. The more difficult question is what form those regulations should take. For the most part, US courts have reverted to the “warrant requirement.” So, if a police officer wants to search a home, then he must first get a warrant based on probable cause, issued by a detached and neutral magistrate, that is sufficiently particular as to the place to be searched and the things to be seized. The same is true if he wants to access cell site location information. The problem with the warrant requirement when it comes to Big Data and algorithmic analyses is that it completely destroys the executive utility of these technologies. You cannot go out and get a warrant every time you want to deploy and use Big Data, because it has to be running in the background to some degree. This presents a new set of challenges for legislatures and courts. What I suggest in the book is that we identify more bespoke regulatory measures when it comes to these technologies. I identify eight opportunities to accomplish this tailoring. Specifically, we should be thinking about deployment, what kind of data is gathered, how it is aggregated, how and how long it is stored, who has access to the data, how the data is analyzed, who has access to that analysis, and how the analysis is used. As you go down that chain, you can see different “pathological opportunities”. For example, at the gathering stage, you can imagine gathering much more information than is necessary on many more people than is necessary to accomplish the identified law enforcement goal. At the storage stage, you can imagine holding onto information indiscriminately in perpetuity with no real reason or justification. The National Security Agency seems to be operating with this kind of data-hoarding mentality, storing massive amounts of data indiscriminately without asking critical questions about how long it should be kept, whether and when it should be destroyed, and so forth.
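To make those eight touch points a little more concrete, here is a purely hypothetical Python sketch that expresses them as fields of a single policy object with a retention check. Every field name, role, and limit is an assumption invented for illustration; it does not describe any actual statute, agency rule, or proposal from the book.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional, Tuple

@dataclass
class SurveillancePolicy:
    # One illustrative field per touch point: deployment, gathering, aggregation,
    # storage, access to data, analysis, access to analysis, and use.
    deployment_requires_approval: bool = True               # who may deploy the tool, and when
    collect_only_targets: bool = True                       # what kind of data is gathered
    cross_dataset_aggregation: bool = False                 # how it is aggregated
    retention_limit: timedelta = timedelta(days=180)        # how, and how long, it is stored
    raw_data_access: Tuple[str, ...] = ("case analyst",)    # who has access to the data
    logged_analysis_only: bool = True                       # how the data is analyzed
    analysis_access: Tuple[str, ...] = ("case analyst", "supervising attorney")  # who sees results
    permitted_uses: Tuple[str, ...] = ("investigation of a named suspect",)      # how results are used

def must_purge(collected_at: datetime, policy: SurveillancePolicy,
               now: Optional[datetime] = None) -> bool:
    """Return True if a record has outlived the policy's retention window."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > policy.retention_limit

policy = SurveillancePolicy()
year_old_record = datetime.now(timezone.utc) - timedelta(days=365)
print(must_purge(year_old_record, policy))  # True: indefinite hoarding violates this sketch policy
```

The point of the sketch is only that each stage in the chain, from deployment through use, is an explicit place where a legislature or court could impose a tailored limit rather than a blanket warrant requirement.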

Your question about algorithms brings us to the data analysis stage. The argument in favor of algorithmic decision-making has been that it actually serves to limit the arbitrary exercise of government power because it takes human discretion and human decision-making out of the process. The United States still has a legacy of racial discrimination that is evident up and down the criminal justice system. The argument for algorithmic decision-making is that it removes human bias, erasing racial prejudice from the criminal justice system. Unfortunately, this promise has not been borne out. Where they have been used, in sentencing and parole decisions for example, algorithms have simply reproduced or magnified racial disparities. That is because the data on which they rely often has racial disparities baked in. And, of course, the algorithms themselves are written by humans, who inevitably introduce their own conscious and unconscious biases.
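As a stylized illustration of how disparities baked into historical records are reproduced by a model, here is a minimal Python sketch using synthetic data. The setup and numbers are invented for this example; it does not describe any actual sentencing or parole tool.

```python
# Synthetic-data sketch (invented numbers): a model trained on biased historical
# labels reproduces the bias even though the underlying risk is identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# A protected group attribute (0 or 1) and a "true" risk factor drawn from the
# same distribution for both groups: by construction, neither group is riskier.
group = rng.integers(0, 2, size=n)
risk = rng.normal(size=n)

# Historical labels: past enforcement recorded group-1 individuals as
# "re-offenders" more often at the same underlying risk (the baked-in bias).
p_label = 1 / (1 + np.exp(-(risk + 1.0 * group - 0.5)))
label = rng.random(n) < p_label

# Train on the biased records.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, label)

# The trained model flags group 1 far more often, reproducing the disparity.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
```

By construction the two groups carry identical underlying risk; the only difference is that the historical labels were skewed, and the trained model faithfully carries that skew forward.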

Another problem with algorithmic decision-making is that it suppresses critical thinking and human agency.

We tend to grant computers an almost shamanistic status.  They are modern-day oracles. 

We therefore tend to rely unthinkingly on their outputs and to subordinate ourselves and our free will in much the same way that Hannah Arendt observed bureaucrats subordinating themselves to their bureaucratic roles. In my view, there are few excuses more chilling than “I’m just doing my job.” Our increased reliance on algorithms suggests a new version of this: “I’m just doing what the algorithm told me to do.” That possibility is all the more frightening given the fact that many machine learning algorithms quickly become black boxes. Not even their programmers know how or why they are making decisions. Here again, the potential for expanding and abusing state power comes to the fore. What is the hope for democracy if we all start hiding behind the god-like pronouncements of purportedly objective algorithms, denying our obligations as moral agents in the process?

Some experts are of the opinion that with the emergence of these new technologies, the role of law enforcement, along with the goals of policing, could undergo major changes. While law enforcement is traditionally reactive and responsive in nature, it can easily become much more proactive with digitization and automated decision-making. The technological capability of collecting, storing, and analyzing massive amounts of data and metadata in an inexpensive and highly scalable way can be a game changer and gradually transform the mission of law enforcement agencies toward intelligence-based, preventive policing. How do you see this trend?

This is another wonderful and insightful question. Here, my training as a philosopher starts to take over. Democracy is premised on free will. But many of these predictive technologies add to long-standing skepticism about free will while also suggesting the possibility of manipulation at a very deep level. Our last presidential election showed us the tip of the iceberg. What is democracy if the electorate can be precisely categorized, targeted, and manipulated? What is a free society if the state can shape and control behavior and even character? This may seem a bit off-topic, but these are the big questions at stake here. As to your more precise question, the immediate threats are similar to those we were just discussing. Take, as an example, the “No Fly List.” It is virtually impossible for anyone on the list to determine how they got there or why. It is equally impossible for those wrongly on the list to get off. As a result, we have thousands of citizens who are wrongly identified as threats and scores of human beings incapable of exercising the independent agency to change the situation. The Privacy and Civil Liberties Oversight Board has initiated an investigation of these programs, but I doubt very much that it will engage in a fundamental analysis of the basic model of “data-based,” algorithmic, preventive policing. Also critical, and too often lost in these conversations, is the effect on real people. Take, as another example, facial recognition technology. The purported promise here is to identify and detain dangerous individuals, such as people subject to arrest warrants. Unfortunately, databases housing records of arrest warrants are notoriously unreliable. As it stands,

thousands of people are wrongly arrested and detained every year based on warrants that were withdrawn or issued erroneously.

Those numbers would only grow with the widespread deployment of facial recognition technologies, likely resulting in people being wrongly detained or arrested on a daily basis. In addition, the technology itself is not as reliable as many believe. That is particularly true when it comes to people who have phenotypically African-American features. So, in the name of security and preventive policing, we are about to deploy a faulty technology that uses unreliable data. Worse, it is most prone to produce false positives when applied to African-Americans, both because it is less reliable at distinguishing among Black individuals and because the underlying data is the product of racial bias in the criminal justice system. This is not an oracle we can afford to trust.
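To see why wrongful flags scale with deployment, here is a back-of-the-envelope sketch in Python. All of the figures are invented for illustration; they are not estimates of any real system, and they do not yet account for the unreliable warrant records Professor Gray describes.

```python
# Base-rate arithmetic with invented numbers: even an accurate matcher,
# scanned against a large population with few genuine watchlist hits,
# produces mostly false alarms.
scans_per_day = 1_000_000        # faces scanned city-wide per day (assumption)
true_matches_in_crowd = 50       # scanned people who really are on the list (assumption)
true_positive_rate = 0.90        # chance a listed person is correctly flagged (assumption)
false_positive_rate = 0.001      # 0.1% of innocent people are wrongly flagged (assumption)

innocent_scans = scans_per_day - true_matches_in_crowd
false_alarms = innocent_scans * false_positive_rate
correct_hits = true_matches_in_crowd * true_positive_rate

# Share of all flags that point at the wrong person.
share_wrong = false_alarms / (false_alarms + correct_hits)
print(f"false alarms per day: {false_alarms:.0f}")          # ~1000
print(f"correct hits per day: {correct_hits:.0f}")          # ~45
print(f"share of flags that are wrong: {share_wrong:.0%}")  # ~96%
```

Even with a seemingly small false-positive rate, the sheer volume of scans means the overwhelming majority of flags point at the wrong person, and the problem compounds if the underlying warrant database is itself erroneous.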
