By Dr. Sándor Lénárd
Karel Čapek, the famous Czech playwright, introduced the word “robot” a hundred years ago in his famous science fiction play “R.U.R.”. Over the last century, this science fiction has gradually become an everyday reality, and the Digital Revolution increasingly permeates every walk of life in the 21st-century world. What, in your view, are the major societal impacts of this phenomenon, and how does it shape societies throughout the world?
Great question! And a very broad one. In brief, I see a marriage between two trends: on the one hand, the long-term development of artificial intelligence (AI) since at least the 1950s, which in its contemporary form largely means data-dependent machine learning; on the other, the digital computerisation that became the internet and has now largely become automated platforms organising and interacting with most aspects of society and our everyday lives.
This means that these two paths, AI and digitalisation, and their different communities of scholars now meet. It also means that
AI in a very mundane way has become key for how our everyday lives are organised,
which makes it so important to understand what this interaction between automated, learning platforms on the one hand, and humans and our social structures on the other, actually means. There is a growing awareness that we need to understand not only the optimistic notions of scalability and digital automation, from the perspective of efficiency and relevancy-matching, but also the absolutely crucial challenge of how societies’ imbalances are mirrored in design and its implementation; misrepresentation, racism, gender inequality, and xenophobia also shape this interaction, and may be reinforced or even amplified through various automated means. This leads me to argue for a multidisciplinary approach to the scrutiny of robotic agency – material or not.
Why and how, in your view, does the Digital Revolution differ from previous industrial revolutions throughout human history? A recent conversation with Christopher Markou shed light on an interesting dilemma between improving and replacing human capabilities. What is its importance, and how would you draw the line between the two?
Well, firstly, I think that the relationship between humans and technologies is complex enough to always include both. Technological novelties seem often to have gone hand-in-hand with re-organisations of labour, partly with conflicting perspectives between individual security and organisational efficiency. Contemporary debates sometimes overplay the significance of entire professions being replaced and underplay the fact that most professions are continuously and dynamically changing. To me, ultimately,
technological progress is a constant reminder that learning is a life-long venture,
but I’d leave the deeper commentary on the datafication and automation of working life to those with that expertise. That being said, the sociology in my socio-legal training still urges me to keep track of the shifting power relations and inherent dependencies between the different parties involved. As our societies become more “platformised” and dependent on large tech companies, we need to understand what that dependency brings and how we can find appropriate balances in this quite fundamental shift.
I think that what is new with the new, so to speak, is the adaptability and possible agency of contemporary and coming machine-learning capabilities. Things acquiring agency, albeit an age-old fear, is in a computerised way very much here now. Not as a cognitively aware intelligence, but as a mimicking agency in adaptable and predictive recommendation systems, personalised services and automated decisions. In contemporary AI, this sort of agency relies heavily on the data available for its underlying pattern recognition. And, given that this data is often collected from human behaviour, it is too often a skewed source with biased legacies, leading to lower accuracy for darker skin in facial recognition, or to recruitment systems that prefer male candidates rather than the most fitting ones, regardless of gender. That “gold standard”, so to speak, is not always so golden after all, posing not only a number of challenging tasks with regard to retrieving better data quality but also – and more importantly, I think – a normative challenge of what such systems ought to learn from, and what they ought to reproduce and amplify in a far from equal society.
The automated decision-making that plays a growing role in both business and governmental policy has various impacts on humans as well as on certain aspects of their fundamental rights. These range from freedom of speech and of the press in the case of social media newsfeeds, to the right to privacy and data protection, to even fair trial and due process requirements where algorithmic decision-making, such as facial recognition technology, is deployed in investigations or judicial procedures. How can governments step up and preserve rights and values in the Digital Age?
Again, a great, but huge, question. As mentioned above, I think many of the challenges are found in the intricate interplay between learning technologies and already present societal structures. I think that the surge of ethics guidelines in the field of AI makes a number of important points in emphasising accountability and transparency in order to secure fairer and more trusted applied AI systems. As many others have pointed out,
a challenge now is to move from principle to process.
That is, how to create a regulatory setting that clarifies how the well-supported, principled takes on AI and automated systems play out and can be implemented. Some parts are clearly regulated already, for example through European data protection law, and there it is more a matter of implementing what is already established. For other parts, for example around explainability or various notions of transparency, more work needs to be done on clarification and contextual specificity – in the public sector, for health, and so on.
We do indeed seem to be moving into a more technocratic era, where the dependency on large tech companies, which are commercialising data extraction in a semi-automated platform fashion, will be immense across most aspects of society: from news to health to education, public transport and even policing. This is of course far from unproblematic, and we need to critically scrutinise what it means and leads to from a wider societal and humanistic perspective.
The Digital Revolution allows non-state actors, especially large tech companies, to become centres of power in an information society. What role should antitrust law play in preserving constitutional values, both overseas and in Europe?
Well, without even attempting a thorough reply, we can first observe that competition will be a key regulatory area in the years to come for the battles over balancing our digital societies. There will be, and already are,
heated debates between centralist proponents of “gigantism” and “the decentralisers”,
as the insightful Frank Pasquale puts it. There are a number of recent indications of how this battlefield is developing, with several cases against Google and Facebook in the US, as well as recent investigations of Apple and Amazon by the European Commission. And, of course, the Digital Services Act package in the EU, a draft of which we saw in December 2020.
There are still a number of issues linked to the digital conglomerates of today that require much further socio-legal thought.
The implications of market-making and control linked to platforms’ own abilities to govern their infrastructures is one of them. The roles and value of personal data in multisided platforms, where so-called zero-priced services play a data-collecting role for monetisation in other parts of the platform, is another.
I see a necessary development here in how consumer and data protection relate to competition policy, and I see a need for more collaboration and for supervisory methodologies to be developed in the responsible authorities.
This links to a need for more transparency in these data economies, going further than only the largest players. The “ecosystem” of collecting, sharing and brokering individuals’ data needs to be better mapped, understood and, likely, regulated. Currently, the data economy is far too opaque and automated for external scrutiny.
What are the harmful consequences of the business operations of large tech companies, and how, in your view, can regulators step up?
A quick answer would be consumer detriment and the stifling of innovation. More specifically, I’m concerned with the lack of trust that numerous studies have shown consumers to have in digital markets that collect lots of data. There is a risk that this imbalance is detrimental to many of the beneficial solutions and services that could have been. And, even worse, that some of the worst data-extractive business models contribute to and reproduce already present inequalities and social injustices, as a by-product of an attention economy gone too far.
We are still to see exactly how the Digital Services Act package plays out, for example. But that is definitely one domain in which this battle over digital balances will be fought. On the other hand, one should not safely assume that regulators themselves always represent an unbiased and fair perspective. I think that recent developments in relation to the COVID-19 pandemic show that governments, too, may have less proud reasons for tracking their citizens. In some instances, perhaps oddly enough, large tech may even be the privacy-protecting party in the mix.
Europe is currently falling behind the United States and China in terms of digital-era innovation such as Artificial Intelligence. What strategies, in your view, should Europe pursue to create an ecosystem that encourages research and innovation, and which can then provide both competitive and strategic advantages for the continent?
Whether Europe is “falling behind” depends on how you measure. I think that we should see AI development less as a race for sharp biometric or predictive tools and more as a challenge of developing secure and accountable technologies that can be implemented in trusted and beneficial ways. Any tool can, and will, be misused, and the appropriate framing here is to ensure that the strong potential that AI represents also benefits the less privileged – and to strive for an approach based on what is acceptable in a balancing act between different stakeholders, rather than what is optimal in the eyes of the few. The value base cannot be seen as an added layer of ethical consideration to be inserted at a later stage, but should be seen as a core feature setting the expectations for technological development. Fairness in accountable and explainable processes – and we will increasingly hear calls for sustainable and green AI too – needs to set the agenda.
This means investment in talent and research, for sure,
but it needs to be of a multidisciplinary variety, encouraging traditional academic organisations, too, to build stronger collaborative abilities between different faculties and with the surrounding society. That is the race we should be in, and that is what we should measure.