By Dr. Sándor Lénárd
Karel Čapek, the famous Czech playwright, coined the word “robot” a hundred years ago while working on his famous science fiction play entitled “R.U.R.”. Over the last century, that science fiction has gradually become everyday reality, and the Digital Revolution, along with Artificial Intelligence, increasingly permeates every walk of life in the 21st-century world. How, in your view, does technology transform our lives and our societies, including markets in general?
Technology is transforming nearly every sector, from education and healthcare to transportation, e-commerce, and national defense. On the one hand, it is relieving humans of boring, dirty, and dangerous jobs, which improves the quality of our lives. It aids communication and allows people to complete services online. Yet technology also raises concerns in the areas of privacy, fairness, bias, transparency, and human safety. Algorithms are trained on data that are incomplete or unrepresentative, which introduces fundamental questions of equity into decision-making. We need to make sure that technological innovation respects basic human values and creates an inclusive economy. Right now, many people are left out of the digital revolution because they lack access to high-speed broadband. This robs them of the ability to apply for jobs, purchase goods and services online, and gain the benefits of digital innovation.
In your book co-authored with Brookings President John Allen, entitled “Turning Point: Policymaking in the Era of Artificial Intelligence”, you argue that AI is the transformative technology of our time. Can you shed light on why AI is a turning point?
AI is a transformative technology because of its ability to analyze data, text, and images in real time and act intelligently based on those assessments. In conjunction with machine learning and data analytics, it enables quick decision-making in complex environments, allowing humans to deal with a variety of issues. AI is at a turning point because its capabilities have risen to the point where the technology can move us towards utopia or dystopia. Many AI applications are dual-use in nature in the sense that they can be used for good or ill purposes. Facial recognition software, for example, can be used as an instrument for mass surveillance or can find lost children. That quality makes it difficult to regulate because it is hard to preserve its benefits while eliminating its negative features.
What are the opportunities and shortcomings of using Artificial Intelligence or automated decision-making processes in business or elsewhere?
Automated decision-making is a virtue in cases of routine data processing. AI can speed up processing and make decisions based on that analysis. That liberates humans from tedious tasks and improves the efficiency of business operations. Yet as transactions and activities become more complicated, it becomes more challenging to build algorithms that act equitably and fairly. There is always the risk that software will miss important parts of the complexity or make decisions that are unfair or unsafe. For that reason, it is important to keep humans in the loop so that personal judgment ensures the algorithms act in a reasonable manner. We have to make sure that AI conforms to human values and makes decisions that are safe, fair, and transparent. There needs to be periodic assessment of AI’s impact on various groups to ensure that it respects basic ethical principles.
One of the differences between the ongoing Digital Revolution and previous industrial revolutions is that this one poses a philosophical dilemma between improving and replacing human capabilities. Where, in your view, can the line be drawn between “improvement” and “replacement”?
In the short-run, AI is more likely to augment than replace human performance. Algorithms can help us do a better job and be more efficient in the way we analyze information. But it is not so advanced that it can replace the judgment and nuance required in most jobs. We are a long way from artificial general intelligence as most algorithms are good at specific tasks but are not able to move from one activity to another.
There are some exceptions. In the finance area, AI is being used for fraud detection and wealth management. It turns out algorithms are more rational and less emotional than humans, and those are terrific qualities for managing money. We are also starting to see fully automated retail stores that use computer vision to see what you are purchasing and automatically charge your credit card or mobile payment system. Finally, AI is becoming accurate at reading CT scans and X-rays, so human radiologists may be at risk of being supplanted.
What, in your view, are or should be the ethical and legal boundaries of replacing human capabilities?
I think AI is decades if not centuries away from replacing humans. The algorithms we see now are single-purpose in nature in the sense that they are very sophisticated at doing specific tasks but not capable of migrating from one task to another. For example, the AI that powers autonomous vehicles cannot instantly shift to running an automated retail store. It takes a lot of effort to master a single task, let alone to perform the wide range of human activities. It will be a long time before humans face that risk, which gives us time to define the ethical and legal boundaries of advanced technologies.
What are the roles and duties of states in terms of regulation? How can or should they regulate Artificial Intelligence and automated decision-making?
Governments are starting to develop risk-adjusted systems for regulation in which the degree of regulatory oversight varies with the scope of the algorithm and the number of people being affected. There are small-scale AI systems that don’t affect many people and therefore do not require any significant oversight. But then there is AI that operates on a large scale, affects millions of people, and poses significant threats to human safety. Those areas require serious oversight to make sure the AI is safe and does not harm large numbers of individuals. The key is to have the regulation proportional to the risk of the AI application.
Self-regulation by companies has also been on the rise in the field of Artificial Intelligence and automated decision-making. Some areas of this self-regulation may extend to core constitutional values, such as free speech, in the case of tech companies that provide social media services. How do you see this dilemma? How can countries ensure compliance with their constitutions?
I think the era of self-regulation is coming to an end and we are going to see many more government policies that oversee AI and automated decision-making. Very few people trust private companies to make fundamental decisions about freedom of speech and issues of human safety. Those are traditional functions of government and soon we will be enacting laws and regulations that set the limits. That is how we have handled other areas, and I anticipate the same development will take place in the digital realm. Policy always lags emerging technologies, but when people are upset, it can catch up fast. We are at the beginning stages right now of the policy catching up with the technology.