You are the director of iHUMAN, a superb movie on Artificial Intelligence. Can you share the original motivation behind making this film?
The idea actually came from my previous film, “DRONE,” which looks at the secret CIA drone war in Pakistan. While I was working on “DRONE,” I came to realize how Artificial Intelligence (AI) is changing not only modern warfare but also our everyday lives, our societies, and our future. For me it is incredibly important that we now take a step back and think carefully about what this means, because this expansive and rapid change is happening without us really knowing what is going on, and without a political debate about how we tackle all the ethical challenges that this technology brings. So this is why I made iHUMAN. My goal is to draw attention to the ethical challenges of the AI revolution.
As your movie clearly shows, the 21st century is increasingly defined by the presence of AI and algorithmic decision-making. What, in your view, are the major societal impacts of this phenomenon? How is AI reshaping our lives?
We are living in a time when we are completely dependent on intelligent machines. We have AI in our phones and in our computers. We are surrounded by technologies that listen to what we say and see what we do. All of our online activities are constantly tracked, analyzed, and categorized. Today we have what I call a new Algorithmic World Order, where we are constantly under the watch of algorithms. This is part of what is called Surveillance Capitalism. I think this is an excellent term, because most of this surveillance is geared toward targeting us with personalized advertising, turning us into super consumers by keeping us in our echo chambers for as long as possible in order to make more money. What really concerns me is the way big tech companies also control what we know about the world. These companies, especially Google and Facebook, are now leading the AI revolution and are actually called AI companies. When just a few tech companies control most of what we know and have the ability to manipulate this information, so that they also control how we feel and think in order to sell us things, then we are facing a threat to our democracies. In iHUMAN we also show that, in order to grow further, these companies collaborate with the military as well as with surveillance agencies. This is a serious situation, because there is neither oversight nor international regulation of AI, so these companies can operate like a “multinational mafia”: enormous amounts of money, no transparency, no accountability, and no international regulations.
How, in your view, do large tech companies channel this technology to their own advantage?
It is important to remember that Google and Facebook spend more money developing AI than entire nations do. As a result, they often have more power than entire nations, and they influence political processes through lobbying. They are diminishing our privacy. Artificial Intelligence is an incredibly effective tool, and it amplifies the aims of those who control it.
You mentioned that AI could potentially create a new world order. In what ways might this new world order differ from the present one?
I am not against AI. I am incredibly fascinated by this technology, and I acknowledge that it can be used to do tremendous good; like any technology, it is a tool that can be used in many different ways. It is the people and the powers behind AI that concern me. Today there is a new power balance in the world, where tech giants have almost unlimited power and reach billions of people. Tech giants know everything about us: who we are, where we are, who we are with, and what we are thinking. Most importantly, they also control, to a large extent, what we know about the world. When tech giants manipulate our information, we face a new threat to our democracies that we have to take seriously.
How do you view the “improvement versus replacement” dilemma? Does algorithmic decision-making “improve” our lives or does it aim to “replace” them? What kind of threat does algorithmic decision-making pose?
Well, this is a very hard question, and I do not think there is a black-and-white answer to it. One of the advantages people claim is that AI will take over dull, dirty, and dangerous jobs. To some extent this is great, if we can use technology and machines to handle dangerous and really boring jobs that nobody wants to do anymore. On the other hand, we have not solved what we are going to do about the rise in unemployment. We have already begun to see, for example in San Francisco, tremendous problems with homeless people who can no longer afford housing because the tech companies have caused the housing market to skyrocket. The jobs are disappearing, and people can’t afford housing. One of the experts in the film says it is predicted that AI will take over 10 million jobs in the United States alone once we get self-driving cars, for example. But one also has to look at this dilemma from a more philosophical angle: what is the meaning and purpose of our lives? When it comes to algorithmic decision-making, there are many areas of life that need time and where efficiency is simply not the right answer. We don’t necessarily want more efficiency in the justice system, or predictive policing, where it is important to have a humane due process.
You mentioned the possible need for regulation.
What we are facing today urgently requires an international framework of laws and regulations that addresses the new challenges AI brings. However, it is extremely complicated, because AI is the most expansive and powerful technology of our time. It affects most aspects of our lives and our societies, and I do think it is incredibly important that states and the international community draw a red line: this is what AI should be used for, and this is what it should not be used for. Even though it is extremely hard to get states to agree on where this red line should be drawn, I think it is a challenge that requires local as well as global cooperation. While we are still debating whether AI is good or evil, this technology is developing so incredibly fast that there is already a huge gap between policymakers and the tech industry. We are now starting to see Europe take the lead in the push for international regulations; for example, they are considering a ban on facial recognition for several years, until we figure out how to solve the challenges that come with this technology.
What’s the moral of your movie, and why should everybody see it?
I personally think that with AI we face some of the most important challenges of today’s world. AI affects our lives and our societies, and we now have to take responsibility for what kind of world we want to live in. I really believe that in order to control our future, we need to create it. I do believe a different digital future is possible, one in which humanity and all of our diverse voices have a say. If there is a moral in iHUMAN, I think it is the aspiration that, now more than ever, we need to stay informed and take responsibility for our lives and our societies. We are at an important turning point in our history, and we have to make sure we create the right roadmap for the development and implementation of AI, before it is too late. We have to make sure that we stay in control of AI, so that AI doesn’t end up controlling us.