Yuval Noah Harari: ‘We need China on our side to combat Artificial Intelligence’

Harari discusses his new book "Nexus" on Artificial Intelligence, in an interview with TO VIMA

The Israeli medievalist, historian and public intellectual currently serves as a professor in the Department of History at the Hebrew University of Jerusalem. The esteemed author of the popular-science bestsellers “Sapiens: A Brief History of Humankind” (2011), “Homo Deus” (2016) and “21 Lessons for the 21st Century” (2018) talks about his new book “Nexus”, on Artificial Intelligence (Alexandria Publishing House).

In our previous interview, in Athens, I asked you what “the next Coronavirus” might be, and you answered “perhaps a digital virus”. After the Microsoft blackout, does such a threat seem closer to us?

Absolutely. The Microsoft blackout happened because of an accidental glitch in a tiny component. Now imagine what a deliberate cyber attack by a major power can cause. So much of our world – airports, hospitals, banks – depends on digital infrastructure. A digital virus could potentially kill thousands of people, and cause immense economic harm and political upheavals.

What is the stance of populist leaders towards the accumulation of information nowadays? How do they use it, and what is the connection with so-called “post-truth”?

The typical populist strongman, like Putin or Trump, believes that power is the only reality. He himself wants unlimited power, and he believes all humans are like him. Whenever scientists, journalists or judges claim to be seeking the truth, the strongman doesn’t believe them, and sees this as a threat to his own power. For the strongman, truth and science are a restriction on his power. Therefore the strongman routinely spreads disinformation, fake news and conspiracy theories. His aim isn’t to achieve this or that temporary goal, but to undermine people’s trust in the very idea of truth.

Why is the misuse of algorithms in Myanmar so instructive for understanding how they spread hate speech? Does the same apply to Bolsonaro’s rise?

In Myanmar in 2016-18, Facebook gave its algorithms a seemingly benign goal: to make more people spend more time on Facebook. But the algorithms then discovered by trial and error that the easiest way to achieve this goal was to spread hate and outrage. When the algorithms recommended to users hateful conspiracy theories and outrageous fake news against the Rohingya minority, users spent more time on Facebook, and shared this content with more friends. This increased ethnic tensions in Myanmar, and ultimately led to an ethnic cleansing campaign against the Rohingya, in which thousands were killed and hundreds of thousands were expelled. Similar dynamics led to the rise of Bolsonaro in Brazil, and of other populist leaders elsewhere.

This shows both the power and the autonomy of algorithms. Nobody instructed the Facebook algorithms to spread outrage or to harm the Rohingya. They were told only to increase user engagement. It was the algorithms themselves that discovered that outrage creates engagement, and that decided to spread outrage.

We can solve the problem of social media algorithms spreading outrage. But the problem of AI algorithms gaining power and making dangerous decisions will only get worse. The most important thing people should understand about AI is that AI isn’t a tool. It is an autonomous agent. AI is the first technology in history that can make decisions by itself and create new ideas by itself. The Facebook algorithms in 2016 were still a very primitive kind of AI. They could make some decisions, but they weren’t very good at creating new ideas. They spread content created by humans. Today there are far more powerful AIs that can create texts, images and videos by themselves. What will happen to human civilization when millions of very intelligent non-human agents join our political, social and economic networks, and start to make independent decisions and create new ideas? The result might be new weapons designed by AI, new financial crises caused by AI, new religious cults led by AI, and new totalitarian empires managed by AI.

Does part of today’s political game in the USA belong to algorithms, and to the way the “dark web” favors Trumpist views?

Democracy is a conversation. It is therefore built on information technology. For most of history, the available technology didn’t allow large-scale political conversations. All ancient democracies, like ancient Athens, were limited to a single tribe or a single city. We don’t know of any large-scale ancient democracy. Only when modern information technologies like newspapers, radio and television appeared did large-scale democracies become possible. The new information technologies of the 21st century might again make large-scale democracy impossible, because they might make large-scale conversations impossible. If most voices in a conversation belong to non-human agents like bots and algorithms, democracy collapses. There are a lot of arguments about American politics nowadays, but I think everybody can agree on one thing: the USA has the most sophisticated information technology in history, and at precisely this moment Americans are losing the ability to hold a rational conversation.

How has Vladimir Putin manipulated the spread of information within Russia and pro-Russian countries at the expense of the Ukrainian people?

Putin is spreading lies to justify his invasion of Ukraine. He tries to blame NATO for the war he himself started. The real cause of the war is obvious: For years Putin has repeatedly claimed that the Ukrainian nation doesn’t exist, and that all of Ukraine belongs to Russia. His goal is to completely destroy Ukraine and reestablish the Russian Empire. This is why every piece of territory conquered by the Russian army is annexed by the Russian state. Anyone who blames the Americans or Ukrainians for the war has been fooled by Putin’s propaganda machine.

What is your biggest fear, and your biggest hope, as far as the use of Artificial Intelligence is concerned?

Throughout history, humans have lived cocooned in a culture created by other humans. In the future, we might live cocooned in a culture created by non-human intelligence. My biggest fear is that AI will trap humanity inside a world of illusions, and we will not be able to break out, or even suspect what is happening. My biggest hope is that AI will not only help humans deal with many of our current problems – from disease to climate change – but also help us get to know ourselves better.

What must be the limits for a democratic society in its effort to control surveillance systems?

Democracies can certainly use new surveillance tools for good purposes, for example to monitor diseases and provide citizens with better healthcare. But limits must be placed on surveillance to protect our privacy, and whenever we increase top-down surveillance of citizens, we must balance it by also increasing bottom-up surveillance of the government and big corporations. AI can help the health ministry and big pharmaceutical companies to know more about our illnesses, but at the same time it should also help us know more about any corruption or tax evasion in the health ministry and big companies.

How would a bureaucrat from the Cold War-era USA or USSR react if they had the chance to travel through time and see that today’s global networks possess a huge amount of personal data?

They would be amazed by the new possibilities AI opens. In the USSR, the KGB couldn’t spy on everyone all the time. The KGB didn’t have enough human agents to follow 200 million Soviet citizens 24 hours a day, and didn’t have enough human analysts to analyze all that information. Today, digital agents and analysts can do it. It is consequently becoming possible for a bureaucracy in countries like Russia to follow everyone all the time, and know more about citizens than we know about ourselves. We quickly forget most of what we do and say, but a state bureaucracy equipped with AI will never forget anything we do or say.

China is the antithesis of free information flow. Is this also a big threat in our time?

In China there are strict limits on freedom of speech. On the other hand, without Chinese cooperation we have no chance of regulating AI. No power will limit its own development of AI unless other powers do the same, because nobody wants to be left behind in an AI arms race. It should be possible for the West and China to cooperate on this, because we do have common interests. If AI enslaves or destroys humanity, this will be bad for the Chinese as well as for Westerners. In addition, autocratic regimes have their own unique fears of AI. Throughout history, the biggest danger to every autocrat came from his own subordinates, and to stay in power autocrats constantly terrorized their own soldiers and bureaucrats. But how can you terrorize an AI? If it disobeys, would you send it to a digital gulag? Nothing frightens an autocrat more than powerful subordinates he cannot control. This means that there are good reasons even for autocratic regimes to cooperate with democracies to regulate and control AI.

After all, could one be optimistic up to a certain point that we can still define our future and the use of information networks?

Yes, there is still room for optimism. AI can be used for many good purposes. It all depends on our ability to control it. AI enthusiasts say that every technology comes with risks, but that this shouldn’t prevent us from developing and using it. We use cars despite the risk of car accidents. But when you learn to drive a car, the first thing they teach you is how to use the brakes. Only afterwards do you learn how to press the accelerator. With AI, we are pressing the accelerator before we have learned to use the brakes. That is very dangerous.
