From Deepfakes to Deep Impact: AI’s Influence on Elections

The 19th edition of The Global Risks Report, published in the wake of the World Economic Forum in Davos, ranks the ten most serious short-term global risks in order of severity. At the top sit misinformation and disinformation. These two concepts are sure to be a concern in 2024, a bumper year for elections in which more than 4 billion people will be entitled to vote in at least one of the electoral contests to be staged in more than 60 countries.

At the same time, barely 15 months have passed since ChatGPT arrived with a bang and introduced the general public to generative AI. This technological revolution caught lawmakers and citizens alike unprepared and has left them stunned.

This technology did not emerge overnight. We have been training machine learning models for years on human-generated big data. Importantly, models of this sort do not produce new knowledge. What they actually do, and do convincingly, is reproduce human thinking, with all its biases and weaknesses. What we now simplistically call ‘artificial intelligence’ is built on statistical rules and billions of pieces of data; it might more accurately be described as “the reflection of the average”.

Given that these weaknesses are inherent in models that can perform operations far faster than any human mind, it is clear that this year’s election races find the world ill-prepared for a great danger.

The danger becomes greater still when we consider AI’s potential. The technology can, for example, produce entirely plausible deepfake images, videos and audio. Given today’s fragmented electorates, tampering of this sort with the boundary between reality and unreality could prove a decisive factor in who wins and who loses.

Online ecosystems, as modern-day Agoras, shape the average person’s perception of the political realm. Who could challenge an army of fake bot accounts supporting one candidate over another? Who would question search engine results inundated with “journalistic” articles that use just the right keywords to masterfully manipulate SEO?

These are just two examples of how artificial intelligence could be employed maliciously. Indeed, in a world where these models are widely accessible and in which, for the time being at least, there is no regulatory framework or safety net (e.g. digital watermarks) in place, the end product, misinformation, must be addressed. Never before has there been such an urgent need for accurate information, effective fact-checking and, most importantly of all, digitally literate citizens transmitting and receiving information.

Artificial intelligence is neutral. It is not inherently good or bad, just another tool with which humans can do good or bad, only at a far, far faster pace. As a multiplier, it forces us to confront the issues, major and minor, that we preferred to sweep under the carpet, magnifying old inequalities and widening social divisions. The solution lies not in limiting what these models can do, but in ensuring that they function ethically and in identifying and eradicating the inequalities currently magnified by this reflection of humanity.

The results of this election year will be decisive for democracy and, by extension, for the future course of artificial intelligence. We are at a crossroads in history and we must decide whether we will live in human-centered societies with a focus on morality, or allow inequalities to drag us to extremes.

*Petros Karpathiou is Digital Communications Coordinator at ELIAMEP
