The dynamic nature of technology and the ongoing spread of innovative ideas and applications make tech-related forecasts a very risky business. Of course, that does not mean we cannot make initial assessments and predict that certain dynamic trends will emerge.
In the year now ending, ChatGPT and other similar models have not only brought this technology closer to the man on the street, they have also made AI’s iconoclastic challenges even more real and the need for control still more compelling. 2024 is expected to be marked by the commercial dominance of this type of AI, which will bring benefits to the tech companies that supply the models and the infrastructure on which they are ‘trained’, but also, crucially, to the non-tech companies that will adopt it to reduce production costs and boost productivity.
The next-generation Generative AI tools will not only be more accessible still to the average user. They will also make it harder to distinguish between human- and computer-generated material, raising thorny legal issues in relation to intellectual property along with serious ethical dilemmas concerning human creativity and its coexistence alongside the more ‘creative’ aspects of artificial intelligence. Numerous experts have predicted that, within 18 months, as much as 90% of internet content could be AI-generated. In 2024, the adoption of AI Analytics and the Internet of Things (IoT) will speed up due to the more widespread application of digital-twin technologies, which can analyse sensor and operational data in real time and create virtual replicas of complex systems, physical objects and even personas.
One of the biggest challenges for the year ahead will thus be laying solid foundations for the responsible use and effective control of the most advanced version of this technology. In Europe, the foundations for such a framework were recently put in place with the adoption, after a series of dramatic marathon negotiations, of the Artificial Intelligence Act. This means the year ahead will be devoted to preparing state administrations and companies to comply with the new rules of the game, which seek to ensure transparency in the design and use of AI. At the same time, other international organizations, including the Council of Europe and UNESCO, are expected to adopt rules on AI use which focus on the need to protect human rights. Alongside the legislative attempts to chart and monitor this dynamic technology, 2024 is expected to witness increased pushback against the widespread use of ChatGPT-4 by multiple actors, including universities, research centres and scientists, along with the artists and writers we saw mobilize in 2023’s major Hollywood screenwriters’ strike.
Within this framework, we can expect to see two main trends emerging. First, specialists in legal compliance for specific high-risk applications and experts in content authentication (watermarking, data curation etc.) will be increasingly in demand, given that ethical and regulatory standards will soon be essential in numerous areas, even at the international level. Second, a new market will emerge, focused exclusively on the creation of artificial intelligence products whose comparative advantage will lie in their being designed to comply with specific legal or ethical rules.
But 2024 will be crucial in terms of AI use for one more reason: it is the biggest election year in decades, with electoral contests upcoming in the US, the EU, India, Japan, Mexico and other populous parts of the world. The fear is that the new AI systems will be used to manipulate public opinion and/or specific population groups, through microtargeting, big data analytics and deep fakes, and sway election results in one direction or another. At this point, it is worth noting that the European Digital Services Act, which puts in place fundamental protections for (digital) citizens on 19 very large tech platforms and beyond, becomes fully applicable in 2024.
In other words, the year to come may well prove decisive both in terms of whether the technological predictions of the dominance of Generative AI prove well-founded (a crucial element here will be whether vast pools of personal data are available or not), and in terms of whether the voices calling for the temporary prohibition or strict regulation of certain creative applications will be heeded until we amass the experience and know-how required to control an evolving technological phenomenon.
Still more importantly, it remains to be seen whether the coming year, with the challenges it poses, will push the political system into providing support for those who are likely to become collateral damage of the ongoing technological revolution: the elderly, who are generally not equipped to enjoy any of its benefits; the children who will inevitably become still more addicted to a virtual reality whose allure will only grow in parallel with its degree of personalization; and those who will be rendered ‘disposable’ or even ‘redundant’ by the rapid automation of the creative tasks they currently perform.