Johannes Bahrke on AI Act: ‘We want to make sure that people can trust the technology, because if they don’t trust it, they will not use it’

To Vima interviewed the EU Commission Spokesperson on digital economy, research and innovation, Johannes Bahrke, who explained in detail what the AI Act means and the dangers of Artificial Intelligence

The European Union moves closer to adopting the world’s first AI Act. The European Parliament gave the green light to a set of rules that could make citizens’ lives easier and safer. The law still needs to be formally endorsed by the Council.

EU Commission Spokesperson on digital economy, research and innovation, Johannes Bahrke

To Vima talked to the EU Commission’s spokesperson Johannes Bahrke, who explained in detail what the AI Act means and the dangers of Artificial Intelligence. Mr Bahrke underlines that “what the AI Act does is basically to see what is harmful, what is not harmful, or what is potentially risky or not”, adding that “We want to make sure that people can trust the technology, because if they don’t trust it, they will not use it. If they don’t use it, well, then, we will miss that train, because it’s a huge innovation potential”.

What does AI mean for Europeans and their lives? Is it useful? Is it dangerous?

Johannes Bahrke: It’s an easy question with a difficult answer. What we want in Europe is to use AI in a trustworthy way. We see that there’s a lot of potential. That’s why we have said we want to have excellence and trust in AI. Excellence meaning we want to have good researchers. We want to support AI. We give more than €1 billion from Horizon Europe and the Digital Europe programme every year for research into AI. And we want to stimulate €20 billion in investment; that’s the long-term target for the end of the decade, with member states and industry chipping in. Because we want to be good in the sector; if we’re not good, then others will be. We want to tap that potential, but we want to have trustworthy AI. We want to make sure that people can trust the technology, because if they don’t trust it, they will not use it. If they don’t use it, well, then, we will miss that train, because it’s a huge innovation potential. AI can do many good things, but many people are also worried. And this is why we have the AI Act, the first legal framework in the world. It actually regulates only where it’s necessary, which is a minority of cases, by the way. We are talking about 10 to 15% of cases that are really, let’s say, high risk, and for those there are very stringent checks and balances: market actors that put AI products or services on the market need to fulfil certain criteria, such as a conformity assessment, implementing quality and risk management systems, as well as registering the system in a public EU database.

Could you elaborate on the dangers that have to do with AI and how the AI Act can tackle them?

Johannes Bahrke: What the AI Act does is basically to see what is harmful, what is not harmful, or what is potentially risky or not. Systems are classified as high risk or low risk, and risks that are actually not tolerable for our society are excluded altogether. If you start at the, let’s say, low-risk end, you will have things like a chatbot, where it’s just important that people know that they are not speaking with a human but with a chatbot, so that they also know what they can expect from the system. But this is not harmful. At the end of the day, if the service is bad, you might go somewhere else. If your bank offers you a chatbot and you think, okay, I’m not getting anywhere, you will then maybe go to another bank. That is not risky. Then we come to high-risk cases, for which we have defined use cases in an annex to the Act, in several areas. There we say we need to look more closely. That can be, for example, because many people are worried about this, in the world of work: if you are applying for a job and your CV is screened by a system, or you are having an interview, maybe online, and the interview is basically checked by an AI, whether you are nervous or you are lying, these kinds of things. Then these systems need to fulfil certain criteria. The data needs to be of quality. It needs to be traceable where the data is coming from. You need to have a human in control. You need to know why and how the decisions are made, these kinds of things. This is a longer list of criteria that need to be fulfilled in order for such a system to be legally on the market in Europe (some of these criteria are mentioned above). And it doesn’t matter whether it comes from a European system or from abroad: if it’s on the European market, compliance is required. It’s a bit like a car, right? If it comes from another country or continent, it has to fulfil requirements.
And then we have, of course, risks that we don’t want to have: AI use cases that we don’t want to have in Europe as a society. For example, social scoring, which is forbidden. Or when AI is used to manipulate you and trick you into things; this is illegal under the AI Act. And then we have an area that many people are also interested in, which is biometrics: the remote biometric identification of people in real time by law enforcement, which in general is forbidden, but allowed under certain exceptions. There is a defined list of serious crimes. And there is a law that regulates this; a judge needs to give permission. So it’s not that you can just permanently check everyone everywhere. This is not the case. But if you have, let’s say, a suspicion of an imminent terrorist attack, or you are looking for a missing child, or you have a murderer fleeing, you might, under these conditions, get permission for law enforcement to use biometrics to identify a victim or a perpetrator of a crime.

Does this have to do with fundamental rights?

Johannes Bahrke: Nowadays we have two pieces of law. One is the General Data Protection Regulation, and the other one is the Law Enforcement Directive. In both, there are exceptions already now. But the AI Act clarifies this, and also sets a standard for the entire EU. But once again, it’s an exceptional case. Because of your fundamental rights, your personal rights as a person, you normally need to give permission for your personal data to be used, because this is also the data that makes you recognizable. But in this case you don’t, because you have this overriding public interest of preventing a terrorist attack, for example, or of finding a perpetrator of a crime. But once again, the safeguards are very high for this. So this is an exceptional use of this technology.

There is also the first European AI office. How is this going to work?

Johannes Bahrke: The office is an innovation coming from the AI Act. It has a role, on the one hand, to oversee the implementation of the Act, and on the other hand, specifically to work on general-purpose AI. General-purpose AI is basically developed without knowing for which use case it will eventually be used. In total, 98 posts will be made available for the AI Office.

What should we expect in the future regarding AI?

Johannes Bahrke: After the Parliament’s vote, there will be the Council’s adoption and then publication in the Official Journal of the EU. It is always the same for each law. The prohibitions become applicable in six months. So already in six months, everything that I mentioned earlier that would be prohibited is prohibited. Provisions regarding general-purpose AI will be applicable after 12 months, and all the other provisions after two years. But you obviously already start working, so it’s not that you will lose time. All the provisions of the AI Act will be in force after two years. And to bridge that time, on the one hand, we will have the AI Pact, which is actually very important. We expect to invite companies to sign up to the provisions of the Act already before it’s actually legally enforceable, and to say: we get ready earlier than we need to, because it also helps us to comply. We have around 400 companies who have shown an interest or signed up, let’s say. And then we also have the work at the international level.
