
Marcel Kolaja: The AI Act from the CULT perspective

GAELLE LEWYLLIE

As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Artificial Intelligence Act in mid-February. The draft focuses on several key areas of Artificial Intelligence (AI): the definition of AI and the structure of the act, transparency obligations, high-risk AI in education, high-risk AI requirements and obligations, AI and fundamental rights, as well as prohibited practices.

Let’s discuss some of the most problematic parts.

Practices endangering fundamental rights must be banned, without exceptions

The aim of the regulation is to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. In its current form, the regulation forbids a number of uses of artificial intelligence that, I agree, must be banned: namely, AI systems that manipulate human behavior and predictive AI systems that target people's vulnerabilities. That is the right call; however, there is a catch: exceptions.

These prohibitions do not apply to EU governments and public authorities if the systems are deployed in order to safeguard public security and are in line with EU law. For instance, public authorities would be allowed to monitor users' location, private communications, activity on social networks, and all the records and traces that users leave behind in the digital world. The problem is that such records can easily be used for mass surveillance of citizens. This is especially dangerous with facial recognition systems in countries where the independence of the judiciary is at stake.

For example, the Hungarian government persecuted journalists in the so-called interest of national security for questioning the government's actions amid the pandemic. Even the Chinese social scoring system is justified by the alleged purpose of ensuring 'security'. That is why it is absolutely necessary to set a precedent that prevents governments from using AI systems to violate fundamental rights, especially when democracy is not in its best shape.

These concerns were raised in an open letter signed by 116 Members of the European Parliament last month, calling on the European Commission to address the risks posed by high-risk AI applications that may threaten fundamental rights. Moreover, the same request was raised by citizens during the public consultation.

High-risk applications should go through a third-party conformity assessment

The proposal includes a definition of so-called high-risk artificial intelligence systems. An example of such a system is the Dutch welfare surveillance system, which aimed to predict the likelihood of an individual violating labor laws or committing tax fraud. This system was halted because of proven violations of fundamental rights.

How about HR tools that filter job applications, banking systems that evaluate our creditworthiness (i.e. the ability to repay debts), or predictive control systems that run an extreme risk of reproducing bias and deepening disparities? All of these systems fall under the definition of high-risk. In terms of education, one could primarily think of e-proctoring systems that are currently used for remote school examinations, especially in times of a global pandemic. In short, these tools can have a serious impact on people's lives.

According to the proposal, many of them would be subject to mere self-assessment, i.e. a risk assessment performed by the providers themselves. That is concerning: self-assessment may not provide sufficient verification of conformity, which should instead be performed by a third party.

In the case of self-assessment of conformity, the AI system provider alone would be responsible for undertaking the risk assessment. On top of that, according to the proposal, no competent authority needs to approve such an assessment. Not only will the providers perform the assessment themselves, they will also evaluate whether their high-risk artificial intelligence system complies with the requirements of the new regulation. If it does, they simply declare the system compliant.

Openness of the system helps prevent mistakes

Users, academics, and indeed everyone should have the right to understand the underlying logic of the artificial intelligence systems we are subject to. According to the proposal, only non-technical documentation needs to be published, which I find insufficient. The proposal protects all parties' commercially confidential information, trade secrets, and so-called intellectual property rights, unless their disclosure is in the public interest.

Nevertheless, this level of confidentiality is not compatible with the requirement of granting access to anyone interested in studying how the system works. Companies should be encouraged not only to release the code and training data sets under a free and open license, but also to design such systems in a transparent manner from the start. Not only does this allow more insight into how the systems work, it can also address numerous problems we are currently dealing with.

In conclusion, it is simply not sufficient to publish only non-technical documentation. To verify a system's authenticity, we need to look at its code and its operation. We also need to keep in mind that AI systems are constantly learning and their data sets keep evolving. Therefore, it is absolutely crucial that both authorities and civil society can audit them at any time.

How can we achieve that? The solution lies in the use of Free and Open Source Software.

What are the next steps?

We need to set clear rules: rules that will not be easy to circumvent. And we must avoid any loopholes that could be abused.

I am currently working on a draft legislative opinion, which will be presented in the CULT committee in mid-February. I will do my best to close all the gaps I have identified, especially those that may affect education. Stay tuned for the draft!
