
European legislation on using AI: What does it mean for your organisation?

The rise of AI offers many opportunities for people and society, but it also brings headaches. Is there not a risk that our lives will be determined too much by increasingly powerful autonomous systems? The European AI Act ensures that these risks are identified and remain manageable.

The European AI Regulation (AI Act) was adopted in March 2024 and will become fully applicable in 2026. The regulation applies to anyone who develops AI, places AI on the market, or deploys AI. It stresses that AI systems that can operate autonomously pose particular risks. The AI Act aims to manage these risks and to ensure that individuals and businesses within the EU can rely on AI systems: they must work safely and correctly, be controllable and non-discriminatory, and allow for human intervention. The AI Act centres on a risk-based approach, transparency and explainability, and data governance.


Risk-based approach

The AI Act identifies four risk categories: unacceptable, high, limited, and minimal.


The category of unacceptable risks includes, for example, social scoring, in which human activities are constantly monitored and points are awarded for good behaviour, a practice seen in countries such as China. This kind of mass surveillance is banned in Europe, although there are exceptions for investigative agencies.

The regulation places particular emphasis on the 'high-risk' category. To qualify as 'high-risk', the use of AI must be capable of harming the health, safety or fundamental rights of individuals in the EU. This can arise in two cases: first, when the AI system is a safety component of a product regulated by law, such as a medical device; and second, when it is used in one of eight high-risk domains: biometric identification, critical infrastructure, education, employment and HR, access to essential private and public services, law enforcement, migration and asylum, or justice and democratic processes.
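Purely as an illustration (and not legal advice), the two qualification routes described above can be sketched in code. Everything here, from the function name to the exact domain strings, is our own shorthand rather than wording from the Act:

```python
# Illustrative sketch only: the domain strings paraphrase the eight
# high-risk domains named above; they are not the Act's legal wording.
HIGH_RISK_DOMAINS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment and hr",
    "essential private and public services",
    "law enforcement",
    "migration and asylum",
    "justice and democratic processes",
}

def is_high_risk(safety_component_of_regulated_product: bool,
                 domain: str | None) -> bool:
    """Return True if either qualification route applies.

    Route 1: the system is a safety component of a legally regulated
    product (e.g. a medical device).
    Route 2: the system is used in one of the eight high-risk domains.
    """
    if safety_component_of_regulated_product:
        return True
    return domain is not None and domain.lower() in HIGH_RISK_DOMAINS

# Example: an AI tool that screens job applicants falls under
# 'employment and HR' and would therefore qualify as high-risk.
assert is_high_risk(False, "Employment and HR")
```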

Organisations using high-risk AI systems must conduct a Fundamental Rights Impact Assessment (FRIA), which identifies risks to critical concerns such as security, privacy and non-discrimination. A second step is to set up a risk management system to evaluate and mitigate those risks. Transparency is key: users must understand how the AI system works, and technical documentation becomes mandatory so that automatically generated decisions can be tracked and monitored. Moreover, human supervision is required while the AI system is in use, so that someone can intervene to prevent risks or stop the system if it deviates from its intended operation. In this way, harm and bias are to be avoided.
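As a sketch only, an organisation might track these obligations in an internal record along the following lines. The structure and field names are illustrative; the Act prescribes no such format:

```python
from dataclasses import dataclass

@dataclass
class HighRiskAIChecklist:
    """Illustrative internal record of the obligations described above."""
    system_name: str
    fria_completed: bool = False            # Fundamental Rights Impact Assessment
    risk_management_in_place: bool = False  # system to evaluate and mitigate risks
    technical_documentation: bool = False   # to track automatically generated decisions
    human_oversight_arranged: bool = False  # intervention and stop must be possible

    def ready_for_use(self) -> bool:
        # All four obligations must be met before deployment.
        return all((self.fria_completed,
                    self.risk_management_in_place,
                    self.technical_documentation,
                    self.human_oversight_arranged))
```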


Transparency

All AI systems - including limited-risk systems - must meet transparency obligations. Individuals have the right to information and explanation, and must be protected from discrimination. It must be clear why and how an organisation uses AI systems, how decisions are made, and what data is used to train the system. For example, as a consumer you should know that you are talking to a chatbot rather than a person.
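A minimal sketch of the chatbot example, assuming a hypothetical send_message callable; the disclosure wording is our own, not prescribed text:

```python
AI_DISCLOSURE = "Please note: you are chatting with an AI chatbot, not a human."

def open_chat_session(send_message) -> None:
    """Start a customer-service chat by disclosing that AI is used.

    `send_message` is an assumed callable that delivers text to the
    user; a real chatbot framework would supply its own equivalent.
    """
    send_message(AI_DISCLOSURE)  # disclose before any AI-generated reply

open_chat_session(print)  # prints the disclosure to the console
```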


Data governance

The AI Act requires, at least for high-risk systems, the use of high-quality and representative data. This calls for a data management system, so it is wise to have a clear picture of the data your organisation collects and uses for AI systems. Technical documentation is also required, so that errors can be detected and the system's users know how to use it correctly.
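For illustration, such a data-management register might record entries like the following. The structure and field names are our own, as the Act prescribes no particular format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetRecord:
    """Illustrative entry in an internal data-management register."""
    name: str
    source: str              # where the data was collected
    collected_on: date
    used_for: str            # which AI system is trained on it
    representative_of: str   # the population the data should reflect
    known_gaps: str          # documented quality issues or biases

# Hypothetical example entry
record = DatasetRecord(
    name="customer_service_transcripts_2024",
    source="internal support tickets (anonymised)",
    collected_on=date(2024, 6, 1),
    used_for="support chatbot fine-tuning",
    representative_of="all customer segments and languages served",
    known_gaps="under-represents phone-only customers",
)
```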


In conclusion

Despite the risks involved in using AI, the technology can be of great (societal) benefit: consider improved customer service or more efficient healthcare. Aware of the advances AI offers, the EU intends the regulation to encourage innovation as well. Finding the balance between regulating and innovating will, however, remain a difficult task.

Sven Stevensen of the Personal Data Authority stresses the importance of using AI responsibly: "Using AI responsibly means a holistic approach to AI with a good understanding of the process and its impact on data subjects." He adds: "Ethical questions should be asked before implementation: one person's progress should not lead to another person's decline." Using AI responsibly also means weighing the unintended effects of AI when deciding whether or not to use it.

Contact

Are you curious about what the AI Act means for you, or do you have other questions? If so, please contact Sofie Beekers. She is a finance and technology consultant at Boer & Croon, has a background in Philosophy of Law and International Technology Law (AI), and is currently training to become an AI Compliance Officer.