How the EU Artificial Intelligence Act Affects AI Systems


The European Union (EU) has taken a significant step in regulating the development and use of artificial intelligence (AI) systems. In 2024, the EU passed the Artificial Intelligence Act, a groundbreaking regulatory framework that aims to encourage innovation while mitigating the potential risks and negative impacts associated with AI. The Act classifies AI systems into risk categories to ensure their development and deployment align with safety, ethical standards, and respect for human rights. Getting into compliance with the EU AI Act may seem daunting, but a skilled auditor can help. This blog post explores the key aspects of the EU AI Act and the specific requirements for each risk category.

What is the EU AI Act?

Passed in 2024, the European Union’s Artificial Intelligence Act is the first comprehensive regulatory framework for AI, governing the development and use of artificial intelligence systems across the 27 EU member states. The Act sorts AI systems into four risk categories, each carrying obligations proportionate to the potential harm involved. Here are examples for each category as defined by the Act:

Minimal Risk AI Systems

These are AI applications that pose little to no risk to individuals’ rights or safety and are largely unregulated under the Act. Most AI systems fall into this category, allowing for broad innovation and application. Examples include:

  • AI-Enhanced Video Games: Games that use AI to improve user experience, generate dynamic content, or adjust difficulty levels.
  • Spam Filters: AI systems that analyze email content and sender behavior to filter out spam from users’ inboxes.
  • Personal Assistants: Software like smartphone voice assistants that help with scheduling, basic information searches, and performing simple tasks based on user commands.

Limited Risk AI Systems

These systems are subject to specific transparency obligations to ensure users know they are interacting with an AI. Because the risk they pose is limited, disclosure, rather than heavier regulation, is the key requirement. Examples include:

  • Chatbots: Online customer service assistants that handle inquiries or provide information to users, where it must be clear that the responses are generated by AI (see the disclosure sketch after this list).
  • Deepfake Technology for Entertainment: AI applications that generate or alter video and audio content for entertainment, which must disclose their artificial origin to prevent misinformation.
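
To make the chatbot obligation concrete, here is a minimal sketch of how a disclosure might be wired into a bot’s replies. The function names and the disclosure wording are illustrative assumptions, not language taken from the Act:

    # Minimal sketch: prepend an AI disclosure to a chatbot's first reply.
    # The names and disclosure text are hypothetical, not mandated wording.

    AI_DISCLOSURE = "You are chatting with an AI assistant."

    def get_model_reply(user_message: str) -> str:
        # Placeholder for a call to an actual language model.
        return f"Echo: {user_message}"

    def respond(user_message: str, first_turn: bool) -> str:
        reply = get_model_reply(user_message)
        if first_turn:
            # Disclose the artificial origin up front, per the transparency duty.
            return f"{AI_DISCLOSURE}\n{reply}"
        return reply

    print(respond("What are your opening hours?", first_turn=True))

However a team implements it, the point is the same: the user should never have to guess whether a human or a machine is on the other end.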

High-Risk AI Systems

AI applications in this category are associated with significant risks to health, safety, or fundamental rights, requiring strict compliance with regulatory standards. Examples include:

  • Recruitment Tools: AI systems used to screen, evaluate, and select candidates for employment, which could impact fairness and non-discrimination in hiring practices.
  • Credit Scoring: AI models that assess individuals’ creditworthiness, affecting their access to financial services and products.
  • Remote Biometric Identification: Systems that identify individuals in public spaces through facial recognition or other biometric data, raising concerns about privacy and surveillance.
  • Critical Infrastructure: AI used in managing and operating essential services like electricity and water supply, where failures could have severe consequences for public health and safety.

Unacceptable Risk AI Systems

The Act prohibits AI applications that pose an unacceptable risk by threatening people’s safety, livelihoods, and rights. These include:

  • Social Scoring by Governments: AI systems that evaluate citizens based on their behavior or personal traits, affecting their rights or access to services based on those scores.
  • Exploitative AI: Systems that use subliminal techniques or exploit vulnerable individuals (because of their age or a physical or mental condition) to manipulate their choices in a harmful way.
  • Indiscriminate Surveillance: The mass surveillance of individuals using AI without a targeted purpose, infringing on privacy and personal freedoms.
  • AI for Predictive Policing: Systems that predict the likelihood of individuals committing crimes based solely on profiling, which can perpetuate bias and discrimination.

There are also specific requirements for providers of “General Purpose AI” models, which are capable of performing a wide range of tasks. These include technical documentation, copyright compliance, published summaries of training data, and additional testing for models deemed to present systemic risks.
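
For teams inventorying their AI systems, the Act’s risk-based structure can be sketched as a simple lookup from tier to headline obligation. The summaries below are informal paraphrases for illustration, not the Act’s legal text:

    from enum import Enum

    # Sketch of the Act's four risk tiers; obligation summaries are
    # informal paraphrases, not legal text.
    class RiskTier(Enum):
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        UNACCEPTABLE = "unacceptable"

    OBLIGATIONS = {
        RiskTier.MINIMAL: "Largely unregulated; voluntary codes of conduct.",
        RiskTier.LIMITED: "Transparency: users must know they are dealing with AI.",
        RiskTier.HIGH: "Strict compliance: risk management, documentation, human oversight.",
        RiskTier.UNACCEPTABLE: "Prohibited outright.",
    }

    print(OBLIGATIONS[RiskTier.HIGH])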

The EU AI Act is a significant milestone in ensuring the responsible and ethical use of AI technology. The law establishes a comprehensive regulatory framework that balances innovation against the protection of individuals’ rights, safety, and well-being. As EU member states implement and enforce the legislation, organizations and AI developers, both within and outside the EU, will need to navigate an evolving compliance landscape. Understanding the Act’s risk-based approach will be crucial to unlocking the full potential of AI while upholding its fundamental values and principles. The ISO/IEC 42001 standard on AI management systems can serve as a valuable framework for organizations seeking to align their practices with the requirements of the EU AI Act. In our next article, we’ll show you how.

Ready to align your AI practices with the EU AI Act? Reach out to Insight Assurance for expert guidance and support in managing your AI regulatory challenges effectively.
