European Parliament Approves AI Act: Emphasising Responsible Innovation


In a significant move toward regulating Artificial Intelligence (AI), the European Parliament has endorsed the AI Act, a major step toward ensuring the safe and ethical development of AI technologies.

The AI Act, now awaiting formal approval by the Council of the EU, aims to strike a delicate balance between promoting innovation and safeguarding against potential risks posed by AI systems. At its core, the regulation seeks to protect fundamental rights, democracy, the rule of law, and environmental sustainability from potential harms associated with high-risk AI applications.

Key Provisions of the AI Act

  • Banned Applications: The legislation prohibits certain AI applications that pose threats to citizens’ rights, including biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images for facial recognition databases. Additionally, it bans the use of AI for emotion recognition in workplaces and schools.
  • Law Enforcement Exemptions: While the use of biometric identification systems by law enforcement is restricted, limited exemptions are allowed in specific situations, subject to strict safeguards and judicial or administrative authorization. Real-time biometric identification (RBI) can only be deployed under tightly regulated conditions, such as targeted searches for missing persons or preventing terrorist attacks.
  • Obligations for High-Risk Systems: High-risk AI systems, identified by their potential to harm health, safety, fundamental rights, the environment, and democracy, are subject to stringent obligations. These include risk assessment and mitigation, transparency, accuracy, human oversight, and the right of citizens to lodge complaints about AI systems affecting their rights.
  • Transparency Requirements: General-purpose AI systems must meet transparency requirements, including complying with EU copyright law and publishing detailed summaries of the content used for training. More powerful AI models that pose systemic risks are subject to additional evaluation, risk assessment, and reporting requirements.
  • Support for Innovation and SMEs: The AI Act promotes innovation by establishing regulatory sandboxes and real-world testing opportunities at the national level, accessible to SMEs and startups. This initiative aims to facilitate the development and training of innovative AI technologies before market deployment.

Looking Ahead

With parliamentary approval secured, attention now shifts to the Act’s implementation and enforcement. MEPs emphasise the importance of compliance by businesses and institutions, highlighting the need for further legislation, such as directives on workplace conditions and AI, in the next mandate.

The AI Act is expected to enter into force in May, depending on when it is approved by the Council, which has already signalled a positive stance. General-purpose AI rules will apply from 2025, followed by obligations for high-risk systems in 2028.

National oversight agencies will play a crucial role in enforcing these regulations, ensuring adherence to ethical standards and mitigating potential risks.