Provisional agreement was recently reached on the EU AI Act, a groundbreaking development in the regulation of artificial intelligence, marking a pivotal moment in the governance of this transformative technology. Going beyond mere regulation, this comprehensive legal framework is poised to set international standards for AI development and deployment. This article explores the key aspects of the EU AI Act, emphasizing its potential to protect fundamental rights, ensure transparency and accountability, and establish a risk-based categorization system.
- Risk-based categorization:
Minimal risk: Everyday applications like chatbots and spam filters face minimal compliance requirements.
High risk: Applications posing significant societal or individual risks, such as facial recognition systems and credit scoring algorithms, are subject to stricter transparency, testing, and accountability measures.
Unacceptable risk: Practices deemed inherently harmful, such as social scoring systems and AI used for mass surveillance, are explicitly prohibited.
- Protection of fundamental rights:
The EU AI Act prioritizes the protection of citizens’ fundamental rights. The law aims to ensure that AI systems used in the EU are:
Transparent: Developers and deployers of high-risk AI must provide clear and accessible information about how their systems work, what data they use, and how decisions are made.
Traceable: The origin and development of high-risk AI systems must be documented to ensure accountability and prevent misuse.
Non-discriminatory: AI systems must not discriminate against individuals or groups based on factors such as race, gender, or religion.
Environmentally friendly: The Act encourages the development and deployment of AI in a way that minimizes its environmental impact.
- Transparency and accountability are key tenets of the Act:
Developers and deployers of high-risk AI are required to provide clear and accessible information about their systems’ functionality, data usage, and potential biases. Rigorous testing and certification processes ensure high-risk AI adheres to safety and ethical standards before deployment, minimizing the risk of unforeseen consequences.
- Human oversight remains paramount:
The Act recognizes the irreplaceable role of human judgment in ensuring ethical and responsible AI application. High-risk systems require human involvement in critical decision-making processes, upholding fundamental rights and guarding against harmful errors or misuse.
- Exclusions from the scope:
The Act will not apply to AI systems used exclusively for military or defense purposes, nor to the national security authorities of member states or entities entrusted with national security tasks. It will likewise not apply to systems used solely for research and innovation. Individuals using AI for non-professional purposes (such as playing games or using personal assistants) also fall outside the scope of the Act.
- Timeline and next steps:
The provisional agreement reached in December 2023 still needs formal approval from the European Parliament and the Council. Once approved, the law is expected to become fully applicable two years after its entry into force, with shorter or longer transition periods for certain provisions.
For more information about the aforesaid developments, you may write to us at: solutions@bridgeheadlaw.com.
Karan Narvekar | Partner
Meghna Shukla | Associate
Views expressed are personal to the authors and do not constitute legal advice.