On 13 March 2024, the European Parliament approved the Artificial Intelligence Act, marking a significant milestone as the world's first comprehensive legal framework for AI. The law responds directly to proposals put forward by citizens during the Conference on the Future of Europe (COFE). The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favor, 46 against, and 49 abstentions (Yakimova & Ojamo, 2024).
The primary objective of the new rules is to foster trustworthy AI both within Europe and globally, by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing the risks posed by very powerful and impactful AI models. At the same time, the regulation aims to reduce administrative and financial burdens, especially for small and medium-sized enterprises (SMEs) operating in this domain.
The AI Act introduces transparency obligations for all general-purpose AI (GPAI) models, along with additional risk-management obligations for the most capable and impactful ones. GPAI systems, and the models they are based on, must meet specific transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks face further obligations, such as model evaluations, assessing and mitigating systemic risks, incident reporting, adversarial testing, and cybersecurity safeguards.
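To make the two tiers of obligations concrete, the following minimal Python sketch models them as a checklist. It is purely illustrative, not a compliance tool: the class and field names are hypothetical labels for the duties described above, not terms defined by the Act itself.

```python
from dataclasses import dataclass


@dataclass
class GPAIObligations:
    """Illustrative checklist for a general-purpose AI model.

    Field names are hypothetical labels for the duties described in
    the text, not terminology from the AI Act.
    """
    # Baseline transparency obligations applying to all GPAI models
    complies_with_eu_copyright_law: bool = False
    published_training_data_summary: bool = False

    # Flag for the more powerful models that could pose systemic risks
    poses_systemic_risk: bool = False

    # Additional obligations triggered only by systemic risk
    model_evaluations_done: bool = False
    systemic_risks_assessed_and_mitigated: bool = False
    incidents_reported: bool = False
    adversarial_testing_done: bool = False
    cybersecurity_safeguards_in_place: bool = False

    def outstanding_duties(self) -> list[str]:
        """Return the obligations not yet satisfied for this model."""
        baseline = [
            "complies_with_eu_copyright_law",
            "published_training_data_summary",
        ]
        systemic_extras = [
            "model_evaluations_done",
            "systemic_risks_assessed_and_mitigated",
            "incidents_reported",
            "adversarial_testing_done",
            "cybersecurity_safeguards_in_place",
        ]
        duties = baseline + (systemic_extras if self.poses_systemic_risk else [])
        return [name for name in duties if not getattr(self, name)]


# Example: a systemic-risk model that has only published its data summary
model = GPAIObligations(poses_systemic_risk=True,
                        published_training_data_summary=True)
print(model.outstanding_duties())
```

The point of the sketch is the conditional: every GPAI model carries the baseline transparency duties, and the systemic-risk flag switches on the additional set.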
The Regulatory Framework categorizes AI systems into four levels of risk, described below (a schematic code sketch follows the list):
Unacceptable risk: AI applications that threaten citizens' rights are banned, including biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images to build facial recognition databases. Emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling individuals, and AI that exploits human vulnerabilities are also prohibited.
High risk: High-risk AI applications span critical domains such as infrastructure, education, employment, healthcare, banking, law enforcement, migration, border management, justice, and democratic processes. These systems must undergo risk assessments, maintain usage logs, be transparent and accurate, and ensure human oversight. Citizens have the right to submit complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights. All remote biometric identification systems are deemed high-risk and subject to strict requirements; their use in publicly accessible spaces for law enforcement purposes is in principle prohibited, with narrowly defined and regulated exceptions (European Commission, 2024).
Limited risk: This category addresses transparency in AI usage, requiring providers to ensure that AI-generated content is identifiable. Developers and deployers must make end-users aware that they are interacting with AI, for instance when using a chatbot. AI-generated text published to inform the public on matters of public interest must be labeled as artificially generated, and artificial or manipulated images, audio, or video content ("deepfakes") must be clearly labeled as such.
Minimal or no risk: The AI Act permits the unrestricted use of minimal-risk AI, such as AI-enabled video games or spam filters. Most AI systems currently used in the EU fall into this category, although this may change with generative AI.
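The tiered logic of the framework can be summarized schematically, as in the Python sketch below. The example use cases and their mapping to tiers are assumptions drawn from the categories above for illustration only; real classification depends on the Act's detailed annexes and legal interpretation.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk levels of the AI Act's regulatory framework."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: risk assessment, logging, human oversight"
    LIMITED = "transparency obligations: disclose AI use, label content"
    MINIMAL = "unrestricted use"


# Hypothetical mapping of example use cases to tiers, based on the
# examples given in the text above. Not a legal classification.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "credit scoring in banking": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]


if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} ({tier.value})")
```

The design mirrors the regulation's structure: obligations attach to the tier, not to the individual application, so determining the tier is the pivotal compliance question.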
Most obligations fall on providers (developers) of high-risk AI systems, whether they are based in the EU or in a third country, as well as on deployers (users) of such systems within the EU (EU, 2024). Regulatory sandboxes and real-world testing are to be established at the national level to support the development and training of innovative AI, benefiting SMEs and start-ups in particular. The proposal takes a future-proof approach, allowing the rules to adapt to technological change. The European AI Office, established within the Commission in February 2024, oversees enforcement and implementation; it promotes collaboration, innovation, and research in the AI sector and engages in international dialogue and cooperation on AI governance, recognizing the need for global alignment.
Bibliography
EU. (2024, February 27). High-level summary of the AI Act. Retrieved from EU Artificial Intelligence Act: https://artificialintelligenceact.eu/high-level-summary/
European Commission. (2024, March 6). AI Act. Retrieved from Shaping Europe's digital future: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Yakimova, Y., & Ojamo, J. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law. Retrieved from European Parliament: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law