European AI Security and Reliability Foundation
EU Proposes Independent Third-Party Testing for Foundation AI Models
The European Union (EU) is taking a significant step towards ensuring the safety and reliability of artificial intelligence (AI) by proposing independent third-party testing and certification for foundation AI models.
In a move that could set a global standard, the EU is planning to establish formal standards, auditing frameworks, and external certification bodies to evaluate AI systems' safety, ethics, and compliance. This initiative aims to address the current lack of safety guarantees in AI technologies, particularly in high-risk applications.
The European Quality Infrastructure
The EU's proposal is rooted in its existing Quality Infrastructure system, which ensures products and services are safe, reliable, and conform to functional and quality requirements. The system includes mandatory third-party conformity assessment services, such as testing, inspection, and certification (TIC) activities.
Three Approaches to Testing and Certification
The EU's approach involves three key methods: certification of quality management systems, product testing, and post-market periodical inspection. Certification of quality management systems examines production processes and management structures to avoid conflicts of interest. Product testing involves independent examinations of products across various industries. Periodical inspections ensure safety and proper functioning after commercial distribution.
Adversarial Testing and AI Testing Certifications
Adversarial testing, or 'red-teaming', actively exploits product vulnerabilities to evaluate safety. The EU plans to develop specialized certifications for AI testing professionals, such as the ISTQB® CT-GenAI, which equips them with skills to apply AI testing techniques, detect risks like bias and hallucinations, and implement ethical and regulatory best practices.
Third-Party Audits and Post-Market Monitoring
Independent audits by specialized firms or experts assess whether AI systems align with ethical values, legal requirements, fairness, transparency, privacy, and human oversight. The EU's regulatory framework, like the EU AI Act, requires continuous provider responsibility after deployment and designates authorities to investigate and mandate corrective actions or recalls when risks are detected.
Policy Interventions
To realize this framework, the EU is proposing several policy interventions. These include mandating external third-party certification for high-risk AI systems via legislation or regulation, establishing lawful requirements for ongoing monitoring and corrective measures, adopting and harmonizing international AI safety standards, supporting capacity building and certification programs for AI testing professionals, and defining clear audit scope and frequency guidelines.
The EU's proposal for independent third-party testing and certification for foundation AI models marks a significant step towards ensuring safety and public trust in AI. By mandating these measures, assigning accountability, and promoting transparency continuously throughout the AI lifecycle, the EU aims to foster a robust consumer trust framework and competitive edge for manufacturers in the AI sector.
[1] ISO 42001:2021, Artificial Intelligence (AI) — Governance for intrinsic safety [2] EU AI Act (Proposal for a Regulation of the European Parliament and of the Council on a framework on artificial intelligence, and amending Regulation (EU) 2019/1020) [3] ISTQB® CT-GenAI: Certified Tester for General Artificial Intelligence [4] European Commission, AI White Paper (Communication from the Commission to the European Parliament, the European Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe's digital decade)
The European Union's proposal for independent third-party testing and certification for foundation AI models could potentially establish global standards in AI safety, ethics, and compliance, aligning with its existing Quality Infrastructure system. To achieve this, the EU is considering mandating external third- party certification for high-risk AI systems via legislation or regulation, establishing lawful requirements for ongoing monitoring and corrective measures, adopting and harmonizing international AI safety standards, supporting capacity building and certification programs for AI testing professionals, and defining clear audit scope and frequency guidelines.
The EU's initiative aims to address current concerns about the lack of safety guarantees in AI technologies, particularly in high-risk applications, and promote transparency and public trust in AI by employing various methods such as adversarial testing, third-party audits, and AI testing certifications for professionals like ISTQB® CT-GenAI. The regulatory framework, like the EU AI Act, requires continuous provider responsibility after deployment and designates authorities to investigate and mandate corrective actions or recalls when risks are detected.