Meta chooses not to adhere to the EU's voluntary AI safety code
In a move that could have far-reaching implications for the tech giant, Meta has decided not to sign the European Union's General-Purpose AI (GPAI) Code of Practice. This decision comes as the EU prepares to enforce its new AI Act, which aims to ensure that AI models are safe, transparent, and aligned with European values.
The GPAI Code of Practice, a voluntary instrument, asks providers of general-purpose AI models for transparency, copyright commitments, and additional safety and security measures. Compliance with the Code offers a "rebuttable presumption of conformity," reducing regulatory scrutiny. Meta's refusal, by contrast, exposes the company to a greater risk of enforcement actions, potential fines, and operational restrictions within the EU market.
Meta's stance contrasts sharply with other major AI developers like OpenAI, Anthropic, and Google, who are aligning with the Code. This difference highlights a broader regulatory divide between the EU's precautionary, risk-based approach and U.S. tech firms' preference for less restrictive voluntary measures prioritizing rapid innovation.
The company argues that the Code is legally ambiguous and overly burdensome, fearing it could impede innovation, particularly in frontier AI model development. Meta's concerns centre around the documentation, copyright restrictions, and safety testing requirements, which they believe will slow development and raise costs.
However, the EU AI Act is not the only regulation Meta is facing. The company was fined €200 million in April for failing to meet consumer data privacy obligations with its "Consent or Pay" business model, which violates Europe's Digital Markets Act (DMA). Meta was also fined €797.72 million in November for tying its online classified ads service Facebook Marketplace to its Facebook social network in violation of antitrust rules.
The European Commission also issued voluntary guidelines for providers of general-purpose AI models two weeks before the relevant AI Act obligations take effect. These guidelines aim to provide legal certainty about the scope of the AI Act's obligations for general-purpose AI providers.
Signing the Code is not the only route to compliance, but companies that demonstrate conformity with the AI Act through other means may face greater regulatory scrutiny. An EC spokesperson, Thomas Regnier, confirmed that all GPAI providers will have to comply with the AI Act when it comes into force on August 2 this year.
The decision by Meta also raises concerns about the development and deployment of frontier AI models in Europe. Joel Kaplan, Meta's chief global affairs officer, has expressed fears that the EU AI Act could stunt the development and deployment of these models, potentially harming European companies.
As the tech industry grapples with the complexities of AI regulations, Meta's decision to shun the GPAI Code adds another layer of complexity. Balancing compliance obligations against strategic considerations about innovation, risk management, and market access will be a challenge for companies navigating the evolving AI regulatory landscape.
- Meta's decision not to sign the European Union's General-Purpose AI (GPAI) Code of Practice, which emphasizes transparency, copyright commitments, and additional security measures for AI models, might lead to increased scrutiny and potential penalties in the European market.
- Meta argues that the Code's documentation, copyright, and safety-testing requirements will impede innovation and raise costs, a stance that heightens policymakers' concerns about the development and deployment of frontier AI models in Europe.
- Companies that, like Meta, forgo the Code and demonstrate AI Act compliance through alternative means may face tougher regulatory oversight, according to the European Commission.