Tech expert deems AI regulation ineffective, likening it to a toothless tiger
The European Union's Artificial Intelligence (AI) Act, which entered into force on August 1, 2024, will significantly affect how government agencies use high-risk AI systems. By August 2, 2026, high-risk AI systems must fully comply with the Act's requirements. Such systems are common in sectors including biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and the administration of justice [1][3][4].
The Act follows a phased approach, allowing preparatory and transition periods before full enforcement. Government agencies using high-risk AI systems must align with this timeline and achieve full compliance by August 2, 2026.
Key milestones for government high-risk AI systems under the EU AI Act include:
- August 1, 2024: The AI Act entered into force.
- February 2, 2025: Prohibitions on unacceptable-risk practices and AI literacy obligations began to apply.
- August 2, 2025: Obligations for providers of general-purpose AI (GPAI) models, which may be used in government contexts, began.
- August 2, 2026: Full enforceability of the majority of high-risk AI system requirements, including those used by government agencies.
- August 2, 2027: Extended compliance deadline for AI systems embedded in regulated products and some other categories.
- 2030: Obligations take effect for AI components in large-scale IT systems established by EU law in the area of freedom, security, and justice, such as the Schengen Information System [1][3].
From August 2025, general-purpose AI models will be subject to further obligations under the European AI regulation [4]. Additionally, a labeling requirement for deepfakes, chatbots, agents, and AI-generated images, videos, music, and text will come into effect in summer 2026 [1].
Existing high-risk AI systems will not fall under the AI Act until August 2026, and high-risk systems placed on the market before that date do not have to comply with the Act's rules [1][3]. However, concerns have been raised that the AI Act may be inadequate to address societal problems: AI cannot solve every problem, and its benefits and drawbacks are unequally distributed [2].
Furthermore, Germany's Federal Network Agency (BNetzA) has been nominated as the likely national supervisory authority for the AI Act, but there is as yet no legal basis delineating responsibilities between BNetzA and the Federal Data Protection Officer [2]. Anke Domscheit-Berg, a digital-policy expert, has criticized the Act's exceptions and transition periods as applied in Germany, calling the delayed application of obligations for high-risk systems absurd [2].
As the AI Act's enforcement date approaches, government agencies must ensure they are prepared to comply with the new regulations, particularly in light of concerns about the AI Act's ability to address societal problems and the potential resource constraints of the supervising authorities.
The German government also plans to use AI in high-risk areas, including asylum, migration, and law enforcement [2]. Critics further note that AI use contributes significantly to the climate crisis by consuming vast amounts of resources: water, energy, and raw materials [2].
Sources:
[1] NatLawReview, "Compliance Deadline for EU AI Act Approaching," 2025-07-23
[3] European Commission Digital Strategy, "AI Act – Application Timeline," 2025-07-24
[4] Mondaq, "EU AI Act Compliance Deadline of August 2, 2025 Looming," 2025-07-28