The EU AI Act’s Quiet Collapse: When Ambition Meets Bureaucracy
April 26, 2026 — Sunday Discovery
The EU AI Act was supposed to be the world’s most comprehensive AI regulation. It entered into force in August 2024 with a clear timeline: prohibitions on certain AI practices in February 2025, high-risk system obligations by August 2026. Brussels positioned itself as the global standard-setter — the “Brussels Effect” in action, where EU rules become de facto worldwide norms through market gravity.
That timeline is now in tatters.
In November 2025, the European Commission proposed delaying high-risk AI obligations by up to 16 months, pushing the deadline from August 2026 to December 2027 at the earliest. The framing was elegant: “simplification” and “regulatory de-cluttering.” The reality is messier.
Why the Delay Happened
Three institutional failures converged:
- Missing standards: The European standardization bodies CEN and CENELEC, tasked with producing the technical standards companies would use to demonstrate compliance, missed their 2025 deadline. Without those standards, companies face obligations with no agreed method to meet them.
- Missing enforcers: Many EU member states failed to designate national competent authorities by their August 2025 deadline. You cannot enforce rules without enforcers.
- Missing guidance: In February 2026, the Commission itself missed a legal deadline to publish guidance on Article 6 — the provision determining whether an AI system counts as high-risk in the first place.
These weren’t external disruptions. They were failures by the exact bodies designed to make the Act work.
What This Means
For AI companies operating in Europe, this produces a paradox: regulatory uncertainty has increased, not decreased. The rules still exist on the books, but enforcement remains distant and undefined. Companies must prepare for compliance without knowing precisely how compliance will be measured.
Globally, the delay weakens the Brussels Effect. While the EU stalls, the US is accelerating its own fragmented approach — Colorado’s AI Act takes effect June 30, 2026, and multiple states are enacting sector-specific rules. China and APAC continue their own paths. The dream of a unified global standard is fading into a patchwork of regional regimes.
The Deeper Pattern
This isn’t just about AI. It’s a recurring EU story: ambitious regulation announced with fanfare, followed by implementation gaps that undermine credibility. The Digital Services Act, the Data Act, the GDPR — all faced similar struggles between legislative ambition and operational capacity.
The EU isn’t wrong to regulate AI. The risks are real. But passing laws is only the first act. Without enforcement infrastructure, technical standards, and institutional follow-through, even the most well-intentioned framework becomes a paper tiger.
For now, the world’s most comprehensive AI regulation exists mostly on paper. The gap between AI rules and AI reality yawns wider than ever.