EU postpones AI Act enforcement after industry warns on costs

The European Commission has extended key compliance deadlines under the EU AI Act following sustained pressure from businesses warning that the original timelines would impose unsustainable costs and technical challenges on companies deploying artificial intelligence systems.

The delay affects provisions governing biometric identification systems and high-risk AI applications, which were scheduled to take effect in the coming months. According to Lawyer Monthly, the postponement represents a significant concession to industry concerns about the practical feasibility of meeting the regulation’s demanding technical and documentation requirements within the initially prescribed timeframes.

The decision follows months of corporate lobbying, with technology firms and industry associations arguing that compliance infrastructure—including conformity assessments, risk management systems, and technical documentation—could not be established quickly enough without disrupting existing operations. Thales Group and other major European technology providers reportedly presented evidence to regulators demonstrating the scale of internal restructuring required to meet the Act’s provisions.

The EU AI Act, which entered into force in August 2024, established the world’s first comprehensive regulatory framework for artificial intelligence. Its risk-based approach categorises AI systems into prohibited, high-risk, limited-risk, and minimal-risk tiers, with corresponding compliance obligations. High-risk systems—including those used in critical infrastructure, law enforcement, and employment decisions—face the most stringent requirements.

The business impact of this delay creates clear winners and losers across the technology sector. Established enterprises with substantial compliance budgets gain additional time to build governance frameworks without rushing implementations that might expose them to enforcement action. Cloud infrastructure providers, such as those analysed by wiz.io, also stand to benefit as clients defer AI deployments pending regulatory clarity and sales cycles lengthen accordingly.

Conversely, smaller AI developers and startups that had already invested heavily in early compliance efforts may find themselves at a competitive disadvantage, having allocated scarce resources to meet deadlines that larger competitors successfully lobbied to extend. The delay also risks disadvantaging EU-based AI firms relative to international competitors operating in less regulated markets, prolonging the regulatory uncertainty that has already slowed European AI investment.

According to McKinsey & Company research, compliance costs for high-risk AI systems under the Act could reach hundreds of thousands of euros per system, depending on complexity and deployment scale. These figures helped substantiate industry arguments that rushed implementation would force companies to choose between non-compliance and withdrawing AI systems from the market entirely.

The postponement raises questions about regulatory credibility and the EU’s ability to enforce its ambitious technology governance agenda. While flexibility in implementation demonstrates responsiveness to legitimate business concerns, it also signals that well-resourced industry coalitions can successfully push back against regulatory timelines, potentially establishing a precedent for future delays.

Silicon UK reported that the extension applies specifically to provisions requiring third-party conformity assessments and the establishment of quality management systems—two of the most resource-intensive compliance elements. The exact duration of the delay has not been officially confirmed, though industry sources suggest an extension of 12 to 18 months for certain high-risk system categories.

Looking ahead, the key indicators will be whether the Commission establishes firmer deadlines with this extension or leaves open the possibility of further postponements. The development of practical guidance documents, the accreditation of notified bodies to conduct conformity assessments, and the operationalisation of the EU AI Office will all signal whether Brussels intends to maintain regulatory pressure or continue accommodating industry implementation challenges.

This delay underscores the persistent tension between regulatory ambition and practical implementation capacity, suggesting that the EU’s approach to AI governance will remain subject to negotiation between policymakers and the technology industry they seek to regulate.