The era of unregulated artificial intelligence in Europe has officially ended. As of February 2, 2025, the first substantive provisions of the European Union's Artificial Intelligence Act (AI Act) have come into force, marking a critical turning point for global tech governance. While the legislation was formally adopted in mid-2024, the "grace period" is effectively over for specific categories of AI, with regulators shifting their focus from drafting rules to enforcing them.
This new phase introduces immediate prohibitions on AI practices deemed to pose unacceptable risks to fundamental rights. Simultaneously, the newly established European AI Office, operating within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT), has begun its work as the primary enforcer for general-purpose AI (GPAI) models. For companies operating within the EU single market, the compliance clock is no longer ticking; it has struck the hour.
The Enforcement Timeline: What Applies When?
The AI Act utilizes a staggered implementation schedule designed to give organizations time to adapt. However, the first deadlines are already upon us. According to regulatory timelines, the rollout proceeds as follows:
- February 2, 2025: The ban on "prohibited AI practices" enters into force. This includes AI systems that manipulate human behavior, expand facial recognition databases through untargeted scraping, or use biometric categorization systems to infer sensitive data like political orientation.
- August 2, 2025: Rules for General-Purpose AI (GPAI) models apply to new providers. This section targets the foundation models powering generative AI tools.
- August 2, 2026: The majority of the Act's provisions, including obligations for high-risk AI systems in sectors like healthcare, employment, and law enforcement, will be fully enforced.
It is important to note the distinction for legacy systems. Providers of GPAI models that were already on the market before August 2025 have been granted an extension until August 2, 2027, to achieve full compliance.
A Two-Tiered Governance Structure
One of the most complex aspects of the AI Act is its enforcement architecture, which divides responsibilities between Brussels and member states. The European Commission has updated its frameworks to clarify this division of labor.
The AI Office
Established on June 16, 2024, the AI Office holds exclusive jurisdiction over General-Purpose AI models. This centralized approach ensures that the largest and most powerful models, often developed by multinational tech giants, are regulated uniformly across the bloc. The Office has the power to request documentation to assess compliance and will be supported by a Scientific Panel of independent experts.
National Competent Authorities
For "high-risk" AI systems deployed in specific contexts (like recruitment software or credit scoring), enforcement falls to national market surveillance authorities. These bodies are tasked with supervising compliance within their territories. Member states are also designating "notifying authorities" responsible for assessing the conformity assessment bodies that certify AI systems.
"The AI Act grants the AI Office exclusive jurisdiction to enforce the Act's provisions relating to General-Purpose AI Models, and provides the AI Office the power to request documentation needed to assess compliance." - Legal Analysis, Orrick
Compliance Burdens and the 'Digital Omnibus'
The financial stakes for non-compliance are high. The Act uses a tiered penalty structure similar to the GDPR's, with fines calculated as a percentage of global annual turnover or a fixed amount, whichever is higher; the steepest tier, for violations of the prohibited-practice ban, reaches €35 million or 7% of worldwide annual turnover. For SMEs and startups, the rules are slightly more lenient, capping fines at the lower of the two amounts. For large enterprises, however, the penalties are designed to be dissuasive.
Recognizing the potential for regulatory friction, particularly the overlap between the AI Act and the GDPR, the European Commission has signaled flexibility. Reports point to an initiative known as the "Digital Omnibus," a proposal to streamline compliance across EU digital laws. The move aims to prevent a regulatory bottleneck that could stifle innovation, ensuring that companies do not face contradictory requirements from data protection and AI safety regulators.
Forward Outlook
The immediate focus for the industry is the Code of Practice for GPAI providers, expected to be finalized by the Commission by April 2025. This document will provide the technical detail companies need to demonstrate compliance with the Act's broader obligations. As the EU moves further into the implementation phase through 2025 and 2026, the global tech sector is watching closely. The "Brussels Effect" is once again in play, with the EU setting a regulatory benchmark that may force global changes in how AI is developed and deployed.