EU AI Act Compliance Guide
The world's first comprehensive AI law — what your organization must do now, and how to build a compliant AI governance program before each enforcement deadline.
Overview
The EU Artificial Intelligence Act (EU AI Act) is the world's first comprehensive legal framework for artificial intelligence. Adopted in 2024, it entered into force in August 2024, with phased application dates running through 2026. The EU AI Act classifies AI systems into risk tiers and imposes proportionate obligations on providers (those who develop or place AI on the EU market) and deployers (those who use AI systems in professional contexts). It applies not only to EU-based organizations but to any organization worldwide whose AI systems affect EU residents.
Who must comply?
The EU AI Act applies to: (1) providers of AI systems placed on the EU market or put into service in the EU, regardless of whether the provider is established in the EU; (2) deployers of AI systems where the place of use is within the EU; and (3) providers and deployers established outside the EU when the output is used in the EU. In practice, this means virtually any organization worldwide whose AI systems make decisions about EU residents.
Quick Facts
- Framework: EU Artificial Intelligence Act
- Jurisdiction: European Union
- Status: Phased rollout
- Penalties: For violations of prohibited AI rules: up to €35 million or 7% of global annual turnover, whichever is higher. For violations of high-risk AI obligations: up to €15 million or 3% of global annual turnover. For providing incorrect information to authorities: up to €7.5 million or 1.5% of global annual turnover.
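The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch of that arithmetic, using the three penalty tiers listed above (the tier keys are informal labels, not official terms):

```python
def penalty_cap(violation: str, global_turnover_eur: float) -> float:
    """Illustrative EU AI Act fine caps: the HIGHER of a fixed amount
    and a percentage of worldwide annual turnover applies."""
    tiers = {
        "prohibited_ai": (35_000_000, 0.07),        # €35M or 7%
        "high_risk_obligations": (15_000_000, 0.03),  # €15M or 3%
        "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
    }
    fixed, pct = tiers[violation]
    return max(fixed, pct * global_turnover_eur)

# A firm with €2 billion turnover violating the prohibited-AI rules:
# max(€35M, 7% of €2B = €140M) -> the turnover-based cap governs.
print(penalty_cap("prohibited_ai", 2_000_000_000))  # → 140000000.0
```

For a small company, the fixed amount usually governs; for a large multinational, the turnover percentage does.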
Get compliant with TRAIGA platform
Start free — first AI system inventoried in under 10 minutes. No credit card required.
Related Resources
AI Compliance Software →
How TRAIGA automates EU AI Act compliance documentation.
AI Risk Register →
Build the technical documentation register EU AI Act requires.
Enterprise AI Governance →
EU AI Act compliance for large organizations with complex portfolios.
AI Governance Software →
Full platform overview — covers EU AI Act and all major frameworks.
Key obligations under EU AI Act
What your organization must actually do to comply — broken down by obligation category.
Risk Classification
Classify every AI system into one of four risk tiers: prohibited (banned outright), high-risk (strict obligations), limited-risk (transparency obligations), and minimal-risk (no mandatory requirements). High-risk categories include AI in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
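The tiered decision flow above can be sketched as code. This is only an illustration of the ordering logic (prohibited checks come first, then high-risk, then transparency-only); the keyword sets are simplified assumptions, and real classification requires legal analysis against the Act's text and Annex III:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = 1  # banned outright
    HIGH = 2        # strict obligations
    LIMITED = 3     # transparency obligations
    MINIMAL = 4     # no mandatory requirements

# Illustrative keyword sets only -- not an exhaustive legal taxonomy.
PROHIBITED_USES = {"subliminal manipulation", "social scoring"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "education", "employment",
                     "essential services", "law enforcement",
                     "migration", "justice"}
LIMITED_RISK_KINDS = {"chatbot", "emotion recognition"}

def classify(use_case: str, domain: str, kind: str) -> RiskTier:
    """Apply the tiers in order of severity; first match wins."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if kind in LIMITED_RISK_KINDS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("cv screening", "employment", "ranking model"))  # RiskTier.HIGH
```

The ordering matters: a chatbot used for social scoring would be prohibited, not merely limited-risk, which is why the severest check runs first.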
Technical Documentation
Providers of high-risk AI systems must create and maintain comprehensive technical documentation covering system design, development methodology, training data, performance metrics, risk management process, and conformity assessment. Documentation must be kept up to date throughout the system's lifecycle.
Conformity Assessment
High-risk AI systems must undergo a conformity assessment — either a self-assessment or a third-party assessment by a notified body — before being placed on the EU market. Conformity assessment verifies that the system meets the EU AI Act's requirements.
Human Oversight
High-risk AI systems must be designed and implemented with appropriate human oversight measures — enabling humans to oversee, intervene, and override the AI system's outputs. Oversight mechanisms must be documented and their effectiveness must be verifiable.
Post-Market Monitoring
Providers of high-risk AI systems must establish and operate a post-market monitoring system that actively collects and analyzes data on system performance, incidents, and risks after deployment. Serious incidents must be reported to national supervisory authorities.
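In practice this means keeping a structured monitoring log and flagging which entries trigger a report to the authority. A minimal sketch, assuming a log shape of our own invention (the field names are illustrative, not a mandated schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Incident:
    """Hypothetical record in a post-market monitoring log."""
    system_id: str
    occurred_at: datetime
    description: str
    serious: bool  # e.g. harm to health, safety, or fundamental rights

def reportable(log: list[Incident]) -> list[Incident]:
    """Serious incidents must go to the national supervisory
    authority; this filters the monitoring log for them."""
    return [i for i in log if i.serious]

log = [
    Incident("hr-ranker-1", datetime(2026, 9, 1), "latency spike", False),
    Incident("hr-ranker-1", datetime(2026, 9, 3),
             "discriminatory ranking pattern detected", True),
]
print(len(reportable(log)))  # → 1
```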
EU Database Registration
Providers and deployers of high-risk AI systems must register their systems in the EU's publicly accessible AI database before placing them on the market. Registration requires basic information about the system, its purpose, and the provider's contact details.
EU AI Act risk tiers explained
The EU AI Act uses a risk-based approach with four tiers. Prohibited AI (Tier 1) includes systems that manipulate human behavior through subliminal techniques, exploit the vulnerabilities of specific groups, or perform government social scoring — these are banned outright. High-risk AI (Tier 2) includes AI used in critical infrastructure, education, employment, essential services, law enforcement, migration control, and justice — these face the most stringent obligations. Limited-risk AI (Tier 3) includes chatbots and emotion recognition systems — transparency obligations apply. Minimal-risk AI (Tier 4) includes most AI applications — no mandatory requirements, but voluntary codes of practice are encouraged.
High-risk AI categories under Annex III
The EU AI Act's Annex III defines the high-risk AI categories in detail. For most organizations, the most relevant categories are: recruitment and HR AI (used in CV filtering, candidate ranking, and employment decisions); AI used in access to essential private and public services (credit scoring, insurance underwriting, emergency services); AI in education (admission, scoring, behavior monitoring); and AI in critical infrastructure (energy, transport, water). Organizations deploying AI in these categories face the full suite of EU AI Act high-risk obligations.
Phased enforcement timeline
The EU AI Act has a phased application schedule. The provisions on prohibited AI systems applied in February 2025. General-purpose AI model obligations apply from August 2025. The full high-risk AI obligations apply from August 2026. However, organizations should begin compliance programs now — building the required technical documentation and risk management infrastructure takes significant time, and early compliance demonstrates good faith to regulators.
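A compliance program can track this schedule programmatically. The sketch below encodes the milestones from the timeline above (the specific days within each month are assumptions; verify against the Official Journal text before relying on them):

```python
from datetime import date

# Milestones from the phased timeline; exact days are assumed.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited-AI provisions apply"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "Full high-risk AI obligations apply"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for start, label in MILESTONES if on >= start]

print(obligations_in_force(date(2025, 9, 1)))
# → ['Prohibited-AI provisions apply', 'GPAI model obligations apply']
```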
General-purpose AI model obligations
The EU AI Act includes specific provisions for general-purpose AI (GPAI) models — large foundation models like GPT-4, Claude, and Gemini. Providers of GPAI models must maintain technical documentation, comply with copyright law, and publish summaries of training data. Providers of GPAI models with systemic risk face additional obligations including model evaluation, adversarial testing, and incident reporting.
Meet EU AI Act requirements with TRAIGA platform
TRAIGA platform helps organizations meet EU AI Act obligations by providing: structured AI system inventory with EU AI Act risk classification, technical documentation templates for high-risk AI systems, conformity assessment evidence tracking, post-market monitoring documentation, human oversight mechanism recording, and controls mapped to EU AI Act Annex III requirements — all in the same platform used for TRAIGA Act and NIST AI RMF compliance.
What TRAIGA platform covers for EU AI Act
Risk Classification
Technical Documentation
Conformity Assessment
Human Oversight
Post-Market Monitoring
EU Database Registration
EU AI Act — frequently asked questions
Common questions from compliance officers, legal teams, and executives evaluating EU AI Act compliance obligations.
- Does the EU AI Act apply to non-EU companies?
- Yes. The EU AI Act applies to any organization whose AI systems affect EU residents — regardless of where the organization is based. A US company whose AI system processes data about or makes decisions affecting people in the EU must comply with the EU AI Act's requirements for that system. The extraterritorial reach is similar to GDPR.
- What is a 'provider' vs. a 'deployer' under the EU AI Act?
- A provider is any entity that develops an AI system, places it on the EU market, or puts it into service under its own name or trademark. A deployer is any entity that uses an AI system in a professional context — even if they didn't build it. Providers bear the heaviest EU AI Act obligations. Deployers have narrower but still significant obligations, including conducting fundamental rights impact assessments for certain high-risk uses and implementing appropriate human oversight.
- What is a conformity assessment?
- A conformity assessment is the formal process by which a provider demonstrates that their high-risk AI system meets the EU AI Act's requirements before placing it on the EU market. Most high-risk AI systems can undergo self-assessment (the provider certifies compliance themselves). Some high-risk categories — primarily AI for biometric identification and AI in critical infrastructure — require assessment by an independent notified body. TRAIGA platform tracks conformity assessment evidence and supports the documentation requirements for self-assessment.
- When does the EU AI Act fully apply?
- The EU AI Act has a phased timeline: prohibited AI provisions applied in February 2025; GPAI model obligations apply from August 2025; full high-risk AI obligations apply from August 2026. Organizations should be building their compliance infrastructure now — the documentation, risk management, and governance structures required cannot be built in weeks.
- How does the EU AI Act relate to GDPR?
- The EU AI Act and GDPR are separate but complementary regulations. GDPR governs the processing of personal data; the EU AI Act governs the development and use of AI systems. Many high-risk AI systems process personal data, so organizations must comply with both. GDPR's data minimization and purpose limitation principles are relevant to EU AI Act technical documentation requirements. TRAIGA platform helps organizations satisfy both frameworks from a single system of record.
Start your EU AI Act compliance program today
TRAIGA platform handles EU AI Act compliance documentation — plus every other major AI regulation — from a single platform. Free to start, first AI system inventoried in under 10 minutes.
Covers 6 AI frameworks simultaneously
Implement controls once — satisfy all regulations
Board governance reports in minutes