Is Your AI System High-Risk Under the EU AI Act? The Complete Classification Guide
Complete guide to EU AI Act risk classification. Learn if your AI system is prohibited, high-risk, limited-risk, or minimal-risk with practical examples and compliance requirements.
The EU AI Act risk classification determines everything about your compliance obligations. Get it wrong, and you could face fines of up to €35 million or 7% of global annual turnover, whichever is higher. Get it right, and you'll know exactly what's required to legally operate your AI system in the European Union.
This complete guide walks you through the four risk categories, provides practical classification examples, and gives you the tools to determine where your AI system fits. Whether you're building a chatbot, hiring algorithm, or diagnostic tool, understanding your risk level is the first step toward compliance.
Why EU AI Act Risk Classification Matters
The EU AI Act uses a risk-based approach to regulation. Higher-risk AI systems face stricter requirements:
- Prohibited AI (Article 5): Banned entirely — no exceptions
- High-Risk AI (Annex III): Extensive requirements including conformity assessment, CE marking, and continuous monitoring
- Limited-Risk AI (Article 50): Transparency obligations and user disclosure
- Minimal-Risk AI: No specific obligations under the AI Act
Getting your classification wrong isn't just a compliance issue — it's a business-critical decision that affects your go-to-market timeline, development costs, and legal liability.
The Four EU AI Act Risk Categories Explained
1. Prohibited AI Systems (Article 5) — Unacceptable Risk
These AI practices are completely banned in the EU, regardless of safeguards or human oversight.
Prohibited practices include:
- Subliminal manipulation techniques that operate beyond conscious awareness
- Social scoring by public authorities for general-purpose evaluation of citizens
- Real-time remote biometric identification by law enforcement in publicly accessible spaces (with narrow, enumerated exceptions)
- Exploiting vulnerabilities of specific groups due to age, disability, or economic situation
Examples of prohibited AI:
- ❌ An app using subliminal audio to influence purchasing decisions
- ❌ A city-wide citizen scoring system determining service access
- ❌ Real-time facial recognition in shopping malls for general surveillance
- ❌ AI targeting gambling ads at people with addiction vulnerabilities
Compliance requirements:
- Immediate cessation of prohibited practices
- No market placement permitted
- Criminal and civil liability for violations
2. High-Risk AI Systems (Annex III) — Strict Regulation
High-risk AI systems must undergo conformity assessment and meet extensive requirements before market placement. The EU AI Act defines high-risk systems in two ways:
Annex I (AI as Safety Component): AI systems used as safety components in products covered by existing EU legislation (medical devices, machinery, automotive, etc.)
Annex III (Specific High-Risk Use Cases): AI systems used in eight critical areas that significantly impact fundamental rights and safety.
The 8 Annex III High-Risk Categories:
1. Biometric Identification and Categorization
- Remote biometric identification systems
- Biometric categorization systems inferring sensitive attributes
- Examples: Airport facial recognition, emotion detection systems
2. Critical Infrastructure Management
- AI managing water, gas, electricity, or heating supply
- AI controlling transportation safety systems
- Examples: Smart grid algorithms, traffic management AI
3. Education and Vocational Training
- AI for educational institution admissions
- AI assessing learning progress or determining graduation
- Examples: University admission algorithms, automated exam scoring
4. Employment and Worker Management
- AI for recruitment, promotion, or performance evaluation
- AI for task allocation or monitoring worker behavior
- Examples: Resume screening tools, employee monitoring software
5. Access to Essential Services
- AI evaluating creditworthiness for loans or insurance
- AI determining access to healthcare, social benefits, or emergency services
- Examples: Credit scoring algorithms, insurance pricing models
6. Law Enforcement
- AI assessing individual risk of criminal offenses
- AI for lie detection or emotion recognition in law enforcement
- Examples: Predictive policing algorithms, AI polygraph systems
7. Migration, Asylum, and Border Control
- AI examining visa applications or detecting false documents
- AI assessing asylum claims or deportation risk
- Examples: Automated visa processing, border control screening
8. Administration of Justice and Democratic Processes
- AI assisting judicial decisions or influencing election outcomes
- AI evaluating legal evidence or determining court procedures
- Examples: Sentencing algorithms, election management systems
Article 6(3) Exception for High-Risk Systems
Important caveat: Not all Annex III use cases are automatically high-risk. Systems may qualify for an exception if they:
- Perform narrow procedural tasks (data formatting, simple calculations)
- Don't materially impact the outcome of decision-making
- Don't affect individuals' access to resources or opportunities
Example of Article 6(3) exception: A simple algorithm that alphabetically sorts job applications for HR review would likely qualify for the exception, while an AI that ranks candidates by predicted performance would not.
High-Risk AI Compliance Requirements
High-risk AI systems must implement:
Technical Requirements:
- Risk management system throughout the AI lifecycle (Article 9)
- Data governance and quality measures (Article 10)
- Detailed technical documentation (Article 11)
- Comprehensive record keeping (Article 12)
- Accuracy, robustness, and cybersecurity measures (Article 15)
Operational Requirements:
- Human oversight with meaningful intervention capability (Article 14)
- Transparency and information for deployers and users (Article 13)
- Conformity assessment by notified bodies or internal procedures
- CE marking and Declaration of Conformity
- Registration in the EU AI systems database
3. Limited-Risk AI Systems (Article 50) — Transparency Obligations
Limited-risk AI systems must inform users they're interacting with AI but face lighter requirements than high-risk systems.
Limited-risk categories include:
AI Systems Interacting with Humans:
- Chatbots and conversational AI systems
- Virtual assistants and customer service bots
- Examples: Website chatbots, AI customer support
Emotion Recognition and Biometric Categorization:
- AI systems detecting human emotions
- Systems categorizing people by biometric characteristics
- Examples: Sentiment analysis tools, demographic detection
AI-Generated Synthetic Content:
- Systems generating synthetic audio, video, images, or text
- Deepfake creation and manipulation tools
- Examples: AI art generators, synthetic media tools
Compliance requirements for limited-risk AI:
- Clear disclosure to users that they're interacting with AI
- Appropriate transparency measures based on system type
- Content labeling for synthetic audio, video, images, and text
- User awareness of AI-generated content
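For chatbots and synthetic-media tools, the disclosure and labeling duties above can be wired directly into the output path. A minimal sketch in Python, where the helper names and the label format are illustrative assumptions, not wording prescribed by the Act:

```python
from dataclasses import dataclass

# Illustrative disclosure text; the Act requires clear disclosure,
# not this exact wording.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class GeneratedContent:
    body: str
    ai_generated: bool

    def render(self) -> str:
        # Label machine-generated output so users can recognize it
        # as synthetic (Article 50-style transparency).
        prefix = "[AI-generated] " if self.ai_generated else ""
        return prefix + self.body

def open_chat_session() -> str:
    # Surface the disclosure before the first AI reply.
    return AI_DISCLOSURE
```

In a real deployment the disclosure would appear in the interface before the first exchange, and labels for synthetic audio or video would typically use machine-readable marking rather than a text prefix.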
4. Minimal-Risk AI Systems — Self-Regulation
The majority of AI systems fall into the minimal-risk category and carry no specific obligations under the EU AI Act. However, providers may voluntarily adopt codes of conduct.
Examples of minimal-risk AI:
- ✅ Email spam filters and content recommendation algorithms
- ✅ AI-powered video game characters and entertainment systems
- ✅ Inventory management and supply chain optimization
- ✅ AI photo editing and enhancement tools
- ✅ Language translation services for general use
- ✅ AI-assisted software development tools
Step-by-Step AI Classification Process
Follow these steps to classify your AI system:
Step 1: Check for Prohibited Practices (Article 5). Review whether your system performs any banned practice. If it does, it cannot be placed on the EU market.
Step 2: Assess High-Risk Categories (Annex I & III)
- Does your AI function as a safety component in regulated products? (Annex I)
- Is it used in any of the 8 high-risk use cases? (Annex III)
- If yes to Annex III, could the Article 6(3) exception apply?
Step 3: Evaluate Limited-Risk Requirements (Article 50)
- Does it interact directly with humans?
- Does it detect emotions or categorize people biometrically?
- Does it generate synthetic content?
Step 4: Default Classification. If none of the above apply, your system is minimal-risk.
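The four screening steps above can be sketched as a simple decision function. This is an illustrative triage aid only (the predicate names are assumptions, and actual classification requires legal analysis of your specific system), but it captures the ordering: prohibited first, then high-risk, then transparency triggers, else minimal.

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited-risk (Article 50)"
    MINIMAL = "minimal-risk"

def classify(
    uses_prohibited_practice: bool,
    is_annex_i_safety_component: bool,
    matches_annex_iii_use_case: bool,
    qualifies_for_art_6_3_exception: bool,
    interacts_with_humans: bool,
    detects_emotions_or_biometrics: bool,
    generates_synthetic_content: bool,
) -> RiskLevel:
    """Mirror the four-step screening order from the text."""
    if uses_prohibited_practice:
        return RiskLevel.PROHIBITED
    if is_annex_i_safety_component:
        return RiskLevel.HIGH
    if matches_annex_iii_use_case and not qualifies_for_art_6_3_exception:
        return RiskLevel.HIGH
    if (interacts_with_humans
            or detects_emotions_or_biometrics
            or generates_synthetic_content):
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```

Running the examples from this guide through the sketch: a CV-ranking recruitment tool (Annex III use case, no exception) comes back high-risk, while a website chatbot comes back limited-risk.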
Real-World Classification Examples
Healthcare AI Diagnostics:
- High-Risk (Annex I): AI diagnostic tool providing treatment recommendations
- Minimal-Risk: AI organizing medical images for human review
Recruitment AI:
- High-Risk (Annex III): AI screening candidates and ranking them for hiring decisions
- Limited-Risk: Chatbot answering candidate questions during application process
- Minimal-Risk: AI scheduling interviews based on calendar availability
Financial Services AI:
- High-Risk (Annex III): Credit scoring algorithm determining loan approval
- Limited-Risk: Chatbot helping customers understand financial products
- Minimal-Risk: AI detecting fraudulent transactions for internal review
Educational AI:
- High-Risk (Annex III): AI system evaluating student performance for graduation
- Minimal-Risk: AI organizing course materials by topic
Foundation Models and General-Purpose AI
Large language models and other general-purpose AI models trained with ≥10²⁵ FLOPs of compute are presumed to pose systemic risk and face additional obligations:
- Systemic risk evaluation and mitigation
- Model documentation and information sharing
- Incident reporting and monitoring
- Cybersecurity measures and testing
Examples include systems like GPT-4, Claude, and other large-scale AI models used as the foundation for downstream applications.
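The 10²⁵ FLOPs threshold can be sanity-checked with a common back-of-envelope estimate for dense transformers: training compute ≈ 6 × parameters × training tokens. Note that this heuristic is a community rule of thumb, not part of the Act:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # threshold at which systemic risk is presumed

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb compute for dense transformer training:
    roughly 6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    # Compare the estimate against the regulatory threshold.
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS
```

For example, a 70-billion-parameter model trained on 15 trillion tokens lands at about 6.3 × 10²⁴ FLOPs, just under the threshold, while a 1-trillion-parameter model on the same data would be well above it.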
EU AI Act Compliance Timeline
Understanding when requirements take effect:
- February 2025: Prohibited AI practices banned
- August 2025: General-purpose AI model obligations begin
- August 2026: Most remaining provisions apply, including requirements for Annex III high-risk systems
- August 2027: Requirements for high-risk AI embedded in regulated products (Annex I) apply
Important: Many obligations are already in effect. Don't wait until the final deadline to begin compliance preparations.
Common Classification Mistakes to Avoid
Mistake 1: Assuming B2B systems are automatically minimal-risk. Even internal AI systems can be high-risk if they affect employment decisions or critical infrastructure.
Mistake 2: Overlooking Article 6(3) exceptions. Many systems that appear high-risk may qualify for exceptions if they perform only narrow procedural tasks.
Mistake 3: Ignoring limited-risk transparency requirements. Customer-facing AI systems often need clear disclosure even if they're not high-risk.
Mistake 4: Focusing only on the primary use case. Consider all possible applications and deployment contexts of your AI system.
When to Seek Professional Help
Consider professional legal guidance if:
- Your system could potentially be classified as high-risk
- You're uncertain about Article 6(3) exception applicability
- Your AI processes sensitive data or affects vulnerable populations
- You plan to deploy across multiple EU member states
- Misclassification would significantly impact your business model
Tools for AI Classification
Several resources can help with initial classification:
- Official EU AI Office guidance documents on EUR-Lex
- Free risk classification tools like Fortai's assessment
- Legal consultation for complex or edge cases
- Industry-specific guidance from relevant trade associations
For an immediate assessment of your AI system's risk level, try our free classification tool — it takes 5 minutes and provides a detailed compliance report.
Next Steps After Classification
Once you know your risk level:
For High-Risk Systems:
- Begin conformity assessment planning
- Implement required technical documentation
- Establish risk management processes
- Plan for CE marking and registration
For Limited-Risk Systems:
- Design appropriate transparency measures
- Implement user disclosure mechanisms
- Prepare content labeling for synthetic media
For Minimal-Risk Systems:
- Consider voluntary codes of conduct
- Monitor for any changes in use case or regulation
- Maintain awareness of GDPR and other applicable laws
Conclusion
EU AI Act risk classification is the foundation of AI compliance strategy. While the framework provides clear categories, practical application requires careful analysis of your specific system, use case, and deployment context.
The stakes are high — classification errors can result in significant penalties, business disruption, and legal liability. But with proper understanding and appropriate tools, classification becomes a manageable first step toward EU AI Act compliance.
Start with a thorough assessment of your AI system's functionality, review the specific risk categories, and seek professional guidance when needed. For complex systems or unclear cases, professional legal consultation is a worthwhile investment in compliance certainty.
Ready to classify your AI system? Use our free EU AI Act risk classification tool to get an instant assessment with detailed compliance guidance.
Related Articles:
- EU AI Act Risk Levels Explained: Complete Guide to All Categories
- EU AI Act Annex III: All 8 High-Risk Categories Explained
- AI Act vs GDPR: Key Differences for Your Business
This article is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for specific compliance questions regarding your AI systems.