The EU AI Act entered into force on 1 August 2024, and its most consequential provisions for employers take effect on 2 August 2026. For any company using AI in recruitment — candidate scoring, CV screening, automated shortlisting, interview scheduling based on algorithmic recommendations — this regulation fundamentally changes the compliance landscape. The Act classifies AI systems used in “employment, workers management and access to self-employment” as high-risk, triggering a comprehensive set of obligations that most current ATS platforms are not yet equipped to meet.
This guide breaks down what the EU AI Act requires, what your ATS must do to comply, and what practical steps HR teams should take before the August 2026 deadline. It is written for HR professionals and recruitment leaders, not lawyers — though you should involve your legal team in the implementation.
AI disclosure
This article was written by the Treegarden editorial team with AI assistance for research and drafting. All legal references have been verified against the official EU AI Act text (Regulation (EU) 2024/1689). This is not legal advice — consult qualified legal counsel for your specific situation.
What Is the EU AI Act and Why Does It Matter for Recruitment
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system: AI applications are categorised as unacceptable risk, high-risk, limited risk, or minimal risk, with corresponding regulatory obligations.
For recruitment technology, the classification is unambiguous. Annex III of the Act explicitly lists “AI systems intended to be used for recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates” as high-risk AI systems. This classification applies regardless of company size, and it covers any organisation deploying AI recruitment tools within the EU market or affecting EU-based candidates.
The numbers underscore why this matters: according to a 2024 Society for Human Resource Management survey, 87% of companies now use some form of AI in their hiring process. Most of these tools — from basic CV parsing to advanced candidate scoring — fall within the Act's high-risk classification.
Why Recruitment AI Is Classified as High-Risk
The rationale for high-risk classification is straightforward: AI-driven hiring decisions directly affect people's livelihoods. A flawed algorithm that systematically disadvantages candidates based on gender, age, ethnicity, or disability status causes measurable harm at scale — potentially affecting thousands of applicants before anyone detects the pattern.
The EU legislators identified several specific concerns:
- Power asymmetry. Candidates have limited ability to understand or challenge AI-driven decisions. They often do not know AI is being used, much less how it scored them.
- Scale of impact. A single biased algorithm deployed across thousands of applications causes more harm than any individual recruiter's unconscious bias.
- Historical bias amplification. AI models trained on historical hiring data can encode and amplify past discrimination patterns, as demonstrated by Amazon's abandoned CV screening tool in 2018.
- Opacity. Many AI scoring systems operate as black boxes, making it impossible for candidates or employers to understand why a particular score was assigned.
The August 2026 Deadline: What Changes
The EU AI Act's high-risk obligations become enforceable on 2 August 2026. After this date, organisations deploying high-risk AI systems in recruitment must demonstrate compliance with the following requirements:
| Requirement | Article | What It Means for Your ATS |
|---|---|---|
| Risk management system | Art. 9 | Documented process for identifying and mitigating AI risks in hiring |
| Data governance | Art. 10 | Training data must be relevant, representative, and free from bias |
| Technical documentation | Art. 11 | Complete documentation of how AI scoring works |
| Record-keeping | Art. 12 | Automatic logging of all AI decisions for audit purposes |
| Transparency | Art. 13 | Users must understand how the AI system works and its limitations |
| Human oversight | Art. 14 | Humans must be able to override, intervene in, or halt AI decisions |
| Accuracy and robustness | Art. 15 | AI must perform consistently and predictably |
| Cybersecurity | Art. 15 | Protection against manipulation of AI outputs |
The penalties are severe: fines under the Act reach up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices, and up to €15 million or 3% for non-compliance with the high-risk obligations above. For context, GDPR fines cap at €20 million or 4% of turnover. The EU AI Act's enforcement regime is deliberately more aggressive.
Human Oversight Requirements Under Article 14
Article 14 is arguably the most operationally significant provision for HR teams. It requires that high-risk AI systems are “designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.”
In practical terms, this means the following (a short code sketch after the list illustrates the pattern):
- No fully automated rejection. You cannot configure your ATS to automatically reject candidates based solely on an AI score without a human reviewing the decision.
- Override capability. Recruiters must be able to override any AI recommendation — including moving a low-scored candidate forward or rejecting a high-scored candidate.
- Comprehensible outputs. AI scores must be presented in a way that allows the human overseer to understand and interpret them, not just as opaque numbers.
- Intervention mechanism. There must be a way to halt the AI system's operation if it produces unexpected or potentially discriminatory outputs.
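What this can look like in an ATS backend is easiest to show with a small sketch. The code below is illustrative only: the names `AiRecommendation`, `HumanDecision`, and `apply_pipeline_move` are hypothetical and not taken from any particular product. The key property is that the AI output is stored as a suggestion, and a candidate only changes stage once a named human has confirmed or overridden it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical illustration: the AI output is a suggestion, never a final decision.

@dataclass
class AiRecommendation:
    candidate_id: str
    suggested_stage: str          # e.g. "shortlist" or "reject"
    score: float                  # 0.0 - 1.0
    rationale: str                # human-readable explanation of the score

@dataclass
class HumanDecision:
    reviewer_id: str              # the natural person exercising oversight
    confirmed_stage: str          # may differ from the AI suggestion (an override)
    decided_at: datetime
    note: Optional[str] = None

def apply_pipeline_move(rec: AiRecommendation,
                        decision: Optional[HumanDecision]) -> str:
    """Only move a candidate once a human has confirmed or overridden the AI suggestion."""
    if decision is None:
        # No human review yet: the candidate stays put, whatever the AI score says.
        return "pending_human_review"
    if decision.confirmed_stage != rec.suggested_stage:
        # Override: record it so the audit trail shows the human intervention.
        print(f"OVERRIDE by {decision.reviewer_id}: "
              f"{rec.suggested_stage} -> {decision.confirmed_stage}")
    return decision.confirmed_stage

# Example: a low AI score alone never rejects anyone.
rec = AiRecommendation("cand-42", suggested_stage="reject", score=0.31,
                       rationale="Missing required certification; short tenure in last role")
print(apply_pipeline_move(rec, None))  # -> "pending_human_review"
decision = HumanDecision("recruiter-7", "shortlist", datetime.now(timezone.utc),
                         note="Certification equivalent via experience")
print(apply_pipeline_move(rec, decision))  # -> "shortlist" (human override, logged)
```

The design choice worth copying is the default state: a scored candidate sits in pending human review until someone acts, so a misconfigured automation cannot quietly turn a low score into a rejection.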
How Treegarden addresses human oversight
Treegarden has implemented technical safeguards aligned with the EU AI Act's human oversight requirements. AI candidate scores are presented as recommendations with detailed explanations of strengths and weaknesses, not as automated decisions. Recruiters review all AI-scored candidates in a dedicated review queue before any pipeline movement occurs. The system operates in “suggest mode” by default, where AI recommends pipeline stages but requires human confirmation. An override button is available on every candidate card, and all overrides are logged for audit purposes.
Transparency Obligations for AI Hiring Tools
Article 13 requires that AI systems provide sufficient transparency for users to “interpret the system's output and use it appropriately.” For recruitment AI, this translates into two distinct transparency obligations:
- Transparency to recruiters (deployers). HR teams must understand how the AI scores candidates, what factors influence scores, and what the system's known limitations are. This is not optional — deployers who cannot explain how their AI works are in breach of the Act.
- Transparency to candidates (affected persons). Under Article 26(11), candidates must be informed that they are subject to a high-risk AI system in the recruitment process. Under Article 86, they also have the right to a clear and meaningful explanation of the role the AI played in the decision, particularly if they are rejected.
This effectively eliminates “black box” AI scoring in recruitment. If your ATS vendor cannot explain how their AI model produces scores, you cannot legally deploy it for hiring decisions in the EU after August 2026.
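To make the idea of an interpretable score concrete, here is a sketch of an explanation record a scoring service could return alongside the number. The field names are hypothetical and the weights invented; the point is that every contributing factor carries a plain-language detail and a direction, so a recruiter can interpret the score and a candidate-facing explanation (Article 86) can be generated from the same record.

```python
# Hypothetical example of a score explanation record (field names are illustrative).
score_explanation = {
    "candidate_id": "cand-42",
    "overall_score": 0.74,
    "factors": [
        {"factor": "required_skill_match", "detail": "6 of 7 required skills found",   "impact": +0.22},
        {"factor": "years_of_experience",  "detail": "5 years vs. 3 required",          "impact": +0.10},
        {"factor": "certification",        "detail": "Required certification missing",  "impact": -0.08},
    ],
    "known_limitations": "Parsed from free-text CV; non-standard formats may be under-scored.",
    "model_version": "2026-01",
}

def candidate_facing_explanation(expl: dict) -> str:
    """Render a plain-language explanation a candidate could be given on request."""
    lines = [f"Overall assessment: {expl['overall_score']:.0%} match for the role."]
    for f in expl["factors"]:
        direction = "helped" if f["impact"] > 0 else "lowered"
        lines.append(f"- {f['detail']} ({direction} the score)")
    return "\n".join(lines)

print(candidate_facing_explanation(score_explanation))
```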
Transparency is not just a legal requirement — it builds trust
Research published in Harvard Business Review suggests that candidates who receive explanations of AI-driven decisions are 40% more likely to view the hiring process as fair, even when the outcome is a rejection. Transparency improves employer brand perception and reduces the likelihood of complaints or legal challenges. Meeting the EU AI Act's transparency requirements is simultaneously a compliance obligation and a candidate experience improvement.
Technical Requirements: Logging, Accuracy, and Robustness
Beyond human oversight and transparency, the EU AI Act imposes several technical requirements on high-risk AI systems:
Automatic Logging (Article 12)
Your ATS must automatically log every AI-assisted decision for as long as the system is in use, and as a deployer you must retain those logs for a period appropriate to the system's purpose and for at least six months, unless other applicable law requires longer (Article 26(6)). These logs must include (a sketch of a single log entry follows the list):
- The input data used for each AI decision (candidate profile, job requirements)
- The AI output (score, recommendation, ranking position)
- Any human override or intervention
- Timestamps and user identification for audit traceability
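As a sketch (the structure and field names are illustrative, not a prescribed format), a single log entry covering those four elements could look like this, written to an append-only store so the trail is easy to export for a regulator or auditor:

```python
import json
from datetime import datetime, timezone

# Illustrative append-only log entry covering inputs, output, human action, and traceability.
log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "event": "ai_candidate_scored",
    "input": {
        "candidate_id": "cand-42",
        "job_id": "req-2031",
        "cv_document_hash": "sha256:9f2c",   # reference to the exact input, not the raw CV
    },
    "output": {
        "score": 0.74,
        "recommendation": "shortlist",
        "ranking_position": 12,
        "model_version": "2026-01",
    },
    "human_action": {
        "type": "override",
        "user_id": "recruiter-7",
        "resulting_stage": "interview",
    },
}

# One JSON line per decision keeps the audit trail simple to store and export.
with open("ai_decision_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(log_entry) + "\n")
```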
Accuracy and Robustness (Article 15)
AI systems must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. This means:
- Regular validation of AI scoring accuracy against real hiring outcomes
- Protection against adversarial inputs (e.g., CV manipulation to game AI scores)
- Graceful degradation: the system must remain functional even when the AI component encounters errors (see the sketch after this list)
- Security measures to prevent tampering with AI model outputs
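The graceful degradation point is the easiest to illustrate in code. In the sketch below, `score_candidate` stands in for a hypothetical call to the AI component; if it fails, the failure is logged and the application is routed to ordinary manual screening rather than being dropped or left stuck.

```python
import logging

logger = logging.getLogger("ats.scoring")

def score_candidate(candidate_id: str) -> float:
    """Hypothetical call to the AI scoring component; may raise on model or service errors."""
    raise TimeoutError("scoring service unavailable")  # simulate a failure for the example

def screen_candidate(candidate_id: str) -> dict:
    """Degrade gracefully: a scoring failure must not reject or stall the candidate."""
    try:
        score = score_candidate(candidate_id)
        return {"candidate_id": candidate_id, "score": score, "route": "ai_assisted_review"}
    except Exception as exc:
        # Record the failure for the audit trail and fall back to manual screening.
        logger.warning("AI scoring failed for %s: %s", candidate_id, exc)
        return {"candidate_id": candidate_id, "score": None, "route": "manual_review"}

print(screen_candidate("cand-42"))  # -> routed to manual_review, not dropped
```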
Practical Compliance Checklist for HR Teams
If your organisation uses AI in recruitment and operates within the EU or hires EU-based candidates, use this checklist to assess your readiness:
- Inventory your AI tools. List every AI component in your hiring process: CV parsing, candidate scoring, chatbots, scheduling algorithms, knockout question logic. Determine which qualify as high-risk under Annex III.
- Verify vendor compliance. Contact your ATS vendor and ask for their EU AI Act compliance roadmap. If they do not have one, consider this a red flag.
- Implement human oversight. Ensure no candidate is rejected purely by AI without human review. Configure your ATS to require manual confirmation for all rejection decisions.
- Enable score explanations. If your AI scores candidates, ensure recruiters can see why each score was assigned — not just the number.
- Activate audit logging. Ensure your ATS logs all AI decisions, human overrides, and pipeline movements with timestamps.
- Inform candidates. Update your privacy policy and application process to disclose AI usage. Include information about candidates' rights under GDPR Article 22 and the EU AI Act.
- Conduct a bias audit. Analyse AI scoring outputs across demographic groups to identify potential systematic disparities (a simple first check is sketched after this checklist).
- Document your risk management process. Create and maintain a record of how you identify, assess, and mitigate risks associated with AI recruitment tools.
- Train your team. Ensure all recruiters and hiring managers understand how the AI tools work, their limitations, and when to override AI recommendations.
- Assign responsibility. Designate an internal owner for AI compliance in recruitment — this could be the Head of HR, the DPO, or a dedicated AI governance role.
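For the bias audit step, a common first check is to compare selection rates across groups using the "four-fifths" rule of thumb: if the rate at which the AI advances candidates from one group falls below roughly 80% of the rate for the most favoured group, the disparity deserves investigation. The sketch below assumes you can export per-candidate AI outcomes together with a lawfully processed, voluntarily provided group attribute; the data shown is invented.

```python
from collections import defaultdict

# Illustrative export: (group, advanced_by_ai) pairs from your ATS analytics.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"advanced": 0, "total": 0})
for group, advanced in outcomes:
    counts[group]["total"] += 1
    counts[group]["advanced"] += int(advanced)

rates = {g: c["advanced"] / c["total"] for g, c in counts.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A flagged ratio is not proof of discrimination, but it is exactly the kind of systematic disparity your risk management documentation should show you looked for.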
Penalties for Non-Compliance: What Is at Stake
The EU AI Act establishes a tiered penalty structure that exceeds GDPR in severity:
- Prohibited AI practices: Up to €35 million or 7% of global annual turnover
- High-risk non-compliance: Up to €15 million or 3% of global annual turnover
- Supplying incorrect information to authorities: Up to €7.5 million or 1.5% of turnover
These penalties apply to both AI providers (the vendor who built the tool) and deployers (the employer using it). As an employer, you cannot simply claim that compliance is your vendor's responsibility. Article 26 places specific obligations on deployers, including ensuring human oversight, informing affected persons, and maintaining records.
Beyond financial penalties, non-compliance creates litigation risk. Candidates who believe they were subjected to discriminatory AI screening can bring claims under both the EU AI Act and existing employment discrimination law, creating a dual enforcement pathway.
How Treegarden Is Building Toward EU AI Act Compliance
Treegarden has been proactively addressing EU AI Act obligations since the regulation's final text was published. The platform's AI recruitment features are designed with EU AI Act requirements in mind:
- Explainable scoring. Every AI candidate score includes a detailed breakdown showing which skills, experience factors, and qualification matches contributed to the score. Recruiters see strengths and weaknesses, not just a number.
- Human-in-the-loop architecture. Treegarden's AI operates in suggest mode by default. AI recommends pipeline stages and scores candidates, but all decisions require human confirmation. No candidate is automatically rejected based on AI scoring alone.
- Comprehensive audit trails. Every AI decision, human override, pipeline movement, and score explanation is logged with timestamps and user identification. These logs are exportable for regulatory audit.
- Candidate transparency. Treegarden supports AI disclosure notices in the application process and provides mechanisms for candidates to request explanation of AI-assisted decisions.
- Bias monitoring dashboard. Score distribution analytics allow HR teams to monitor AI scoring patterns across demographic groups and detect potential bias before it becomes a compliance issue.
- GDPR Article 22 integration. Treegarden includes opt-out mechanisms for automated processing and a human review workflow for candidates who exercise their rights.
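As a simple illustration of how an opt-out can be honoured (a hypothetical sketch, not a description of Treegarden's internal implementation), the application record carries the candidate's choice and the AI scoring step is skipped entirely for anyone who has opted out:

```python
def route_application(candidate_id: str, opted_out_of_ai: bool) -> str:
    """Respect a GDPR Art. 22 opt-out: skip AI scoring and route to human-only review."""
    if opted_out_of_ai:
        return "human_only_review"   # no AI scoring is run for this application
    return "ai_assisted_review"      # AI may score, subject to the human oversight gate above

print(route_application("cand-42", opted_out_of_ai=True))   # -> human_only_review
print(route_application("cand-43", opted_out_of_ai=False))  # -> ai_assisted_review
```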
Building toward full EU AI Act compliance
Treegarden's compliance roadmap includes ongoing development of risk management documentation, enhanced logging capabilities, and integration with third-party bias auditing tools. The platform is designed to evolve alongside regulatory requirements, ensuring that customers using Treegarden's AI features are building toward compliance rather than retrofitting it. Request a demo to see the compliance features in action.
FAQ
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act has extraterritorial scope, similar to GDPR. If your AI recruitment tools process applications from EU-based candidates or if the output of the AI system is used within the EU, the Act applies to your organisation regardless of where you are headquartered. US and UK companies hiring in the EU must comply.
Is basic CV parsing considered high-risk AI under the Act?
It depends on how the parsed data is used. CV parsing that simply extracts structured data from documents (name, contact details, work history) without making or influencing selection decisions may not meet the high-risk threshold. However, if the parsed data feeds directly into a scoring or ranking algorithm that influences hiring decisions, the combined system is likely high-risk. The safest approach is to treat any AI system that influences candidate selection as high-risk.
What should I ask my ATS vendor about EU AI Act compliance?
Ask for their EU AI Act compliance roadmap with specific milestones. Ask whether their AI scoring provides explanations, whether the system supports human override at every decision point, whether all AI decisions are logged for audit, and whether they have conducted or plan to conduct a bias audit of their scoring models. If the vendor cannot answer these questions substantively, they are behind on compliance preparation.
The August 2026 deadline is approaching fast. Organisations that start preparing now will be well-positioned to meet their obligations without disrupting their hiring operations. Those that wait risk last-minute scrambling, vendor switches, and potential enforcement exposure. Treegarden is building toward full EU AI Act compliance, with human oversight, explainable scoring, and audit trails available today. Request a free demo to see how these features work in practice.