The appeal of fully automated hiring is obvious: AI screens candidates, scores them, and moves the top matches forward — all without a recruiter touching the application. For high-volume roles receiving hundreds of applications, this sounds like the solution to every capacity problem.
The reality is more complicated. Fully automated hiring decisions create legal exposure under both GDPR and the EU AI Act. They also produce worse outcomes than human-AI collaboration, because AI lacks the contextual judgment that experienced recruiters bring to nuanced evaluation decisions. The most effective approach in 2026 is not replacing human judgment with AI, but using AI to augment human judgment — what technologists call “human-in-the-loop” design.
AI disclosure
This article was written by the Treegarden editorial team with AI assistance for research and drafting. Legal references have been verified against official regulation texts. This is not legal advice — consult qualified legal counsel for your specific compliance situation.
What Human-in-the-Loop Means for AI Recruitment
Human-in-the-loop (HITL) is a design principle where AI systems operate under human supervision, with humans retaining the ability to review, override, or halt AI decisions at critical points. In recruitment, this means:
- AI analyses and recommends. The AI scores candidates, suggests pipeline movements, identifies top matches, and flags potential concerns. These are recommendations, not decisions.
- Humans decide and act. A recruiter reviews the AI's recommendations, applies contextual judgment, and makes the actual decision to shortlist, reject, or advance a candidate.
- Humans can override. At any point, the recruiter can override an AI recommendation — advancing a candidate the AI scored low, or holding back a candidate the AI scored high.
This is fundamentally different from “human-on-the-loop” (where humans monitor AI but do not intervene in individual decisions) or “human-out-of-the-loop” (where AI operates autonomously). For high-risk AI systems such as recruitment tools, the EU AI Act's human oversight obligations effectively mandate the human-in-the-loop model.
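To make the recommend/decide/override split concrete, here is a minimal TypeScript sketch of a HITL decision boundary. All names are illustrative rather than taken from any particular ATS; the property that matters is that the AI can only ever produce a recommendation, and only a human-invoked action produces a decision that takes effect.

```typescript
// Illustrative HITL types -- hypothetical names, not a specific ATS's API.
type Recommendation = {
  candidateId: string;
  suggestedAction: "advance" | "reject" | "hold";
  score: number;       // AI match score, e.g. 0-100
  explanation: string; // human-readable rationale, needed for meaningful review
};

type Decision = {
  candidateId: string;
  action: "advance" | "reject" | "hold";
  decidedBy: string;   // always a human user ID; the AI never produces this type
  overrodeAI: boolean; // true when the human disagreed with the AI
  reason?: string;     // required when overrodeAI is true
  decidedAt: Date;
};

// Only this function, invoked by a human action, turns a Recommendation
// into a Decision that changes the pipeline.
function confirm(
  rec: Recommendation,
  userId: string,
  action: Decision["action"],
  reason?: string
): Decision {
  const overrodeAI = action !== rec.suggestedAction;
  if (overrodeAI && !reason) {
    throw new Error("An override must carry a reason for the audit trail");
  }
  return {
    candidateId: rec.candidateId,
    action,
    decidedBy: userId,
    overrodeAI,
    reason,
    decidedAt: new Date(),
  };
}
```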
Why Fully Automated Hiring Decisions Are Legally Risky
The legal landscape is unambiguous: fully automated hiring decisions — where AI rejects or advances candidates without human review — are restricted or prohibited in most major jurisdictions:
| Jurisdiction | Regulation | Automated Decision Restriction |
|---|---|---|
| EU / EEA | GDPR Art. 22 | Individuals have the right not to be subject to solely automated decisions with significant effects |
| EU / EEA | EU AI Act Art. 14 | High-risk AI must include human oversight; humans must be able to override, intervene, or halt |
| United Kingdom | UK GDPR Art. 22 | Same as EU GDPR — solely automated decisions restricted |
| United States | EEOC guidance | Employers liable for discriminatory AI outcomes; human review is a key defense |
| New York City | LL 144 | AEDTs require bias audits and candidate notification; human alternatives must be available |
| Illinois | AI Video Interview Act | Consent and notification required; candidates can request human alternative |
| Colorado | AI Act (2026) | Impact assessments required for high-risk AI employment decisions |
The pattern across jurisdictions is clear: legislators and regulators expect human involvement in AI-assisted hiring decisions. An ATS configured to automatically reject candidates based solely on AI scores — without any human reviewing those rejections — creates compliance exposure in virtually every regulated market.
EU AI Act Article 14: The Human Oversight Obligation
Article 14 of the EU AI Act establishes specific human oversight requirements for high-risk AI systems. For recruitment AI, the key provisions are:
- Effective oversight by natural persons. The AI system must be designed so that humans can “properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation.”
- Ability to interpret outputs. The human overseer must be able to “correctly interpret the high-risk AI system's output” — which requires explainable scoring, not opaque numbers.
- Ability to decide not to use the output. Humans must be able to “decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system.”
- Ability to intervene or halt. The human overseer must be able to “intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure.”
These are not optional design suggestions. They are binding obligations on both providers (ATS vendors) and deployers (employers). An ATS that does not support human override at every AI-influenced decision point fails to meet Article 14's requirements.
GDPR Article 22: The Prohibition on Solely Automated Decisions
GDPR Article 22(1) states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
In recruitment, this applies directly to AI screening. Being rejected from a job “similarly significantly affects” a candidate. If the rejection is based solely on an AI score without meaningful human involvement, it breaches Article 22.
The key phrase is “solely on automated processing.” If a human recruiter genuinely reviews the AI's recommendation and makes an independent judgment before the rejection decision is finalised, the decision is no longer solely automated and Article 22 is not engaged. If the human review is nominal — a rubber stamp on AI decisions — it does not change the analysis: the CJEU's SCHUFA judgment (C-634/21) confirmed that Article 22 can apply even where a human formally takes the final step, provided the automated output plays a determining role in the outcome.
What counts as “meaningful” human involvement?
The Article 29 Working Party (now the European Data Protection Board) has clarified that for human involvement to be meaningful, the human must have the authority and competence to change the decision, must actually exercise their own judgment, and must not merely apply the automated decision without genuine consideration. A recruiter who spends 2 seconds clicking “approve” on every AI recommendation is not providing meaningful oversight. A recruiter who reviews the AI explanation, considers the candidate's profile, and makes a deliberate decision — even if they usually agree with the AI — is providing meaningful oversight.
Suggest Mode vs Auto Mode: Two Approaches to AI in Hiring
Most modern ATS platforms with AI features offer some variation of two operational modes:
Suggest Mode (Human-in-the-Loop)
In suggest mode, AI analyses candidates and produces recommendations, but all actions require human confirmation:
- AI scores candidates and ranks them by match quality
- AI recommends pipeline stage movements (e.g., “suggest moving to Phone Screen”)
- AI flags candidates that may not meet minimum requirements
- Recruiter reviews each recommendation and confirms, modifies, or rejects it
- No candidate is automatically moved, rejected, or advanced without human action
Auto Mode (Human-on-the-Loop)
In auto mode, AI takes actions autonomously based on predefined rules:
- AI automatically rejects candidates below a score threshold
- AI automatically advances top-scoring candidates to the next pipeline stage
- AI sends automated rejection or advancement emails without human review
- Human recruiter reviews outcomes periodically rather than approving individual decisions
Under the EU AI Act and GDPR, suggest mode is compliant by design. Auto mode is legally problematic unless substantial additional safeguards are in place — and even then, the “solely automated” question under GDPR Article 22 remains challenging to satisfy.
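The difference between the modes is easy to see in code. A generic sketch under assumed names, not any vendor's API:

```typescript
// Contrast sketch: the same AI output routed through the two modes.
type Rec = { candidateId: string; score: number; suggestedAction: "advance" | "reject" };

function suggestMode(recs: Rec[], reviewQueue: Rec[]): void {
  // Suggest mode: recommendations only accumulate in a review queue; nothing
  // in the pipeline changes until a recruiter explicitly acts on each entry.
  reviewQueue.push(...recs);
}

function autoMode(recs: Rec[], threshold: number): { candidateId: string; action: string }[] {
  // Auto mode: the action is taken on the score alone. In GDPR Art. 22 terms,
  // each of these outcomes is "based solely on automated processing", because
  // no human judgment enters before the decision takes effect.
  return recs.map((r) => ({
    candidateId: r.candidateId,
    action: r.score >= threshold ? "advance" : "reject",
  }));
}
```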
Treegarden's suggest mode: AI recommends, humans decide
Treegarden operates in suggest mode by default. When AI scores candidates, the scores appear as recommendations alongside a detailed explanation of strengths and weaknesses. Pipeline movement suggestions appear in a review queue where recruiters can accept, modify, or reject each recommendation. No candidate is automatically rejected or advanced based on AI scoring alone. This architecture is designed with EU AI Act Article 14 requirements in mind, ensuring that human oversight is built into the workflow rather than bolted on as an afterthought.
Designing a Human-in-the-Loop Recruitment Workflow
Implementing HITL effectively requires more than adding a “confirm” button. The workflow must be designed so that human review is substantive, efficient, and scalable:
- AI pre-sorts and scores. All incoming applications are parsed and scored automatically. Results are presented to the recruiter sorted by match score, with AI recommendations clearly labelled.
- Recruiter reviews top candidates. The recruiter reviews the AI's top-ranked candidates first, reading the AI explanation alongside the candidate's actual CV. For high-volume roles, reviewing the top 20 to 30% may be sufficient if the AI is well-calibrated.
- Recruiter reviews AI-flagged rejections. Before any rejection is finalised, the recruiter reviews the AI's rationale for low scores. This catches cases where the AI missed relevant experience or penalised non-standard career paths unfairly.
- Human confirms all pipeline movements. Whether advancing or rejecting, the recruiter clicks to confirm each decision. Batch confirmation is acceptable for efficiency, provided the recruiter has genuinely reviewed the AI recommendations before confirming.
- Override with reason logging. When the recruiter disagrees with the AI, the system logs the override with the recruiter's reasoning. This creates an audit trail and a feedback mechanism for improving AI accuracy.
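The final step above, override with reason logging, is worth sketching because it doubles as the compliance audit trail. A minimal sketch with hypothetical names:

```typescript
// Hypothetical audit-trail sketch for override logging.
type OverrideRecord = {
  candidateId: string;
  recruiterId: string;
  aiSuggestedAction: "advance" | "reject" | "hold";
  humanAction: "advance" | "reject" | "hold";
  aiScore: number;
  reason: string; // the recruiter's stated reasoning, always required
  loggedAt: Date;
};

const overrideLog: OverrideRecord[] = [];

function logOverride(entry: Omit<OverrideRecord, "loggedAt">): void {
  if (entry.aiSuggestedAction === entry.humanAction) return; // agreement, not an override
  if (!entry.reason.trim()) {
    throw new Error("Overrides require a stated reason for the audit trail");
  }
  // Each record shows that a human disagreed with the AI and why: evidence of
  // meaningful oversight, and a feedback signal for improving AI accuracy.
  overrideLog.push({ ...entry, loggedAt: new Date() });
}
```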
When Human Override Matters Most
Not every AI recommendation needs deep scrutiny. Experienced recruiters learn when AI recommendations are reliable and when they should be questioned. Key scenarios where human override is most critical:
- Career changers. AI scores based on keyword matching often undervalue candidates transitioning from related fields who bring transferable skills that the AI does not recognise.
- Non-traditional backgrounds. Candidates with military service, gap years, freelance experience, or non-linear career paths are frequently scored low by AI that expects conventional career progression.
- Overqualified candidates. AI may flag candidates as a poor match because they exceed the job requirements, when in practice they represent excellent value.
- Local knowledge. AI may not recognise that a particular qualification, certification, or employer is especially relevant in a specific market or industry vertical.
- Accommodation candidates. Candidates with disabilities or specific circumstances may present their qualifications differently, leading to AI scoring that does not reflect their actual capability.
The Automation Bias Problem and How to Solve It
Automation bias is the tendency for humans to over-rely on automated system outputs, even when those outputs are incorrect. In recruitment, this manifests as recruiters uncritically accepting AI scores and recommendations without applying their own judgment — which undermines the entire purpose of human-in-the-loop design.
Research by Skitka et al. (2000) and more recent studies in AI-assisted decision-making show that automation bias is most pronounced when:
- The human is under time pressure (common in high-volume recruitment)
- The AI output is presented as a single number without explanation
- The human does not understand how the AI works
- The AI has been consistently accurate, creating a false sense of infallibility
Countermeasures that work:
- Require engagement with explanations. Present AI scores with explanations that the recruiter must read before confirming. Do not allow batch confirmation without review.
- Track override rates. If a recruiter overrides AI recommendations less than 5% of the time, investigate whether they are rubber-stamping rather than genuinely reviewing.
- Provide disagreement prompts. Periodically present AI recommendations that are deliberately borderline and ask the recruiter to explain their decision, building the habit of critical evaluation.
- Train on AI limitations. Ensure recruiters understand what the AI can and cannot assess, so they know when to trust it and when to apply their own judgment.
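The override-rate countermeasure in particular is straightforward to operationalise. A sketch under assumed data shapes, using the 5% rule of thumb above as a configurable floor:

```typescript
// Sketch: surfacing possible rubber-stamping from decision history.
type DecisionRecord = { recruiterId: string; overrodeAI: boolean };

function overrideRate(decisions: DecisionRecord[]): number {
  if (decisions.length === 0) return 0;
  return decisions.filter((d) => d.overrodeAI).length / decisions.length;
}

function flagPossibleRubberStamping(
  decisions: DecisionRecord[],
  floor = 0.05,   // the 5% rule of thumb; tune per team
  minSample = 50  // avoid flagging on too little data
): string[] {
  const byRecruiter = new Map<string, DecisionRecord[]>();
  for (const d of decisions) {
    const list = byRecruiter.get(d.recruiterId) ?? [];
    list.push(d);
    byRecruiter.set(d.recruiterId, list);
  }
  const flagged: string[] = [];
  for (const [recruiterId, ds] of byRecruiter) {
    // A low override rate is a prompt to investigate, not proof of rubber-
    // stamping: the recruiter may simply agree with a well-calibrated AI.
    if (ds.length >= minSample && overrideRate(ds) < floor) flagged.push(recruiterId);
  }
  return flagged;
}
```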
How Treegarden Implements Human-in-the-Loop AI
Treegarden's AI recruitment features are built on a human-in-the-loop architecture that is aligned with EU AI Act human oversight requirements:
- Suggest mode as default. All AI scoring and pipeline recommendations require explicit human confirmation. This is the default configuration and cannot be accidentally bypassed.
- Dedicated review queue. AI-scored candidates appear in a review queue where the recruiter sees the score, the explanation, and the recommended action side by side. The recruiter confirms, modifies, or overrides each recommendation.
- Score explanation at every touchpoint. AI scores are never presented as bare numbers. Every score is accompanied by a clickable explanation showing strengths, weaknesses, and factor contributions.
- Override logging and analytics. All human overrides are logged with reasons and timestamps. Override analytics help teams identify where AI accuracy can be improved and demonstrate human oversight for compliance audits.
- AI Proposals column. Treegarden's Kanban board includes a dedicated AI Proposals column where AI-recommended pipeline movements queue for human review. No candidate leaves this column without a human decision.
- Emergency halt. Administrators can disable AI scoring system-wide with a single toggle, meeting Article 14's requirement for the ability to interrupt or halt the AI system.
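Architecturally, the emergency-halt pattern is simple; the important property is that the switch is checked before every AI action rather than cleaned up after. A generic sketch of the pattern, not Treegarden's actual code:

```typescript
// Generic "stop button" sketch illustrating the Article 14 halt requirement.
let aiScoringEnabled = true; // in practice a persisted, admin-controlled flag

function haltAIScoring(): void {
  aiScoringEnabled = false; // applies to every subsequent scoring call
}

function scoreCandidate(candidateId: string): { candidateId: string; score: number } | null {
  // Every AI entry point checks the switch first, so a halt takes effect
  // immediately instead of after queued work drains.
  if (!aiScoringEnabled) return null;
  // ... model call would go here; stubbed for the sketch ...
  return { candidateId, score: 0 };
}
```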
Proactively addressing EU AI Act obligations
Treegarden's human-in-the-loop architecture was designed with EU AI Act compliance in mind from the beginning. Rather than retrofitting human oversight onto an automated system, Treegarden built suggest mode as the foundational workflow and added AI recommendations on top. This approach ensures that human judgment remains central to every hiring decision while AI handles the data-intensive preparation work. Request a demo to see the human-in-the-loop workflow in action.
FAQ
Can we use AI to automatically reject candidates who don't meet minimum requirements?
Knockout screening based on objective, binary criteria (e.g., “Do you have the legal right to work in this country?”) is generally considered less risky than scoring-based rejection, because the criteria are transparent and verifiable. However, even knockout rejection should include a mechanism for candidates to challenge incorrect determinations. Under GDPR Article 22, best practice is to have a human review automated rejections at least on a sample basis, and to inform candidates that automated screening was used, with an option to request human review.
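As an illustration of that lower-risk pattern, here is a hedged sketch, with hypothetical names, of a binary knockout combined with sampled human review of automated rejections:

```typescript
// Sketch: binary knockout criterion plus sampled human spot-checks.
type KnockoutResult = {
  candidateId: string;
  rejected: boolean;
  criterion: string;           // the transparent rule that was applied
  humanReviewSampled: boolean; // true if routed to a recruiter for spot-check
};

function knockoutRightToWork(
  candidateId: string,
  hasRightToWork: boolean,
  sampleRate = 0.1 // share of automated rejections sent for human review
): KnockoutResult {
  const rejected = !hasRightToWork;
  return {
    candidateId,
    rejected,
    criterion: "legal right to work in this country",
    // Sampled review catches incorrect determinations, per the Art. 22
    // best practice described above; candidates can also request review.
    humanReviewSampled: rejected && Math.random() < sampleRate,
  };
}
```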
Is batch-confirming AI recommendations considered meaningful human oversight?
It depends on the quality of review. If a recruiter reviews the AI's explanations for a batch of candidates, understands the reasoning, and then confirms the batch because they agree with the recommendations, this constitutes meaningful oversight. If a recruiter clicks “confirm all” without reading the explanations, this is a rubber stamp and does not satisfy GDPR Article 22 or EU AI Act Article 14. The safest approach is to require individual confirmation with evidence that the explanation was viewed.
What is the difference between human-in-the-loop and human-on-the-loop?
Human-in-the-loop means the human makes or confirms each decision before it takes effect. Human-on-the-loop means the AI acts autonomously and the human monitors outcomes, intervening only when problems are detected. For recruitment AI classified as high-risk under the EU AI Act, human-in-the-loop is the required standard. Human-on-the-loop monitoring does not satisfy Article 14's requirements because by the time a human detects a problem, affected candidates have already been subjected to the automated decision.
Human-in-the-loop recruitment is not a constraint on AI efficiency — it is what makes AI recruitment both legally compliant and practically effective. The best AI recruitment tools amplify human judgment rather than replacing it, and the organisations that implement HITL workflows now will be ahead of the compliance curve when the EU AI Act takes full effect in August 2026. Treegarden is designed with human oversight at its core. Request a free demo to see how suggest mode and the review queue work in practice.