The auto-rejection problem most employers ignore

Many applicant tracking systems offer a feature that sounds efficient on its face: set a minimum AI score threshold, and any candidate who falls below it is automatically rejected. The recruiter never sees their application. The candidate receives a form rejection email. The pipeline stays clean.

The problem is that this feature, in most implementations, violates European data protection law. Specifically, it conflicts with GDPR Article 22, which addresses automated individual decision-making. And the consequences of getting this wrong are not theoretical — they include regulatory fines, legal challenges from rejected candidates, and reputational damage that undermines employer branding.

Despite this, auto-rejection remains a standard feature in many ATS platforms. Some vendors market it as a productivity benefit without mentioning the legal implications. Others bury the compliance risk in documentation that recruiters never read. The result is that thousands of companies are unknowingly operating automated rejection systems that put them at legal risk every time a candidate applies.

What GDPR Article 22 actually says

GDPR Article 22(1) states: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

The key phrase is "significantly affects." A job rejection unquestionably qualifies: it determines access to employment, income and career progression. When an ATS automatically rejects a candidate based on an AI score without any human reviewing that decision, the candidate has been subject to a decision based solely on automated processing that significantly affects them.

Article 22(2) provides three exceptions where automated decisions are permitted: when necessary for a contract, when authorised by law, or when based on explicit consent. In practice, none of these exceptions provide a reliable defence for automated rejection in recruitment:

Contract necessity is difficult to argue because rejecting a candidate is not necessary for performing a contract — it is a pre-contractual decision. The candidate has no contract with the employer yet.

Legal authorisation would require specific national legislation permitting automated rejection in hiring, which most EU member states have not enacted.

Explicit consent is problematic because consent in employment contexts is generally considered not freely given, as the candidate is in a position of dependency. A candidate who refuses to consent to automated processing risks being excluded from the application entirely.

The rubber-stamp problem

Regulators have been clear that having a human nominally "approve" automated decisions does not satisfy Article 22. The human must have genuine authority, competence and willingness to overturn the automated recommendation. A workflow where a recruiter clicks "approve all" on a batch of AI rejections is not meaningful human oversight.

The advance-only approach: how compliant AI works

The alternative to auto-rejection is an advance-only model. Instead of removing candidates from the process, the AI identifies strong candidates and surfaces them for recruiter attention. Every candidate remains visible in the system regardless of their score. The AI helps the recruiter find the best candidates faster; it does not eliminate candidates on the recruiter's behalf.

This is how Treegarden's Edera AI operates. When a recruiter initiates candidate scoring for a job, the AI analyses each applicant's CV and assigns a score from 0 to 100. Candidates with strong scores are surfaced — they can be automatically placed in a dedicated "AI Proposals" column on the Kanban board. But candidates with lower scores are not rejected, hidden or removed. They remain on the board, visible to the recruiter, available for review.

The advance-only model satisfies GDPR Article 22 because the automated processing does not produce the decision that affects the candidate. The AI produces a recommendation. The human recruiter makes the decision. The candidate is affected by a human decision, not an automated one.
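The advance-only flow can be sketched in a few lines. This is an illustrative model only, assuming a 0-100 score and a Kanban-style board as described above; the names `SURFACE_THRESHOLD`, `Candidate` and `apply_ai_scores` are hypothetical, not Treegarden's actual API.

```python
from dataclasses import dataclass

SURFACE_THRESHOLD = 75  # assumed cut-off for surfacing; scores run 0-100


@dataclass
class Candidate:
    name: str
    score: int
    column: str = "Applied"  # Kanban column; every candidate starts visible


def apply_ai_scores(candidates: list[Candidate]) -> list[Candidate]:
    """Surface strong matches; never reject, hide or remove anyone."""
    for c in candidates:
        if c.score >= SURFACE_THRESHOLD:
            c.column = "AI Proposals"  # surfaced for recruiter attention
        # else: the candidate stays exactly where they were -- there is
        # no rejection branch anywhere in this function
    return candidates


pool = apply_ai_scores([Candidate("A", 91), Candidate("B", 40)])
assert len(pool) == 2                     # nobody was removed from the pool
assert pool[0].column == "AI Proposals"   # strong match surfaced
assert pool[1].column == "Applied"        # low score, still fully visible
```

The point of the sketch is structural: the code has no path that deletes or hides a candidate, so the only decision that can affect an applicant is the one the recruiter makes.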

The legal risks of automated rejection in recruitment are not speculative. They fall into three categories:

Regulatory enforcement. Data protection authorities across Europe have the power to investigate and fine organisations for GDPR violations. Fines under Article 83 can reach 20 million euros or 4% of global annual turnover, whichever is higher. While most enforcement actions have not yet focused specifically on recruitment AI, regulators are increasingly scrutinising automated decision-making across all sectors.
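The "whichever is higher" rule in Article 83 means the 4% branch only dominates once global annual turnover exceeds 500 million euros, as a quick calculation shows:

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Article 83 ceiling: EUR 20 million or 4% of global annual
    turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)


# Below EUR 500M turnover, the fixed EUR 20M ceiling applies:
assert gdpr_max_fine(100_000_000) == 20_000_000.0
# Above it, the 4% branch takes over -- EUR 1B turnover gives EUR 40M:
assert gdpr_max_fine(1_000_000_000) == 40_000_000.0
```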

Individual complaints. Any rejected candidate can file a complaint with their national data protection authority. If your ATS auto-rejected them without human review, you will need to demonstrate that the decision was not solely automated. If you cannot, the authority may require you to re-process the application with human oversight, pay compensation, or both.

Litigation risk. Candidates can bring civil claims under GDPR Article 82 for material or non-material damage resulting from a GDPR violation. A candidate who was automatically rejected from a role they were qualified for has a plausible claim for non-material damage. Class action mechanisms in some jurisdictions could amplify this risk if an ATS automatically rejected large numbers of candidates.

Beyond GDPR: the global trend toward AI regulation in hiring

GDPR is not the only regulatory framework restricting automated decision-making in recruitment. The trend is global and accelerating:

EU AI Act classifies AI systems used in employment and recruitment as "high-risk," requiring conformity assessments, transparency obligations and human oversight requirements.

New York City Local Law 144 requires employers using automated employment decision tools to conduct annual bias audits and provide candidates with notice that AI is being used in the hiring process.

Illinois AI Video Interview Act requires employers to notify candidates when AI is used to analyse video interviews and obtain consent before using such tools.

Colorado AI Act (effective 2026) requires deployers of high-risk AI systems in employment to conduct impact assessments and provide notice to affected individuals.

The regulatory direction is clear: automated decision-making in hiring will face increasing scrutiny and restriction. Building your recruitment process on an auto-rejection model means building on a foundation that regulators are actively working to restrict.

GDPR-compliant AI in Treegarden

Treegarden's Edera AI uses an advance-only model: it scores candidates and surfaces top matches, but never auto-rejects. All scoring is user-initiated. Candidate data is processed using open-source AI (Ollama) with no data sent to third-party providers. Start recruiting with compliant AI.

Data privacy in AI scoring: where does candidate data go?

GDPR compliance extends beyond Article 22. The processing of candidate data through AI systems raises questions about data minimisation, purpose limitation, and sub-processor management that many ATS vendors handle poorly.

When an ATS sends candidate CV data to OpenAI's API or Google's Gemini for scoring, that data is being transferred to a third-party processor. This requires a data processing agreement, potentially a data transfer impact assessment (for non-EU transfers), and disclosure in the privacy notice. The candidate's data may be used to train the AI model, creating a secondary processing purpose that was not disclosed or consented to.

Treegarden avoids this problem entirely by using Ollama, an open-source AI framework that processes data within Treegarden's own infrastructure. No candidate data is sent to external AI providers. There are no additional sub-processors to manage, no cross-border transfer concerns, and no risk of candidate data being used for model training.
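To make the architectural difference concrete, here is a minimal sketch of what a local-only scoring request might look like. Ollama exposes an HTTP API on the host it runs on (commonly `POST /api/generate` on port 11434); the model name and prompt wording below are illustrative assumptions, not Treegarden's actual configuration.

```python
import json

# The endpoint resolves to the local machine: candidate data never
# leaves the operator's own infrastructure.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_scoring_request(cv_text: str, job_description: str) -> bytes:
    """Assemble a request body for a hypothetical local scoring call."""
    payload = {
        "model": "llama3",  # assumed local model name
        "prompt": (
            "Score this CV from 0 to 100 against the job description.\n"
            f"Job: {job_description}\nCV: {cv_text}"
        ),
        "stream": False,
    }
    return json.dumps(payload).encode("utf-8")


body = build_scoring_request("10 years of Python", "Senior backend engineer")
decoded = json.loads(body)
assert "10 years of Python" in decoded["prompt"]
# No third-party hostname appears anywhere in the request:
assert OLLAMA_URL.startswith("http://localhost")
```

Because the only endpoint involved is on localhost, there is no sub-processor to contract with and no cross-border transfer to assess: the GDPR obligations that attach to sending CV data to an external API simply never arise.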

Practical compliance checklist for AI recruiting

If you are evaluating ATS platforms or reviewing your current AI recruitment setup for GDPR compliance, here are the specific questions to ask:

1. Does the system auto-reject candidates? If yes, this is a significant GDPR risk. Look for systems that use advance-only models instead.

2. Is scoring user-initiated or automatic? Systems that automatically score candidates on application create a processing activity that needs to be disclosed and justified. User-initiated scoring gives the recruiter control over when AI is applied.

3. Where is candidate data processed? If the ATS sends data to third-party AI providers (OpenAI, Google, Anthropic), this creates additional GDPR obligations around sub-processors and potentially international data transfers.

4. Can candidates request human review? Under Article 22(3), data subjects have the right to obtain human intervention, express their point of view, and contest automated decisions. Your process must accommodate this.

5. Is there an audit trail? You should be able to demonstrate, for any individual candidate, what processing occurred, what score was assigned, and that a human made the ultimate selection decision.

6. Have you updated your privacy notice? If you use AI in recruitment, your candidate privacy notice must disclose this, including the logic involved and the significance and consequences of the processing.
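The audit-trail requirement in item 5 can be captured with a record like the following. The field names are illustrative assumptions; the essential design choice is that the AI's output and the human's decision are stored as separate fields, so you can demonstrate for any candidate that a named person, not the system, made the final call.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ScreeningAuditEntry:
    """One candidate's screening record (hypothetical schema)."""
    candidate_id: str
    ai_score: int        # what the automated step produced
    ai_action: str       # e.g. "surfaced_for_review" -- never "rejected"
    human_decision: str  # the decision that actually affects the candidate
    decided_by: str      # a named recruiter, not "system"
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


entry = ScreeningAuditEntry(
    candidate_id="c-1042",
    ai_score=62,
    ai_action="surfaced_for_review",
    human_decision="rejected_after_interview",
    decided_by="jane.recruiter",
)
# The trail shows the AI only recommended; a human made the adverse decision.
assert entry.ai_action != "rejected"
assert entry.decided_by != "system"
```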

Frequently asked questions

What does GDPR Article 22 say about AI in recruitment?

Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or significantly affect them. A job rejection clearly qualifies. An ATS that automatically rejects candidates based on an AI score, without meaningful human review, likely violates Article 22.

What is the difference between advance-only and auto-reject AI?

Advance-only AI identifies and surfaces strong candidates for recruiter review but never removes anyone from consideration. All candidates remain visible regardless of score. Auto-reject AI automatically eliminates candidates below a score threshold without human review. Advance-only is the GDPR-compliant approach.

Can candidates challenge an automated rejection under GDPR?

Yes. Under Article 22(3), candidates have the right to obtain human intervention, express their point of view, and contest a decision made by automated processing. If your ATS auto-rejected a candidate without human review, they can challenge that decision.

Is it enough to have a human click "approve" on an AI rejection?

Not necessarily. Regulators have indicated that rubber-stamping automated decisions does not constitute meaningful human intervention. The human reviewer must have genuine authority, competence and willingness to change the decision.