Every time an AI recruitment tool scores a candidate and that score influences whether they are shortlisted or rejected, GDPR Article 22 is in play. This provision — often overlooked in the rush to adopt AI screening tools — gives individuals a fundamental right: the right not to have decisions with significant personal effects made by machines alone.

For HR teams, the implications are practical and immediate. If your ATS uses AI to screen, score, or rank candidates, and if those AI outputs influence who gets rejected without meaningful human review, you may be operating outside the law. This guide explains Article 22 in plain language, identifies the common scenarios where organisations cross the line, and provides a practical framework for staying compliant while still benefiting from AI recruitment tools.

AI disclosure

This article was written by the Treegarden editorial team with AI assistance for research and drafting. Legal references have been verified against the official GDPR text and European Data Protection Board guidelines. This is not legal advice — consult qualified legal counsel for your specific compliance requirements.

GDPR Article 22 in Plain Language

Article 22(1) of the General Data Protection Regulation states:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Breaking this down for recruitment:

  • “A decision” — this includes the decision to reject a candidate, to not shortlist them, or to rank them so low they are never seen by a human recruiter.
  • “Based solely on automated processing” — if the decision is made entirely by the AI system without meaningful human involvement, it is “solely automated.”
  • “Including profiling” — profiling means any automated processing of personal data to evaluate certain aspects of a person. AI candidate scoring is profiling.
  • “Legal effects or similarly significantly affects” — being rejected from a job opportunity “similarly significantly affects” an individual. There is no serious argument otherwise.

The practical conclusion: if your AI scores candidates and that score is the sole or primary basis for rejection — without a human genuinely reviewing and approving the decision — you are in breach of Article 22.

When Does AI Screening Become Solely Automated?

The line between AI-assisted and solely automated decisions is not always obvious. The European Data Protection Board (EDPB) has provided guidance that helps clarify the distinction:

| Scenario | Solely automated? | Rationale |
| --- | --- | --- |
| AI scores candidates; recruiter reviews all applications and makes independent shortlisting decisions | No | Human makes the decision with AI as one input among several |
| AI scores candidates; recruiter only reviews candidates above a score threshold | Likely yes for candidates below the threshold | Candidates below the threshold are effectively rejected by the AI alone |
| AI auto-rejects candidates below a score threshold; recruiter reviews the remainder | Yes for auto-rejected candidates | Rejection is entirely automated; no human involved |
| AI scores and ranks candidates; recruiter clicks “approve all” without reading explanations | Likely yes | Human involvement is nominal, not meaningful |
| AI flags candidates as “not recommended”; recruiter reviews each flagged candidate before rejecting | No | AI recommends; human decides after genuine review |
| AI knockout screening rejects candidates who answer “no” to eligibility questions | Grey area | Binary eligibility criteria are less problematic, but candidates should be able to challenge incorrect determinations |

The critical factor is whether a human genuinely exercises judgment. The EDPB has stated that “the controller cannot avoid the Article 22 provisions by fabricating human involvement” — a human who merely rubber-stamps AI decisions without independent consideration does not satisfy the requirement.
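The scenarios above amount to a decision procedure, which can be sketched in code. This is an illustrative classifier only, not an authoritative legal test; the workflow fields (`auto_reject_below_threshold`, `human_reviews`, `review_is_meaningful`) are hypothetical names invented for this sketch.

```python
# Illustrative sketch of the table above: given a description of a
# screening workflow, report which candidates face a solely automated
# decision. Field names are hypothetical, not from any real ATS.
def solely_automated_cohort(workflow):
    if workflow["auto_reject_below_threshold"]:
        # AI rejects directly; no human sees those applications.
        return "candidates below threshold"
    if workflow["human_reviews"] == "above_threshold_only":
        # Low-ranked candidates are never seen by a human.
        return "candidates below threshold"
    if workflow["human_reviews"] == "all" and workflow["review_is_meaningful"]:
        # Every application gets genuine human consideration.
        return "none"
    # Nominal or rubber-stamp review does not satisfy Article 22.
    return "likely all AI-scored candidates"
```

Running this against your own workflow configuration is a quick first-pass check, not a substitute for a proper DPIA.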

The Three Exceptions That Allow Automated Decisions

Article 22(2) permits solely automated decisions in three circumstances:

  1. Contractual necessity. The decision is necessary for entering into or performing a contract. This exception is narrow in recruitment — automated screening is rarely “necessary” for the employment contract. The fact that it is more efficient does not make it necessary.
  2. Legal authorisation. EU or Member State law authorises the automated decision-making. No EU Member State has specifically authorised automated rejection in recruitment as of 2026.
  3. Explicit consent. The candidate has given explicit consent to automated decision-making. This is the most practically available exception, but it comes with significant conditions: the consent must be freely given, specific, informed, and unambiguous. In a recruitment context, where the candidate's interest in the job creates an inherent power imbalance, data protection authorities have expressed scepticism about whether consent can truly be “freely given.”

Even when an exception applies, Article 22(3) requires the data controller to “implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” Translation: even with consent, you must offer candidates the option to request human review.

Candidate Rights Under Article 22

Candidates subject to automated decision-making have several rights that HR teams must be prepared to honour:

  • Right to be informed. Under Articles 13 and 14, candidates must be told that automated decision-making is taking place, the logic involved, and the significance and envisaged consequences.
  • Right to object. Under Article 21, candidates can object to profiling, including AI scoring in recruitment.
  • Right to human review. Under Article 22(3), candidates can request that a human review any automated decision.
  • Right to express their point of view. Candidates must be given the opportunity to provide additional information or context that the AI may have missed.
  • Right to contest the decision. Candidates can challenge the outcome of an automated decision and request that it be reconsidered.

These rights create operational requirements for HR teams. You need a process for handling human review requests, a mechanism for candidates to provide additional information, and a procedure for reconsidering AI-influenced decisions. If a candidate exercises these rights and you have no process in place, you are in breach of GDPR.

What Meaningful Human Involvement Actually Requires

The concept of “meaningful human involvement” is the key to GDPR Article 22 compliance. The EDPB's guidelines establish that meaningful involvement requires:

  • Authority. The human must have the authority to deviate from the AI's recommendation. If company policy requires following the AI score, human review is not meaningful.
  • Competence. The human must be competent to understand the AI's output, the candidate's profile, and the job requirements well enough to form an independent judgment.
  • Genuine consideration. The human must actually consider the candidate's application and the AI's recommendation before making a decision. Spending 2 seconds per application does not constitute genuine consideration.
  • Ability to override. The human must be able to override the AI's recommendation in practice, not just in theory. If the system makes it difficult or time-consuming to override, this undermines meaningful involvement.

A practical test for meaningful human involvement

Ask yourself: “In the last 100 AI-scored candidates, how many times did a recruiter override the AI's recommendation?” If the answer is zero or near-zero, either the AI is perfect (unlikely) or the human review is not meaningful. A healthy override rate — typically 10 to 20% for well-calibrated AI — indicates that recruiters are genuinely exercising judgment. An override rate below 5% warrants investigation into whether human review is substantive.
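If your ATS keeps an audit log of AI recommendations and final human decisions, this test can be automated. The sketch below assumes a hypothetical log format with `ai_recommendation` and `human_decision` fields; the 5% threshold comes from the heuristic above.

```python
# Rough audit sketch: compute the override rate from an ATS audit log
# and flag suspiciously low values. Record fields are hypothetical.
def override_rate(decisions):
    """Fraction of decisions where the recruiter deviated from the AI."""
    if not decisions:
        return 0.0
    overrides = sum(
        1 for d in decisions if d["human_decision"] != d["ai_recommendation"]
    )
    return overrides / len(decisions)

# In practice these would be the last 100 AI-scored candidates.
sample = [
    {"ai_recommendation": "reject", "human_decision": "reject"},
    {"ai_recommendation": "reject", "human_decision": "advance"},  # override
    {"ai_recommendation": "advance", "human_decision": "advance"},
]

rate = override_rate(sample)
if rate < 0.05:
    print(f"Override rate {rate:.0%}: investigate whether review is substantive")
else:
    print(f"Override rate {rate:.0%}")
```

Running this periodically, per recruiter as well as in aggregate, helps detect rubber-stamping before a regulator does.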

Common Ways Organisations Breach Article 22

Based on enforcement actions and regulatory guidance, these are the most common ways organisations inadvertently breach Article 22 with AI recruitment tools:

  1. Score threshold auto-rejection. Setting a minimum AI score and automatically rejecting all candidates below it without any human reviewing the rejected applications. This is the clearest example of solely automated decision-making.
  2. De facto automation through information overload. Presenting recruiters with 500 AI-scored candidates and expecting them to review all of them. In practice, the recruiter reviews the top 20 and ignores the rest — meaning the AI effectively decided the fate of the bottom 480.
  3. Nominal review. Having a recruiter spend a few seconds clicking “approve” on each AI recommendation without actually reading the candidate's application or the AI's explanation.
  4. No opt-out mechanism. Failing to inform candidates that AI is being used or to offer an alternative process for candidates who object to automated profiling.
  5. No human review process. Having no procedure in place for candidates who request human review of an AI-influenced decision.
  6. Incomplete disclosure. Informing candidates that “we use technology to assist in our recruitment process” without providing meaningful information about the logic involved, the significance, and the consequences.

How AI Candidate Scoring Interacts with Article 22

AI candidate scoring occupies a nuanced position under Article 22. The scoring itself is profiling, but scoring alone does not necessarily constitute a “decision.” The critical question is what happens after the score is generated:

  • If the score is one input among several, and a human makes the final decision: Not solely automated. Article 22 is satisfied, provided the human review is meaningful.
  • If the score determines ranking and only top-ranked candidates are reviewed: Candidates ranked too low for human review are effectively subject to solely automated decision-making.
  • If the score directly triggers rejection: Solely automated. Clear Article 22 breach.
  • If the score is used to prioritise but all candidates eventually receive human review: Not solely automated. The score assists workflow efficiency without replacing human judgment.

The practical implication: AI scoring is compliant when it informs and prioritises human decision-making, but non-compliant when it replaces it. The architectural design of your ATS determines which side of the line you fall on.
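One way to enforce that line architecturally is a gate that refuses to finalise any rejection without a recorded human review. This is a minimal sketch under assumed data structures; the record fields (`human_review`, `reviewer_id`, `application_opened`) are invented for illustration, not drawn from any real ATS.

```python
# Illustrative gate: rejection is only allowed when a named human has
# recorded a genuine review, regardless of the AI score. All field
# names are hypothetical.
class ComplianceError(Exception):
    """Raised when a rejection would be solely automated."""

def reject_candidate(candidate):
    review = candidate.get("human_review") or {}
    if not review.get("reviewer_id"):
        raise ComplianceError("no human reviewer recorded (Article 22)")
    if not review.get("application_opened"):
        raise ComplianceError("reviewer never opened the application")
    candidate["status"] = "rejected"
    return candidate
```

The design point is that the safeguard lives in the system, not in policy: even a misconfigured score threshold cannot trigger a rejection on its own.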

Building a GDPR-Compliant AI Screening Workflow

A GDPR Article 22-compliant AI screening workflow includes the following elements:

  1. Disclosure at application. Inform candidates in your privacy notice and application form that AI tools are used in the screening process. Explain what the AI evaluates and how it influences decisions.
  2. Opt-out mechanism. Provide candidates with a way to opt out of AI profiling and request a fully human review of their application.
  3. AI scoring with explanation. Use AI that provides explainable scores, so recruiters can understand and verify the AI's assessment.
  4. Human review of all rejections. Ensure that no candidate is rejected without a human recruiter reviewing the AI's recommendation and the candidate's application.
  5. Human review request process. Maintain a documented process for handling candidate requests for human review of AI-influenced decisions. Respond within a reasonable timeframe (GDPR requires response within one month).
  6. Override capability. Ensure recruiters can easily override AI recommendations at any point in the process.
  7. Audit logging. Log all AI scores, human decisions, overrides, and candidate requests for compliance documentation.
  8. Regular DPIA updates. Include AI recruitment tools in your Data Protection Impact Assessment (DPIA) and update it when tools change or when new risks are identified.
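Elements 5 and 7 above can be combined into a single auditable record per review request. The helper below is a hypothetical sketch; it approximates GDPR's "one month" as 30 days, which your counsel may want to refine.

```python
from datetime import date, timedelta

# Hypothetical helper: create an auditable record for a candidate's
# human-review request, with the response deadline tracked.
# 30 days approximates GDPR Article 12's "one month" response window.
def log_review_request(candidate_id, received):
    return {
        "candidate_id": candidate_id,
        "received": received.isoformat(),
        "respond_by": (received + timedelta(days=30)).isoformat(),
        "status": "pending_human_review",
    }

entry = log_review_request("cand-001", date(2026, 1, 15))
```

A daily job that flags entries approaching `respond_by` turns the legal deadline into an operational alert rather than a discovery during an audit.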

How Treegarden Addresses GDPR Article 22 Requirements

Treegarden's GDPR Article 22 compliance features

Treegarden has implemented technical safeguards aligned with GDPR Article 22 requirements. The platform includes an opt-out mechanism that allows candidates to request exclusion from AI profiling. AI scoring operates in suggest mode by default, with all pipeline movements requiring human confirmation. Score explanations are provided for every AI assessment, meeting the “meaningful information about the logic involved” standard. A human review workflow handles candidate requests for reconsideration, with audit logging that documents every step. All AI decisions, human overrides, and candidate requests are logged with timestamps for compliance documentation.

Treegarden is designed with GDPR Article 22 requirements in mind, providing the human oversight, transparency, and candidate rights mechanisms that compliant AI recruitment requires. The platform's architecture ensures that AI enhances rather than replaces human judgment in hiring decisions.

FAQ

Does GDPR Article 22 apply to AI screening of candidates outside the EU?

GDPR applies when you process personal data of individuals who are in the EU, regardless of where your organisation is based. If you use AI screening for roles where EU-based candidates apply, Article 22 applies to those candidates. For candidates located outside the EU, Article 22 does not directly apply, but similar protections may exist under local law (e.g., UK GDPR, Brazil's LGPD, California's CCPA).

Can we rely on candidate consent to justify automated screening?

Technically, explicit consent is one of the three exceptions under Article 22(2). However, data protection authorities have expressed significant scepticism about whether consent in a recruitment context can be “freely given” given the inherent power imbalance — the candidate wants the job, which creates pressure to consent. If you rely on consent, ensure it is genuinely optional (offer a human-review alternative), specific to AI screening (not bundled with general data processing consent), and clearly informed (explain exactly what the AI does and how it affects decisions). Even with consent, you must offer the right to human review.

What should we do if a candidate requests human review of an AI screening decision?

You should have a documented process ready. When a request is received: (1) acknowledge the request promptly, (2) assign a qualified recruiter to review the candidate's application independently of the AI score, (3) make a new determination based on human review of the candidate's full application, (4) communicate the outcome to the candidate within one month (GDPR Article 12 timeline), and (5) log the entire process for compliance documentation. The human review must be genuine — not simply confirming the AI's original assessment without independent evaluation.

GDPR Article 22 is not a theoretical compliance concern — it is an enforceable right that candidates are increasingly aware of, and that data protection authorities are actively enforcing. Organisations using AI in recruitment need to ensure their workflows include meaningful human oversight, candidate transparency, and accessible opt-out mechanisms. Treegarden provides these capabilities as built-in features, with human-in-the-loop architecture and candidate rights workflows designed with GDPR Article 22 requirements in mind. Request a free demo to see how Treegarden supports compliant AI screening.