What GDPR Article 22 actually says
Article 22 of the General Data Protection Regulation establishes a specific right for data subjects: the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal effects concerning them or similarly significantly affects them. The operative word is "solely." Where a human being is genuinely involved in reviewing and making the final decision, Article 22 is not triggered.
The article also establishes limited exceptions to this restriction. Solely automated decisions are permitted when they are necessary for entering into or performing a contract with the data subject; when authorised by EU or member state law with appropriate safeguards; or when based on the data subject's explicit consent. Even where one of these exceptions applies, the data subject retains the right to obtain human intervention, to express their point of view and to contest the decision.
The scope of "significantly affects" is deliberately broad under GDPR and the accompanying guidance from the European Data Protection Board. Employment decisions — including decisions about whether a candidate progresses in a hiring process, receives an interview invitation or is rejected — are explicitly cited in EDPB guidance as falling within this scope. The employment relationship is one of the most consequential relationships in a person's life, and decisions that affect access to employment are therefore considered to produce effects that are at least similarly significant even if not strictly "legal" in the technical sense.
This framing is important because it means organisations cannot avoid Article 22's reach by arguing that a hiring rejection is merely a business decision rather than a legal one. Data protection authorities across the EU have consistently treated employment screening decisions as falling within Article 22's protective scope when made solely by automated systems.
When Article 22 Is Triggered
Article 22 applies to decisions based "solely" on automated processing that produce legal effects or similarly significant effects. An AI match score presented to a recruiter who makes the final decision does NOT trigger Article 22 — the human is the decision-maker. Article 22 is triggered when the automated system itself determines the outcome: automatic rejection below a score threshold, automatic shortlisting above one, or any pipeline movement executed without a human reviewing the individual case before the action is taken.
Does Article 22 apply to your ATS?
The answer depends not on whether your ATS uses AI, but on how that AI interacts with the decision-making process. Most modern applicant tracking systems incorporate some degree of automated processing — CV parsing, match scoring, keyword filtering, stage recommendations — and none of these capabilities automatically trigger Article 22 obligations. The trigger is the decision, not the analysis.
Consider four common ATS configurations and their Article 22 status. First: an ATS that parses CVs and extracts structured data for recruiter review. No Article 22 concern — the parsing is a processing activity, not a decision. Second: an ATS that calculates an AI match score and presents it to a recruiter who then decides which candidates to shortlist. No Article 22 concern — the human reviews the score and makes the decision. Third: an ATS configured to automatically reject all candidates below a specified match score, with no human review before the rejection is communicated. Article 22 triggered — the system makes and executes the decision without human involvement. Fourth: an ATS that automatically moves candidates to a specific pipeline stage based on screening question responses, without recruiter review. Article 22 triggered for each automated advancement or rejection.
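The dividing line can be expressed as a simple predicate. The sketch below is purely illustrative (the WorkflowStep fields are hypothetical, not a real ATS schema), but it captures the test applied to the four configurations above:

```python
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    """One automated capability in a hypothetical ATS workflow (illustrative fields)."""
    name: str
    executes_decision: bool   # does the step itself reject or advance a candidate?
    human_review_first: bool  # does a recruiter review the case before the action fires?

def article_22_triggered(step: WorkflowStep) -> bool:
    # The trigger is the decision, not the analysis: a step falls within
    # Article 22 only when it executes a consequential outcome without
    # prior human review of the individual case.
    return step.executes_decision and not step.human_review_first

configs = [
    WorkflowStep("CV parsing for recruiter review", executes_decision=False, human_review_first=False),
    WorkflowStep("Match score shown to recruiter", executes_decision=False, human_review_first=True),
    WorkflowStep("Auto-reject below score threshold", executes_decision=True, human_review_first=False),
    WorkflowStep("Auto-advance on screening answers", executes_decision=True, human_review_first=False),
]
for step in configs:
    status = "triggered" if article_22_triggered(step) else "not triggered"
    print(f"{step.name}: Article 22 {status}")
```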
The practical implication is that the line runs through your configuration, not your technology stack. The same AI capabilities can be deployed in a compliant or non-compliant way depending on whether human review is built into the workflow before consequential decisions are executed. This is good news for HR teams that want to use AI — it means compliance is achievable without abandoning AI features, provided the workflow is designed correctly.
It is also worth noting that Article 22 applies independently of the size of your organisation. There is no small-business exemption. Any controller using automated processing that makes employment decisions affecting EU data subjects must comply, regardless of headcount or processing volume.
The automated versus human decision distinction
The distinction between automated processing that informs human decisions and automated processing that constitutes the decision is the conceptual heart of Article 22. Getting this distinction right in practice requires more than labelling a workflow "human-reviewed" — the human review must be genuine, meaningful and documented.
The Guidelines on Automated individual decision-making and Profiling (WP251rev.01), adopted by the Article 29 Working Party and endorsed by the EDPB, address this directly. They state that human involvement must be meaningful rather than just a token gesture: a review that consists of a recruiter clicking "approve" on every AI recommendation without reading the candidate's profile does not satisfy the requirement. The review must involve actual human consideration of the individual case.
What does genuine review look like in practice? The reviewer must have access to the relevant information — the candidate profile, the AI's assessment, the criteria against which the assessment was made — and must actually consider it. They should be capable of overriding the AI recommendation and should do so in some proportion of cases. Where a reviewer never overrides the system's recommendations, there is a strong factual basis for arguing that the review is not genuinely independent and that the decision is effectively being made by the automated system.
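Override rates are straightforward to monitor once AI recommendations and human outcomes are logged side by side. A minimal sketch, assuming each reviewed case is stored as a pair of (AI recommendation, human decision):

```python
def override_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of reviewed cases where the human diverged from the AI recommendation.

    pairs: (ai_recommendation, human_decision) per case, e.g. ("reject", "advance").
    """
    if not pairs:
        return 0.0
    return sum(ai != human for ai, human in pairs) / len(pairs)

# A rate of exactly zero over a large sample is the signal described above:
# the review step may be nominal rather than genuinely independent.
reviews = [("advance", "advance"), ("reject", "advance"), ("reject", "reject")]
print(f"Override rate: {override_rate(reviews):.0%}")  # 33%
```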
This creates an important implication for training and culture. Recruiters who understand why genuine review matters — not just for legal compliance but for decision quality — are more likely to exercise real judgement rather than defaulting to AI recommendations. The compliance requirement and the quality objective are aligned: independent human review both protects the organisation legally and produces better hiring decisions.
Human Review Workflow in Treegarden
AI scores and recommendations in Treegarden are always presented to human recruiters for review; no candidate is rejected or advanced solely by automated decision. The system surfaces AI match scores, flags and recommendations as inputs to the recruiter's decision, not as decisions themselves. Pipeline stage changes require a recruiter action — the system never moves a candidate without that deliberate human step, ensuring the decision chain is unambiguously human-led.
How to use AI in recruitment in a compliant way
Using AI in recruitment compliantly under Article 22 requires designing workflows where the AI informs and the human decides. This is not a constraint that limits the value of AI — it is a design principle that produces better outcomes both legally and practically. AI tools are excellent at processing large volumes of data quickly, identifying patterns and generating consistent assessments. Humans are better at contextual judgement, understanding nuance and taking responsibility for decisions that affect real people. Combining them in the right way leverages both.
The practical design principles for compliant AI-assisted recruitment are straightforward. First, ensure all AI outputs are presented as inputs to human decision-makers, clearly framed as recommendations, scores or analyses rather than determinations. Second, build human confirmation steps into every pipeline stage transition — an explicit recruiter action should be required to advance or reject a candidate, not an automated rule that fires on AI output alone. Third, give recruiters the tools to interrogate AI recommendations: why did this candidate score highly, which criteria contributed most, what does the AI's assessment not capture? Fourth, track override rates to identify whether human review is genuine or nominal. Fifth, document the human review steps in your processing records.
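A minimal sketch of the second principle, the explicit confirmation gate on stage transitions. The function name and fields here are hypothetical rather than any particular ATS API; the point is that a transition carrying no recruiter identity is refused rather than executed:

```python
from datetime import datetime, timezone

class HumanActionRequired(Exception):
    """Raised when a pipeline transition is attempted without a recruiter action."""

def move_candidate(candidate_id: str, to_stage: str,
                   recruiter_id: str | None, rationale: str = "") -> dict:
    # An automated rule firing on an AI score alone carries no recruiter
    # identity, so it cannot pass this gate.
    if recruiter_id is None:
        raise HumanActionRequired(
            f"Moving candidate {candidate_id} to '{to_stage}' requires recruiter confirmation"
        )
    return {
        "candidate_id": candidate_id,
        "to_stage": to_stage,
        "recruiter_id": recruiter_id,
        "rationale": rationale,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```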
It is also important to address the lawful basis question. Recruitment involves processing personal data, and AI-assisted analysis of CVs constitutes profiling. For most organisations the lawful basis will be legitimate interests under Article 6(1)(f), or processing necessary for steps taken at the candidate's request prior to entering into a contract under Article 6(1)(b); consent is rarely appropriate given the imbalance of power between candidate and employer. Whatever the basis, your privacy notice and candidate-facing communications should disclose that AI tools are used in the initial screening process, what those tools assess, and how candidates can request human review of assessments that affect them. Transparency about AI use is increasingly expected by candidates and required by emerging regulatory guidance.
AI Decision Logging
Every AI recommendation in Treegarden is logged alongside the human decision that followed it, creating a clear audit trail showing that human oversight was applied at each decision point. The log records the AI score or recommendation, the timestamp of the human action, which recruiter took the action, and whether the human decision aligned with or diverged from the AI recommendation. This audit trail supports Article 22 compliance demonstration and internal governance review.
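The shape of such a record can be kept small. The following is an illustrative sketch of the fields described above, not Treegarden's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionLogEntry:
    """Illustrative audit record pairing an AI output with the human decision that followed."""
    candidate_id: str
    ai_score: float
    ai_recommendation: str   # e.g. "advance", "reject", "review"
    recruiter_id: str
    human_decision: str
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def diverged(self) -> bool:
        """True when the recruiter departed from the AI recommendation."""
        return self.human_decision != self.ai_recommendation
```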
Candidate rights under Article 22
Even where solely automated decision-making is permitted under one of Article 22's exceptions, the data subject retains three important rights: the right to obtain human intervention, the right to express their point of view and the right to contest the decision. For recruitment organisations that permit automated decisions under the explicit consent exception (which is difficult to achieve properly in a recruitment context), these rights must be made accessible and actionable.
More broadly, Articles 13 and 14 create transparency obligations that apply to all automated processing in recruitment, not just decisions that trigger Article 22 directly. Candidates must be informed about automated processing and profiling at the point their data is collected. This means your application form or careers page privacy notice should disclose: that an ATS is used to process applications; that AI tools generate match scores or assessments; what criteria those assessments are based on; and how candidates can contact you to understand or challenge assessments that affected their application.
The right of access under Article 15 gives candidates the right to request a copy of their personal data, including any AI-generated scores or assessments. Organisations should have a process for responding to these requests that includes providing the AI-generated data in an intelligible format — not just a raw numerical score but enough information for the candidate to understand what was assessed and why the score was what it was.
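What counts as intelligible will vary by tool, but the principle is to translate stored data into plain language. A hypothetical sketch, assuming the system retains per-criterion contributions alongside the score (many systems store less, in which case the response should say so):

```python
def explain_assessment(score: float, criteria: dict[str, float]) -> str:
    """Render a stored AI assessment as candidate-readable text, not a bare number."""
    lines = [
        f"Your application received an overall match score of {score:.0f} out of 100.",
        "The score was based on the following assessed criteria:",
    ]
    for name, points in sorted(criteria.items(), key=lambda kv: -kv[1]):
        lines.append(f"  - {name}: contributed {points:.0f} points")
    lines.append("A recruiter reviewed this assessment before any decision was made.")
    return "\n".join(lines)

print(explain_assessment(72, {"Skills match": 40, "Relevant experience": 25, "Education": 7}))
```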
Practically, most rejected candidates do not exercise these rights — but the obligation to make them accessible is unconditional. A well-structured response process, documented clearly and tested before it is needed, protects the organisation and demonstrates genuine commitment to candidate rights rather than merely procedural compliance.
Documentation requirements for AI-assisted decisions
Article 30 of GDPR requires controllers to maintain a Record of Processing Activities (ROPA) that documents each processing activity, including the purposes, categories of data subjects and personal data, recipients, transfers, retention periods and security measures. AI-assisted recruitment screening is a processing activity that must appear in your ROPA.
For AI-assisted recruitment, a complete ROPA entry should document: the name and nature of the AI tool used; the categories of personal data processed by the tool (CV content, work history, education, skills, application responses); the purpose of the processing (initial screening to identify potentially suitable candidates); the lawful basis for processing; how long candidate data is retained; and the safeguards that ensure compliance with Article 22 (specifically, the human review step before any consequential decision is executed).
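Expressed as a structured record, such an entry might look like the sketch below. The field names and retention period shown are illustrative; real registers are usually maintained in dedicated tooling or spreadsheets, and your own values will differ:

```python
ropa_entry = {
    "processing_activity": "AI-assisted candidate screening",
    "purpose": "Initial screening to identify potentially suitable candidates",
    "lawful_basis": "Legitimate interests (Article 6(1)(f))",
    "data_subjects": ["job applicants"],
    "personal_data": ["CV content", "work history", "education",
                      "skills", "application responses"],
    "ai_tool": "Name and nature of the screening tool used",
    "recipients": ["internal recruitment team"],
    "retention": "6 months after the requisition closes (example only)",
    "article_22_safeguard": ("Human recruiter review before any rejection "
                             "or advancement is executed"),
}
```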
Beyond the ROPA, where AI tools involve profiling — as they typically do in matching or scoring applications — a Data Protection Impact Assessment (DPIA) is likely required under Article 35. The DPIA should assess the necessity and proportionality of the AI tool, the risks to data subjects (including the risk of discriminatory outcomes), and the measures taken to address those risks. For organisations using AI screening for the first time, completing the DPIA is the most thorough way to identify compliance gaps before they become problems.
Candidate Data Processing Record
Treegarden maintains processing records for all AI-assisted decisions, supporting GDPR Article 30 documentation requirements. The platform provides exportable logs of all AI assessments and the human decisions that followed, categorised by job and date range. These records give your data protection officer the documentation needed for ROPA entries, DPIA supporting evidence and regulatory inquiry responses — without requiring manual record-keeping by the recruiting team.
Practical safeguards to implement now
For organisations that are already using AI-assisted recruitment tools and want to ensure Article 22 compliance, the following safeguards address the most common gaps. Each is practical, implementable without replacing existing technology and addresses a specific compliance risk.
The first and most important safeguard is reviewing your ATS configuration to confirm that no automated rules result in candidate rejection or advancement without a prior human action. Check your auto-reject settings in particular — many ATS platforms offer automatic rejection below a threshold score or based on specific screening question answers. If these are enabled, they need to be disabled or reconfigured to require recruiter confirmation before the rejection is communicated.
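Where your ATS can export its automation rules in a machine-readable form, this check can be scripted. A sketch under that assumption (the rule fields are hypothetical):

```python
def find_risky_rules(rules: list[dict]) -> list[dict]:
    """Flag rules that execute a rejection or advancement with no human confirmation."""
    consequential = {"reject", "advance", "move_stage"}
    return [
        rule for rule in rules
        if rule.get("action") in consequential
        and not rule.get("requires_recruiter_confirmation", False)
    ]

rules = [
    {"name": "Auto-reject below 40", "action": "reject",
     "requires_recruiter_confirmation": False},
    {"name": "Tag strong matches", "action": "tag",
     "requires_recruiter_confirmation": False},
]
for rule in find_risky_rules(rules):
    print(f"Reconfigure: {rule['name']} executes a decision without human confirmation")
```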
The second safeguard is updating your candidate-facing privacy documentation. Your careers page, application form and candidate privacy notice should clearly disclose AI use in screening, describe what is assessed, and provide a mechanism for candidates to request human review or ask questions. This disclosure must exist before candidates submit their applications — not as an afterthought after rejection.
The third safeguard is training your recruitment team. Recruiters who understand Article 22 will naturally conduct more meaningful reviews because they understand the legal and ethical context of their role as decision-makers. Training should cover: what Article 22 requires, how the current workflow is designed to comply, what genuine review means in practice, and how to respond if a candidate asks about AI use in their application.
The fourth safeguard is establishing a process for responding to candidate data requests that includes AI-generated data. Test this process before it is needed. Identify who receives requests, who retrieves the data, and what format the response takes. Ensure you can respond within the statutory one-month period with information that is genuinely useful to the candidate rather than technical and opaque.
Document the Human Review Step
The most important compliance safeguard is ensuring the human review step is real and documented, not nominal. Train recruiters to record their rationale for accepting or overriding AI recommendations — even a brief note that captures "Progressed despite lower AI score due to specific sector experience relevant to this role" or "Declined despite high AI score — screening responses indicate misalignment with team structure" creates the evidence trail that demonstrates genuine human engagement with each candidate. This documentation is both a compliance asset and a tool for improving AI calibration over time.
Frequently asked questions about GDPR Article 22 and AI recruitment
Does GDPR Article 22 apply to CV screening tools?
It depends on how the tool is used. If an AI CV screening tool automatically rejects or advances candidates without a human reviewing the decision, Article 22 is triggered. If the tool produces scores or rankings that a human recruiter uses to make the actual decision, Article 22 is not triggered — the human remains the decision-maker. The key test is whether the outcome is determined solely by automated processing or whether a human genuinely reviews and decides.
What rights do candidates have under GDPR Article 22?
Where automated decision-making is permitted under one of Article 22's exceptions (explicit consent, contractual necessity or authorisation by law), candidates have the right to obtain human intervention, to express their point of view and to contest the decision. Organisations must provide meaningful information about the logic involved in automated decisions and their likely consequences. And where an organisation relies on human involvement to keep its decisions outside Article 22's scope altogether, it must be able to demonstrate that the review is genuine and not merely a formality.
What documentation does GDPR require for AI-assisted hiring decisions?
Under GDPR Articles 13, 14 and 30, organisations using AI in recruitment must document: what AI tools are used and for what purpose, what data is processed by each tool, the basis on which the AI makes assessments, how human review is implemented, how long candidate data is retained, and how candidates can exercise their rights. A Record of Processing Activities (ROPA) entry covering AI-assisted recruitment decisions is required for all organisations subject to GDPR.
Can we use AI to reject candidates automatically without violating GDPR?
Automatic rejection without human review would constitute solely automated decision-making under Article 22, which is restricted unless specific conditions are met — typically explicit candidate consent or contractual necessity, both of which are difficult to satisfy in a recruitment context. Most legal experts and data protection authorities advise against automatic AI-driven rejections in recruitment without genuine human review. The safest approach is to use AI for scoring and ranking while requiring human review before any rejection is communicated to a candidate.