A candidate applies for a software engineering role. Your ATS assigns an AI match score of 72 out of 100. The recruiter glances at the number and moves on to the next candidate. But what does 72 actually mean? Which skills matched? Which didn't? Was the candidate penalised for career gaps, for attending a less prestigious university, or for writing their CV in a non-standard format? Without an explanation, the score is a black box — and a black box that influences hiring decisions is both a legal liability and a missed opportunity for better recruitment.
Explainable AI (XAI) is the practice of designing AI systems that produce outputs humans can understand, interpret, and verify. In recruitment, this means AI candidate scores that come with clear, readable explanations of why a particular score was assigned — what matched, what didn't, and what the AI considered most important.
AI disclosure
This article was written by the Treegarden editorial team with AI assistance for research and drafting. Legal references have been verified against official regulation texts. This is not legal advice — consult qualified legal counsel for your specific situation.
What Is Explainable AI and Why Does Recruitment Need It?
Explainable AI refers to AI systems whose internal decision-making process can be understood by humans. This stands in contrast to “opaque” or “black box” AI, where the model produces an output but the reasoning behind that output cannot be meaningfully explained — not even by the engineers who built it.
Recruitment needs explainable AI for three interconnected reasons:
- Legal compliance. GDPR Article 22 and the EU AI Act both create rights and obligations around explanation of AI-assisted decisions. Unexplainable AI scoring in recruitment creates direct legal exposure.
- Recruiter trust. Recruiters who do not understand AI scores cannot use them effectively. Research shows that recruiters either over-rely on unexplained scores (automation bias) or ignore them entirely. Neither outcome improves hiring quality.
- Candidate fairness. Candidates affected by AI scoring decisions have a legitimate interest in understanding how they were evaluated. Unexplainable AI makes it impossible to identify and challenge discriminatory scoring patterns.
Black Box vs Glass Box: Two Approaches to AI Scoring
AI scoring systems in recruitment generally fall into two architectural categories:
| Characteristic | Black Box AI | Glass Box (Explainable) AI |
|---|---|---|
| How it works | Complex neural networks with millions of parameters; decision logic is opaque | Structured models with interpretable decision paths; logic can be traced |
| Score explanation | Produces a number with no breakdown | Produces a score with factor-by-factor explanation |
| Bias detection | Extremely difficult; requires external probing | Built-in; biased factors can be identified and adjusted |
| Recruiter utility | Low; recruiters cannot validate or contextualise the score | High; recruiters can assess whether the score aligns with their judgment |
| Regulatory compliance | Fails GDPR Art. 22 and EU AI Act transparency requirements | Meets transparency and explanation obligations |
| Auditability | Requires specialised technical expertise to audit | Can be audited by HR teams and compliance officers directly |
The practical implication for HR teams: if your ATS vendor cannot show you exactly why a candidate received a particular score, you are using black box AI. After August 2026, this creates significant compliance risk for any organisation operating in or hiring from the EU.
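To make the glass-box column concrete, here is a minimal sketch of an interpretable weighted-factor scorer in Python. The factor names, weights, and match values are illustrative assumptions, not Treegarden's actual model; the point is that a transparent weighted sum lets every point in the final number be traced back to a named factor.

```python
from dataclasses import dataclass

@dataclass
class FactorResult:
    name: str       # e.g. "skills", "experience" (illustrative names)
    weight: float   # share of the overall score; weights sum to 1.0
    match: float    # 0.0-1.0 match against the job requirements
    detail: str     # human-readable justification for this factor

def score_candidate(factors: list[FactorResult]) -> tuple[int, list[FactorResult]]:
    """Return an overall 0-100 score plus the per-factor breakdown.

    Because the score is a transparent weighted sum, each contribution
    to the final number is inspectable -- the defining property of a
    glass-box model.
    """
    overall = round(sum(f.weight * f.match for f in factors) * 100)
    return overall, factors

# Illustrative run; all values below are assumptions for the sketch.
factors = [
    FactorResult("skills", 0.40, 0.75, "6 of 8 required skills present"),
    FactorResult("experience", 0.30, 0.80, "4 years in a comparable role"),
    FactorResult("education", 0.20, 0.60, "related but not exact degree"),
    FactorResult("certifications", 0.10, 0.50, "1 of 2 listed certifications"),
]
overall, breakdown = score_candidate(factors)
print(overall)  # 71 -- and every point is traceable
for f in breakdown:
    print(f"{f.name}: {f.weight * f.match * 100:.0f} points ({f.detail})")
```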
The Legal Case for Explainability: GDPR Article 22
GDPR Article 22 provides individuals with the right not to be subject to “a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Articles 13(2)(f), 14(2)(g), and 15(1)(h) further require data controllers to provide “meaningful information about the logic involved” in automated decision-making, and Recital 71 adds the right to obtain an explanation of the decision reached.
For AI candidate scoring, this creates a clear obligation: if a candidate's application is significantly affected by AI scoring (which includes being ranked low and therefore not reviewed by a human), the candidate has the right to understand how the scoring logic works and to request human intervention.
A black box AI system that cannot explain its scoring logic fails the “meaningful information about the logic involved” test. This is not a theoretical risk. Data protection authorities across Europe have increasingly focused on automated decision-making in employment contexts, and the combination of GDPR Article 22 with the EU AI Act's transparency requirements creates dual enforcement pressure.
The “meaningful information” standard
Courts and data protection authorities have interpreted “meaningful information about the logic involved” to require more than generic descriptions of how AI works. The explanation must be specific enough for the individual to understand how the decision was reached in their particular case. Telling a candidate “our AI evaluates your skills against the job requirements” is not sufficient. Telling them “your score was primarily influenced by a skills match of 75% in required programming languages, with deductions for missing experience in cloud infrastructure” meets the standard.
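As a hedged sketch of how this standard might be met programmatically, the function below renders structured factor data into a candidate-specific sentence of the kind quoted above. The parameter names and template are illustrative assumptions, not a prescribed format.

```python
def render_explanation(top_factor: str, top_match_pct: int, gaps: list[str]) -> str:
    """Turn structured scoring data into the kind of specific,
    per-candidate explanation the 'meaningful information' standard
    requires, rather than a generic description of how the AI works."""
    text = (f"Your score was primarily influenced by a {top_match_pct}% "
            f"match in {top_factor}")
    if gaps:
        text += ", with deductions for missing " + ", ".join(gaps)
    return text + "."

# Reproduces the example from the paragraph above.
print(render_explanation("required programming languages", 75,
                         ["experience in cloud infrastructure"]))
# Your score was primarily influenced by a 75% match in required
# programming languages, with deductions for missing experience in
# cloud infrastructure.
```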
EU AI Act Article 86: The Right to Explanation
The EU AI Act reinforces and extends GDPR's explanation requirements. Article 86 establishes that “any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system” has the right to “clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.”
For recruitment specifically, this means:
- Candidates must be able to request an explanation of how AI scoring influenced the recruitment decision
- The explanation must cover the “main elements” — not a generic description but the specific factors that determined the outcome in their case
- The explanation must be “clear and meaningful” — understandable to a non-technical person
- The deployer (employer) is responsible for providing this explanation, even if the AI system was built by a vendor
This obligation is distinct from the transparency requirements under Article 13 (which apply to how the system is designed and documented) and creates a direct, individual right for each candidate affected by AI-assisted decisions.
What a Good AI Score Explanation Looks Like
An effective AI score explanation in recruitment should include the following elements (a sketch of one possible data structure follows the list):
- Overall score with context. The numerical score plus a plain-language interpretation (e.g., “Strong match — this candidate meets most key requirements for this role”).
- Key strengths. The specific skills, qualifications, or experience factors that contributed positively to the score, with relative weighting.
- Key gaps. The specific requirements the candidate does not meet, and how these gaps affected the score.
- Factor-level breakdown. A visual or tabular breakdown showing how each evaluated dimension (skills, experience, qualifications, certifications) contributed to the overall score.
- Confidence indicator. An indication of how confident the AI is in the score — particularly relevant when CV parsing quality is low or when the job description is vague.
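One way to model these five elements is as a single structured payload, sketched below in Python. This is a minimal illustration under assumed type and field names, not any vendor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class FactorContribution:
    dimension: str   # "skills", "experience", "qualifications", "certifications"
    weight: float    # relative importance of this dimension
    match: float     # 0.0-1.0 match against the job requirements
    notes: str       # plain-language justification

@dataclass
class ScoreExplanation:
    score: int              # overall score, 0-100
    interpretation: str     # e.g. "Strong match -- meets most key requirements"
    strengths: list[str]    # factors that contributed positively
    gaps: list[str]         # requirements the candidate does not meet
    breakdown: list[FactorContribution] = field(default_factory=list)
    confidence: float = 1.0  # lower when CV parsing quality is poor
```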
Treegarden's AI score explanation in practice
When Treegarden scores a candidate, the recruiter sees more than a number. Clicking on any AI match score opens a detailed explanation modal showing: an overall match percentage with a plain-language interpretation, a strengths section listing the specific skills and experience that matched the job requirements, a weaknesses section identifying gaps, and a factor-by-factor breakdown of how skills, experience, education, and certifications each contributed to the overall score. This explanation is designed to be understood by HR professionals without technical AI knowledge — meeting the “clear and meaningful” standard required by the EU AI Act.
Why Recruiters Need Explanations Too
Explainable AI is not just about candidate rights. Recruiters who receive AI scores without explanations face a fundamental trust problem: should they follow the AI's recommendation or their own judgment?
Research on human-AI collaboration in high-stakes decisions reveals two dysfunctional patterns:
- Automation bias. Without understanding why the AI scored a candidate high or low, recruiters default to trusting the AI over their own assessment. This is especially dangerous when the AI has a systematic bias — the recruiter amplifies rather than catches the error.
- Automation distrust. Recruiters who distrust unexplained AI scores ignore them entirely, eliminating any efficiency benefit. The organisation is paying for AI features that recruiters do not use.
Explainable AI resolves both patterns. When recruiters can see that an AI score is driven by specific, relevant factors, they can calibrate their trust appropriately — following the AI when its reasoning aligns with the role requirements and overriding it when they identify contextual factors the AI missed.
The Business Case for Transparent AI Scoring
Beyond compliance, transparent AI scoring delivers measurable business benefits:
- Higher recruiter adoption. Studies show that AI tools with explanations achieve 60 to 80% higher adoption rates among recruiters compared to black box alternatives.
- Better hiring decisions. When recruiters understand AI scores, they combine AI analysis with human judgment more effectively. A 2024 study by the Kellogg School of Management found that teams using explained AI scores made 23% more accurate hiring predictions than teams using unexplained scores.
- Reduced legal exposure. Explainable AI provides a documented, defensible basis for hiring decisions. When challenged, you can demonstrate exactly how the AI contributed to the decision and how human judgment was applied.
- Improved candidate experience. Candidates who receive meaningful feedback about their application assessment — even when rejected — rate the process more favourably and are more likely to reapply or recommend the employer to others.
- Easier bias detection. When you can see what factors drive scores, you can identify problematic patterns without needing specialised data science expertise.
How Treegarden Delivers Explainable AI Scoring
Treegarden's AI scoring system is designed with explainability as a core architectural principle, not an afterthought:
- Score breakdown on click. Every AI match score in the candidate list is clickable. One click opens a detailed modal showing the complete scoring explanation with strengths, weaknesses, and factor-level contributions.
- Plain-language explanations. Score explanations are written in natural language that HR professionals can understand without technical training. No jargon, no model weights, no feature vectors — just clear statements about what matched and what didn't.
- Audit-ready logging. Every AI score, explanation, and subsequent human decision is logged with timestamps and user identification. These logs provide the documentation basis required by GDPR Article 22 and the EU AI Act for demonstrating compliant AI deployment.
- Per-job AI configuration. Recruiters can adjust AI scoring criteria per job, ensuring that the AI evaluates candidates against role-specific requirements rather than generic criteria. This transparency extends to the configuration level — you know exactly what the AI is looking for.
- Human override with reason logging. When a recruiter disagrees with an AI score and overrides the recommendation, the system captures the override reason. This creates a feedback loop that improves AI accuracy over time and demonstrates human oversight for compliance purposes. A hypothetical sketch of such a log record follows this list.
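As an illustration of what audit-ready logging with override capture might look like, here is a hypothetical log record serialised as JSON. The field names and values are assumptions for the sketch and do not reflect Treegarden's actual schema.

```python
import json
from datetime import datetime, timezone

def log_score_event(candidate_id: str, ai_score: int, explanation: str,
                    recruiter_id: str, override_score: int | None = None,
                    override_reason: str | None = None) -> str:
    """Serialise one scoring event as an append-only audit record.

    Capturing the AI score, its explanation, and any human override
    (with reason) in one timestamped record is the kind of
    documentation GDPR Art. 22 and the EU AI Act expect deployers to
    be able to produce.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "ai_score": ai_score,
        "ai_explanation": explanation,
        "recruiter_id": recruiter_id,
        "human_override": override_score is not None,
        "override_score": override_score,
        "override_reason": override_reason,
    }
    return json.dumps(record)

# A recruiter overrides a 72 down to 60 and records why.
print(log_score_event("cand-0042", 72, "75% skills match; cloud gap",
                      "recruiter-7", override_score=60,
                      override_reason="Claimed skills not evidenced in work history"))
```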
Designed with EU AI Act requirements in mind
Treegarden's explainable AI scoring is not a retrofit — it is how the system was built from the ground up. Every scoring decision produces a human-readable explanation, every explanation is logged for audit, and every recruiter interaction with AI scores is tracked. This architecture is aligned with the EU AI Act's transparency requirements and GDPR Article 22 obligations. Request a demo to see the explainable scoring in action.
FAQ
Can explainable AI be as accurate as black box AI?
For recruitment scoring, yes. The specific task of matching candidate profiles to job requirements is well-suited to interpretable models. Unlike complex pattern recognition tasks (image recognition, natural language understanding) where deep neural networks significantly outperform simpler models, candidate-job matching relies on structured comparisons that interpretable models handle effectively. In practice, the accuracy difference between explained and unexplained recruitment AI is negligible, while the trust, compliance, and usability benefits of explainability are substantial.
Do candidates have the right to see their AI score explanation?
Under GDPR Article 22 and EU AI Act Article 86, candidates have the right to request an explanation of how AI-assisted decisions affected them. This does not necessarily mean showing them the raw score, but it does mean providing meaningful information about the factors that influenced the decision. Best practice is to prepare a standardised explanation format that you can provide to candidates upon request, covering the key factors the AI evaluated and the areas where the candidate's profile did or did not match.
How do we verify that AI score explanations are accurate?
The best way to verify AI score explanations is through spot-checking: regularly select a sample of scored candidates, review the explanation against the candidate's actual CV and the job requirements, and confirm that the explanation accurately reflects both the score and the underlying data. If the explanation says the candidate has 5 years of Python experience but the CV shows 2, the explanation system has a data extraction issue. If the explanation matches the data but the score seems inappropriate, the scoring model itself may need adjustment.
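A minimal sketch of that spot-check, assuming the explanation's claims and the parsed CV are both available as structured attribute-value pairs; all names here are illustrative assumptions.

```python
def spot_check(claimed: dict[str, float], parsed_cv: dict[str, float],
               tolerance: float = 0.5) -> list[str]:
    """Compare facts asserted in an AI explanation against the parsed CV.

    `claimed` and `parsed_cv` map attribute names (e.g. "python_years")
    to values; any claim absent from the CV or diverging by more than
    `tolerance` is flagged as a potential extraction error.
    """
    issues = []
    for attribute, claimed_value in claimed.items():
        actual = parsed_cv.get(attribute)
        if actual is None:
            issues.append(f"{attribute}: claimed {claimed_value}, absent from CV")
        elif abs(actual - claimed_value) > tolerance:
            issues.append(f"{attribute}: claimed {claimed_value}, CV shows {actual}")
    return issues

# The example from the paragraph above: explanation claims 5 years of
# Python experience, the CV shows 2.
print(spot_check({"python_years": 5.0}, {"python_years": 2.0}))
# ['python_years: claimed 5.0, CV shows 2.0']
```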
Explainable AI in candidate scoring is not a luxury or a future aspiration — it is a current legal requirement under GDPR and an imminent one under the EU AI Act. Organisations using black box AI scoring in recruitment need to transition to explainable alternatives before August 2026. Treegarden provides explainable AI scoring as a core feature, with score breakdowns, audit logging, and human oversight designed with EU AI Act requirements in mind. Request a free demo to experience transparent AI scoring firsthand.