The problem: When the AI learns to discriminate
In 2018, Reuters revealed that Amazon had developed an AI recruiting tool that systematically discriminated against female candidates. The system had been trained on the CVs of employees from the previous 10 years - predominantly men, given the demographic profile of the tech industry. The AI "learned" that resumes containing words associated with women (graduate of College X for Women, captain of the women's soccer team) were negatively correlated with employment. Amazon abandoned the project, but the lesson remains crucial.
This is not an isolated case. A study published in Harvard Business Review in 2024 showed that 67% of AI recruitment tools tested exhibited at least one form of measurable bias - gender, age or ethnicity. The fundamental problem is not that the AI is ill-intentioned, but that the AI reflects the patterns in the data it was trained on. If a company's hiring history favors a certain demographic profile, the AI will perpetuate and amplify that pattern.
This reality does not mean that AI should be avoided in recruitment. On the contrary - a well-designed AI can be more objective than a human, precisely because it can be audited, tested and corrected. A human recruiter has inconsistent biases that they are not even aware of. An algorithm has biases that you can measure and eliminate. The key is knowing how to identify, prevent and correct algorithmic bias.
Types of AI bias in recruitment
To combat bias, we must first understand it. In the context of AI recruitment, there are three main types of algorithmic bias:
1. Training Data Bias. This is the most common and the most insidious type of bias. If the AI is trained on historical hiring data, it will reflect the biases from those decisions. If in the last 5 years the company has predominantly hired graduates from 3 specific universities, the AI will favor candidates from those universities - not because they are better, but because the historical pattern says so. Similarly, if technical departments were dominated by men, the AI may learn to disfavor female CVs.
2. Proxy Discrimination. Even if an AI does not have direct access to demographic information (gender, age, ethnicity), it can discriminate indirectly through correlated variables. Home address can be a proxy for ethnicity or socio-economic status. Year of graduation is a proxy for age. Name can be a proxy for ethnicity or gender. An AI that "doesn't see" gender but favors candidates who played football over those who did ballet is discriminating by gender through a proxy.
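The proxy effect can be checked empirically: if a seemingly neutral feature splits very unevenly across a protected attribute, a model using that feature can discriminate without ever seeing the attribute itself. A minimal sketch in Python - the data, field names and the `proxy_skew` helper are hypothetical, for illustration only:

```python
from collections import defaultdict

def proxy_skew(records, feature, protected):
    """For each value of `feature`, compute the distribution of the
    protected attribute. A strong skew suggests the feature may act
    as a proxy for the protected attribute."""
    counts = defaultdict(lambda: defaultdict(int))
    for r in records:
        counts[r[feature]][r[protected]] += 1
    skews = {}
    for value, dist in counts.items():
        total = sum(dist.values())
        skews[value] = {group: n / total for group, n in dist.items()}
    return skews

# Hypothetical sample: the sport a candidate played correlates strongly
# with gender, so a model scoring on "sport" discriminates by proxy
# even though it never sees the "gender" field.
candidates = [
    {"sport": "football", "gender": "M"},
    {"sport": "football", "gender": "M"},
    {"sport": "football", "gender": "F"},
    {"sport": "ballet",   "gender": "F"},
    {"sport": "ballet",   "gender": "F"},
]
print(proxy_skew(candidates, "sport", "gender"))
```

In a real audit, the same check would be run over every input feature the model uses, flagging any whose group distribution diverges sharply from the overall candidate pool.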
3. Feedback Loop Bias. This type of bias is self-amplifying over time. If the AI recommends a certain type of candidate, and the HR team confirms those recommendations (because they seem "correct" - they look like existing employees), the AI receives positive feedback and reinforces the pattern. Over time, the diversity of the candidate pool decreases, and the AI becomes more and more convinced that the "ideal" profile is the one it initially defined. It is a vicious circle that, without intervention, worsens with each recruitment cycle.
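The self-reinforcing dynamic can be illustrated with a toy, deliberately simplified model: if the majority profile's share among hires feeds back into the model's preference for that profile, the share grows every cycle until diversity collapses. The `simulate_feedback_loop` function and its parameters are illustrative assumptions, not a model of any real system:

```python
def simulate_feedback_loop(cycles=6, boost=1.15, share=0.6):
    """Toy deterministic model of feedback loop bias.

    `share` is the majority profile's share among current employees;
    `boost` > 1 models positive feedback: each cycle of confirmed hires
    increases the model's preference for that profile, so the share
    grows monotonically until it saturates at 1.0 (zero diversity).
    """
    history = [round(share, 3)]
    for _ in range(cycles):
        share = min(1.0, share * boost)  # reinforcement from confirmed hires
        history.append(round(share, 3))
    return history

print(simulate_feedback_loop())
```

The point of the sketch is the shape of the curve, not the numbers: without an external correction (an audit, a re-weighting, fresh training data), the loop only moves in one direction.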
Relevant statistics about recruitment bias
An NBER study demonstrated that resumes with "white" names receive 50% more callbacks than those with "African-American" names, for identical qualifications. Another study showed that women are 30% less likely to be called to interview for technical roles, even when their CVs are identical to men's except for the name. These human biases can be amplified by AI if they are not actively managed, but they can also be eliminated through the correct design of algorithms.
EU AI Act: Legal requirements for fair AI in recruitment
The European Artificial Intelligence Regulation (EU AI Act), in force since August 2024 with its high-risk obligations phasing in through 2026-2027, classifies the use of AI in recruitment as a high-risk system (Annex III, category 4). This classification imposes specific obligations:
Conformity assessment. Before using an AI tool in recruitment, the company must perform a compliance assessment that includes testing the algorithm for bias on protected criteria: gender, age, ethnicity, disability, religion and sexual orientation.
Regular audits. It is not enough to test once. The EU AI Act requires periodic audits of recruitment algorithms to verify that they have not developed new biases over time (feedback loop bias). Recommended frequency: at least every six months, or after each significant update of the model.
The candidate's right to an explanation. Each candidate has the right to understand how they were evaluated by the AI and which factors contributed to the decision. "Black box" systems that offer only a score without justification are not compliant. This requirement de facto eliminates AI tools that cannot explain their own decisions.
Mandatory human oversight. The AI cannot make hiring decisions autonomously. Every decision influenced by AI must be reviewed and approved by a human. In practice, this means that purely automatic auto-rejection (without human review) is not allowed.
Penalties. Non-compliance with the EU AI Act can attract fines of up to EUR 35 million or 7% of global annual turnover - whichever is greater. For Romanian companies, this makes compliance a critical priority, not an optional one.
How Treegarden tackles AI bias
Treegarden was designed from the start with the principle of "responsible AI" at the heart of the architecture. Here's how each functionality addresses the risk of bias:
AI Bias Detection for job ads
Before the ad is published, AI Bias Detection scans the text and identifies potentially discriminatory language. The system detects three categories: gender bias (male-coded or female-coded words), age bias (direct or indirect references to age) and exclusionary language (unnecessary requirements that reduce the pool of candidates without justification). For each term identified, the AI suggests a neutral alternative, explaining why the original wording may discourage certain categories of candidates.
AI Match Score based on skills, not demographics. Treegarden's matching algorithm evaluates exclusively: technical and non-technical skills, relevant professional experience, level of education and certifications, industry relevance and the semantic context of the CV. The system does not have access to and does not use: gender, age, ethnicity, address, marital status or any other protected criteria. The score is calculated strictly on professional relevance for the role.
AI Interview Frame Generator for structured interviews. Unstructured interviews are fertile ground for bias. When each candidate receives different questions, objective comparison becomes impossible and decisions are driven by personal affinity, not competence. AI Interview Frame Generator creates standardized interview guides: the same assessment dimensions, similar questions (adapted to the profile, but with the same structure) and predefined scoring criteria. Studies show that structured interviews are twice as predictive as unstructured ones and significantly reduce evaluator bias.
EEO reports for monitoring diversity
Treegarden includes Equal Employment Opportunity (EEO) reports that track the demographic distribution of candidates at each stage of the pipeline. These reports allow the identification of "loss points" where diversity decreases: if 40% of applicants are women but only 15% reach the interview, there is a potential bias in the screening stage. EEO data are aggregated and anonymized, used exclusively for monitoring the fairness of the process, not for the individual evaluation of candidates.
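The "loss point" analysis described above amounts to comparing a group's share at each pipeline stage. A minimal sketch - the counts, stage names and `stage_shares` helper are hypothetical, chosen to mirror the 40%-to-15% example in the text:

```python
def stage_shares(pipeline, group):
    """Share of `group` at each pipeline stage; a sharp drop between two
    consecutive stages points at the stage where bias may be entering."""
    return {stage: counts.get(group, 0) / sum(counts.values())
            for stage, counts in pipeline.items()}

# Hypothetical aggregated EEO counts per stage:
pipeline = {
    "applied":   {"women": 400, "men": 600},   # 40% women
    "screened":  {"women": 90,  "men": 310},
    "interview": {"women": 15,  "men": 85},    # 15% women
}
print(stage_shares(pipeline, "women"))
# Women fall from 40% at application to 15% at interview,
# so the screening stage is the one to investigate.
```

Because the check uses only aggregated counts per stage, it matches the article's constraint that EEO data stay anonymized and never feed individual evaluations.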
Full transparency of the AI score. Each AI Match Score in Treegarden comes with a complete breakdown: which components contributed to the score, the weight of each, and which CV elements were identified as relevant. The recruiter can see exactly why a candidate received a certain score and can challenge or adjust the AI's assessment. This transparency fulfills the EU AI Act's "explainability" requirement and enables internal auditing of the algorithm.
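As a rough illustration of what an explainable, skills-only score can look like, here is a sketch of a weighted score returned together with its per-component breakdown. The component names, weights and `match_score` function are hypothetical assumptions for illustration, not Treegarden's actual model:

```python
def match_score(candidate, weights):
    """Weighted sum over skill-based components only, returned together
    with the per-component breakdown so the result is explainable and
    auditable (no demographic field ever enters the calculation)."""
    breakdown = {c: weights[c] * candidate.get(c, 0.0) for c in weights}
    return round(sum(breakdown.values()), 3), breakdown

# Hypothetical weights and per-component relevance values in [0, 1]:
weights   = {"skills": 0.4, "experience": 0.3, "education": 0.2, "industry": 0.1}
candidate = {"skills": 0.9, "experience": 0.7, "education": 0.5, "industry": 0.8}

score, parts = match_score(candidate, weights)
print(score, parts)
```

Returning the breakdown alongside the score is what makes the "why" question answerable: a recruiter (or an auditor) can see which component drove the result and challenge it.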
Best practices for fair AI in recruitment
Beyond the tools you use, there are organizational practices that significantly reduce the risk of AI bias:
1. Regular audits of AI results. At least once per quarter, analyze AI Match Score results by demographic dimension (where data is available). Look for discrepancies: does the average score differ significantly between genders? Do candidates from certain universities systematically receive higher scores? If you find suspicious patterns, investigate the cause and adjust the evaluation criteria.
2. Diverse hiring panels. Hiring decisions must be made by diverse panels - not just one manager. When multiple evaluators with different perspectives assess the same candidate, individual biases balance each other out. Treegarden facilitates this approach through collaborative evaluations on the candidate's profile, where each evaluator leaves independent notes and scores.
3. Blind screening where possible. Eliminate demographic information from the initial screening stage. Some companies go so far as to hide candidates' names during the shortlisting phase. Although Treegarden does not implement blind screening by default (because it requires context specific to each organization), AI Match Score is calculated without using demographic information, offering an intrinsic level of "blindness" to irrelevant criteria.
4. Review and update job descriptions. Use AI Bias Detection not only when first creating an ad, but also for periodic revisions. Language that was acceptable 2 years ago may be problematic today. Run bias detection on all active ads at least quarterly.
5. Active human supervision, not passive. The EU AI Act requires human oversight, but "clicking Approve without reading" is not real oversight. Train the HR team to understand how the AI Match Score works, what the components of the score mean and when to question an AI recommendation. Good AI doesn't eliminate the need for human expertise - it amplifies it.
Checklist: Quarterly AI Bias Audit
Follow these steps quarterly: (1) Analyze the distribution of AI scores by gender, age and other available dimensions. (2) Compare the pipeline advancement rate by demographic category. (3) Run AI Bias Detection on all active job ads. (4) Review rejection email templates for inclusive language. (5) Request feedback from rejected candidates about the perceived fairness of the process. (6) Document the results and corrective actions in the AI audit register, according to the requirements of the EU AI Act.
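One concrete metric commonly used for step (2) of such audits - though not named in the checklist above - is the selection-rate ratio between groups, known from US adverse-impact analysis as the "four-fifths rule" of thumb. A minimal sketch with hypothetical quarterly numbers:

```python
def adverse_impact_ratio(selected, applied):
    """Ratio between the lowest and highest selection rate across groups.
    Under the common four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact that warrants investigation."""
    rates = {g: selected[g] / applied[g] for g in applied}
    return min(rates.values()) / max(rates.values())

# Hypothetical quarter: candidates advanced past AI screening, by group.
applied  = {"women": 200, "men": 300}
selected = {"women": 30,  "men": 75}   # rates: 0.15 vs 0.25

ratio = adverse_impact_ratio(selected, applied)
print(round(ratio, 2))  # 0.15 / 0.25 = 0.6 -> below 0.8, investigate
```

A ratio below the 0.8 threshold does not prove discrimination on its own, but it is exactly the kind of documented finding (and trigger for corrective action) that the audit register should record.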
Treegarden vs. competitors: Who offers fair AI
Most ATSs on the market do not offer dedicated tools for bias detection and prevention. Let's compare:
Workable: It offers AI screening, but no dedicated bias detection tool for ads and no algorithm audit. It does not provide native EEO reports.
Greenhouse: It has diversity reports and a structured interview process, but bias detection in job descriptions is not a native functionality - it requires external integrations (e.g., Textio, paid separately).
SmartRecruiters: It offers SmartAssistant for AI screening, but score transparency is limited. It does not offer native bias detection for ads.
Treegarden: It offers a complete, integrated suite: AI Bias Detection for ads, a transparent and explainable AI Match Score, AI Interview Frame Generator for structured interviews, native EEO reports and a score based exclusively on competencies. All these functionalities are included in the basic platform, not as expensive add-ons.
Key difference: While competitors treat AI equity as an optional feature or add-on, Treegarden treats it as a fundamental architectural principle. You can't "turn off" fairness - it's built into the way every algorithm works.
The future of fair AI in recruitment
AI bias in recruitment is not a problem that is solved once and disappears. It is a continuous process of monitoring, auditing and improvement. As AI models become more sophisticated and regulations more stringent, companies that invest now in fair AI practices will have a significant advantage.
Future directions include: third-party-certified algorithmic audits (similar to financial audits), international AI fairness standards (under development at the ISO level) and full transparency of the decision-making process (candidates will be able to see exactly how their application was evaluated).
The companies that adopt these practices now not only comply with the legislation - they build an employer brand based on equity and transparency. In a competitive labor market, candidates increasingly choose employers who demonstrate a real commitment to diversity and inclusion - not just declaratively, but through concrete tools and processes.
Treegarden provides HR teams with the necessary tools for fair AI recruitment: proactive bias detection, transparent and explainable scores, structured interviews and diversity reports. The result: faster, more objective recruitment and compliance with the EU AI Act - without compromises on fairness.