Updated: March 2026 · By Álvaro Arrescurrenaga, CEO of Voicit
In 2018, Amazon discovered that its AI-powered recruitment system systematically penalized women. In 2020, the UK's A-level grading algorithm cost thousands of students from disadvantaged neighborhoods their university places. And in 2023, New York City became the first jurisdiction in the world to mandate audits of recruitment algorithms. The ethics of using technology in HR is not a theoretical debate—it is an urgent necessity with real-world consequences.
The ethics of technology in HR is the set of principles that guide the responsible use of artificial intelligence, automation, and data analytics in human resource management. It rests on three pillars: privacy of employee and candidate data, transparency in automated decisions, and algorithmic fairness to prevent discrimination. In the EU, the AI Act (2024) classifies AI systems in HR as "high risk," with specific legal obligations.
- Why technology ethics in HR matters now
- Privacy: what data you can and cannot collect
- Algorithmic transparency: the right to know
- Fairness: real biases and how to detect them
- 4 real cases that changed the rules
- Legal framework: AI Act, GDPR and NYC Local Law 144
- Ethical checklist for HR teams
- Tools to audit the ethics of your technology
- Conclusion
🔍 Why technology ethics in HR matters now
The adoption of AI in Human Resources has accelerated dramatically. According to Gartner, 76% of HR leaders believe that if they don't adopt AI in the next 12-24 months, they will fall behind. But the speed of adoption has outpaced ethical considerations:
- 83% of companies use some type of AI in their selection processes (source: SHRM, 2025)
- Only 32% have a formal AI ethics policy for HR
- 1 in 4 candidates claims to have been evaluated by an AI system without their knowledge
- Fines of up to €35M for non-compliance with the EU AI Act in high-risk systems
The problem isn't the technology itself—it's using it without proper controls. AI in HR touches on decisions that directly affect people's lives: who gets a job, who is promoted, who is fired. These decisions demand a solid ethical framework.
🔒 Privacy: what data you can and cannot collect
Modern HR systems collect a huge amount of data: from CVs and performance reviews to productivity monitoring, sentiment analysis in internal communications, and biometric access control data.
Data that requires special care
| Data type | Example | Risk level | Required legal basis (GDPR) |
| --- | --- | --- | --- |
| Basic data | Name, email, CV | Low | Consent or legitimate interest |
| Performance data | Evaluations, KPIs | Medium | Legitimate interest + duty to inform |
| Productivity data | Screen monitoring, keystroke logging | High | Legitimate interest + proportionality |
| Biometric data | Fingerprint, facial recognition | Very high | Explicit consent (Art. 9) |
| Health data | Sick leave, stress analysis | Very high | Explicit consent + necessity |
| Sentiment analysis | AI analyzing tone in emails/chats | Very high | Prohibited in many contexts (AI Act) |
Principle of data minimization
The GDPR requires collecting only the data strictly necessary for the stated purpose. If your CV screening tool collects the date of birth but doesn't need it to assess skills, you're violating the principle of data minimization—even if the candidate has given consent.
The rule of thumb: if you can't explain exactly why you need a piece of information, don't collect it.
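The minimization rule above can be sketched as an explicit allow-list: every stored field must map to a documented purpose, and anything without one is dropped before storage. The field names and purposes below are illustrative assumptions, not a real schema.

```python
# Sketch of GDPR data minimization via an allow-list of fields.
# Field names and stated purposes are illustrative assumptions.
ALLOWED_FIELDS = {
    "name": "identify the candidate",
    "email": "contact the candidate",
    "skills": "assess fit against job requirements",
    "experience_years": "assess the seniority requirement",
}

def minimize(raw_record: dict) -> dict:
    """Keep only fields with a documented purpose; drop everything else."""
    return {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}

record = {
    "name": "Ana",
    "email": "ana@example.com",
    "skills": ["python"],
    "date_of_birth": "1990-01-01",  # no documented need -> must be dropped
}
clean = minimize(record)
print(clean)  # date_of_birth is gone
```

The allow-list doubles as documentation: if a field has no purpose string, it has no place in the system.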
🔎 Algorithmic transparency: the right to know
When an algorithm rejects a candidate or recommends dismissal, can it explain why? Algorithmic transparency is one of the most critical—and most neglected—pillars of technology ethics in HR.
What the law requires
- GDPR (Art. 22): right not to be subject to automated decisions with significant effects, and right to obtain an explanation of the logic used.
- AI Act (Art. 13): High-risk AI systems must be "transparent enough" for users to understand and interpret the results.
- NYC Local Law 144: requires publishing a summary of the algorithm's bias audit on the company's website.
What does it mean in practice?
- Inform candidates and employees that AI is used in the process (in the job offer, in the contract, or in internal policy).
- Explain the criteria the system uses: "This candidate scored 85/100 because they have 5 years of experience in the sector and are proficient in 3 of the 4 required tools."
- Offer a human alternative: anyone has the right to request that a human review the automated decision.
- Document the system: technical data sheet with training data, performance metrics, bias audit results.
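The criteria-level explanation described above (like the 85/100 example) is easiest to produce when the scoring rules are simple and the system returns the reasons alongside the score. The weights and criteria below are illustrative assumptions, not any vendor's real model.

```python
# Sketch of an explainable scoring step: every point awarded is paired
# with a human-readable reason. Weights are illustrative assumptions.
def score_candidate(years_experience: int, tools_known: int, tools_required: int):
    """Return a 0-100 score plus the reason behind each criterion."""
    reasons = []
    exp_points = min(years_experience, 5) * 10              # experience: max 50 pts
    reasons.append(f"{years_experience} years of experience -> {exp_points}/50 pts")
    tool_points = round(50 * tools_known / tools_required)  # tools: max 50 pts
    reasons.append(f"knows {tools_known} of {tools_required} required tools -> {tool_points}/50 pts")
    return exp_points + tool_points, reasons

score, reasons = score_candidate(5, 3, 4)
for line in reasons:
    print(line)
```

A candidate (or an auditor) can then be shown exactly which criteria drove the result, which is what GDPR Art. 22 and AI Act Art. 13 demand in substance.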
Warning sign: If your HR AI provider can't explain how its algorithm works, or refuses to share bias audit results, that's a red flag. Algorithmic opacity is incompatible with AI Act compliance.
⚖️ Fairness: real biases and how to detect them
Algorithms are not neutral. They learn from historical data—and if that data contains biases (and it almost always does), AI reproduces and often amplifies them.
Types of bias in AI for HR
- Historical data bias: If a company hired mostly men for 10 years, AI learns that "man" = "good candidate." This is exactly what happened with Amazon.
- Proxy bias: The algorithm does not use "gender" as a variable, but it uses "captain of the rugby team" (correlation with male gender) as a positive predictor.
- Exclusion bias: Candidates with gaps in their CV (maternity, illness, care of family members) are penalized by algorithms that prioritize linear trajectories.
- Socioeconomic bias: Prioritizing prestigious universities, native English proficiency, or international experience excludes equally valid talent from less privileged backgrounds.
- Accessibility bias: AI-powered video interviews that evaluate facial expressions discriminate against neurodivergent people or people with disabilities.
How to detect biases in your system
The only reliable way is with quantitative audits: compare the algorithm's results across demographic groups (gender, age, ethnicity, disability) and measure:
- Selection rate (4/5 rule): if the selection rate of a protected group is less than 80% of the majority group's, there is adverse impact.
- False negative rate: Does the system reject more qualified candidates from one group than from another?
- Score distribution: Are the average scores significantly different between groups?
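The first two metrics above come down to a few lines of arithmetic. All the counts below are illustrative assumptions, not real data.

```python
# Sketch of the 4/5 rule and false-negative-rate comparison.
# All counts are illustrative assumptions, not real audit data.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# 4/5 rule: protected group's selection rate vs. the majority group's
majority_rate = selection_rate(60, 200)    # 0.30
protected_rate = selection_rate(30, 150)   # 0.20
impact_ratio = protected_rate / majority_rate  # below 0.8 -> adverse impact

# False negative rate: qualified candidates the system nevertheless rejected
def false_negative_rate(qualified_rejected: int, qualified_total: int) -> float:
    return qualified_rejected / qualified_total

fnr_majority = false_negative_rate(5, 50)    # 0.10
fnr_protected = false_negative_rate(12, 40)  # 0.30 -> 3x higher, investigate

print(f"impact ratio: {impact_ratio:.2f}")
print(f"FNR majority: {fnr_majority:.2f}, FNR protected: {fnr_protected:.2f}")
```

With these numbers the impact ratio is about 0.67, below the 0.8 threshold, so the system would be flagged for review.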
📋 4 real cases that changed the rules
1. Amazon (2018) — Gender bias in CV screening
Amazon developed an AI to filter resumes, trained on 10 years of historical hiring data. Because most of those hired were men, the system learned to penalize resumes containing the word "women's" (as in "women's chess club captain"). Amazon discarded the system.
Lesson: Historical data is not neutral. Without bias auditing, AI automates past discrimination.
2. HireVue (2019-2021) — Facial evaluation in interviews
HireVue used facial expression analysis in video interviews to evaluate candidates. Following pressure from the Electronic Privacy Information Center (EPIC), the FTC investigated the case. HireVue eliminated facial analysis in 2021, admitting that the benefits did not justify the risks of bias.
Lesson: Just because a technology is possible doesn't mean it's ethical. Facial recognition for recruitment is increasingly being questioned both legally and ethically.
3. A-levels UK Algorithm (2020) — Socioeconomic Bias
During the pandemic, the British government used an algorithm to assign grades to students. The system systematically penalized students from state schools and disadvantaged neighborhoods. Following massive protests, the government discarded the algorithmic results.
Lesson: Algorithms may appear objective but encode structural inequalities. Human oversight is not optional.
4. NYC Local Law 144 (2023) — First algorithmic audit law
New York City passed the first law requiring companies to annually audit their AI recruitment tools, publish the results, and notify candidates. Other cities and the EU (with the AI Act) are following suit.
Lesson: Regulation is here. Companies that don't audit their algorithms risk fines and lawsuits.
📜 Legal framework: AI Act, GDPR and NYC Local Law 144
Three key regulatory frameworks that every HR team should know:
| Regulation | Scope | Key requirements for HR | Sanctions |
| --- | --- | --- | --- |
| AI Act (EU) | European Union | Conformity assessment, human oversight, transparency, activity logging | Up to €35M or 7% of global turnover |
| GDPR | EU + EEA | Data minimization, consent, right to explanation, right to human intervention (Art. 22) | Up to €20M or 4% of global turnover |
| NYC Local Law 144 | New York City | Annual bias audit, publication of results, notification to candidates | $500-$1,500/day per violation |
What does the AI Act classify as "high risk" in HR?
- AI systems for candidate screening and filtering
- Automated interview evaluation tools
- Employee monitoring systems
- AI for decisions on promotion, dismissal, or task assignment
If you use any of these systems in the EU, you are required to comply with the high-risk requirements of the AI Act, which will be phased in between 2025 and 2027.
✅ Ethical checklist for HR teams
- Can the provider explain how the algorithm works?
- Does it provide bias audit reports?
- Is the training data representative and diverse?
- Is there human oversight in critical decisions?
- Are candidates/employees informed about the use of AI?
- Is there a process for requesting human review?
- Are the requirements of the GDPR (minimization, consent, explanation) met?
- Is the system registered as high risk under the AI Act (if applicable)?
Periodic audit (minimum quarterly):
- Does the selection rate by demographic group comply with the 4/5 rule?
- Are there qualified candidates being rejected due to suspicious patterns?
- Are the algorithm's criteria still aligned with the actual job requirements?
- Have the identified biases been documented and corrected?
- Do HR employees receive up-to-date training in AI ethics?
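The quarterly 4/5-rule check from the list above can be automated as a short script that flags any group whose selection rate falls below 80% of the best-performing group's. Group names and counts here are illustrative assumptions.

```python
# Quarterly audit sketch: flag demographic groups failing the 4/5 rule
# against the best-performing group. Counts are illustrative assumptions.
outcomes = {
    "group_a": {"selected": 60, "applicants": 200},
    "group_b": {"selected": 30, "applicants": 150},
    "group_c": {"selected": 45, "applicants": 160},
}

rates = {g: d["selected"] / d["applicants"] for g, d in outcomes.items()}
best_rate = max(rates.values())
flagged = [g for g, r in rates.items() if r / best_rate < 0.8]

print("selection rates:", {g: round(r, 3) for g, r in rates.items()})
print("flagged for review:", flagged)
```

Running this every quarter, with results documented, covers the first and last items of the audit checklist in one step.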
🛠️ Tools to audit the ethics of your technology
- Aequitas (free) — Open-source framework from the University of Chicago for auditing bias in automated decision systems.
- AI Fairness 360 (IBM) (free) — Open source toolkit with fairness metrics and bias mitigation algorithms.
- What-If Tool (Google) (free) — A visual tool for exploring the behavior of ML models without writing code.
- Holistic AI (paid) — AI audit and governance platform, used by companies to comply with NYC Law 144 and the AI Act.
💡 Conclusion
Ethics in the use of technology for HR is not a hindrance to innovation—it's what separates responsible innovation from irresponsible innovation. The cases of Amazon, HireVue, and the British algorithm demonstrate that AI without ethical controls causes real harm to real people.
The good news: the legal framework already exists (AI Act, GDPR), auditing tools are accessible (many free), and HR teams that lead in technology ethics build more trust with candidates and employees.
The key lies in three principles: collect only the necessary data (privacy), explain how the systems work (transparency), and regularly audit the results (fairness). And always, always: human oversight in decisions that affect people's lives.
If you use AI in your selection process, complement the screening with tools that document interviews transparently. Voicit generates automatic interview reports that serve as an objective record of the evaluation — an essential complement to the ethical traceability of the process.
📚 Related Articles
- CV screening with artificial intelligence: a complete guide 2026
- AI apps for transcribing meetings: the 12 best in 2026
- How to generate interview reports with AI
- Employee Experience: Keys Beyond Onboarding
CEO and co-founder of Voicit. Entrepreneur specializing in AI applied to meetings and recruitment processes. Over 1,000 companies use the platform to transform meetings and interviews into actionable reports.
