{"id":5864,"date":"2025-07-09T12:08:45","date_gmt":"2025-07-09T12:08:45","guid":{"rendered":"https:\/\/voicit.com\/?p=5864"},"modified":"2026-04-03T17:03:09","modified_gmt":"2026-04-03T17:03:09","slug":"ethics-technology-human-resources","status":"publish","type":"post","link":"https:\/\/voicit.com\/en\/blog\/recursos-humanos\/etica-tecnologia-rrhh\/5864\/","title":{"rendered":"Ethics in the use of technology for HR: privacy, transparency and fairness (2026)"},"content":{"rendered":"<style>\n@import url('https:\/\/fonts.googleapis.com\/css2?family=Manrope:wght@400;500;600;700&display=swap');\n.voicit-blog-content { font-family: 'Manrope', sans-serif; max-width: 780px; margin: 0 auto; color: #333; }\n.voicit-blog-content p { font-size: 18px; line-height: 1.7; margin-bottom: 20px; }\n.voicit-blog-content h2 { font-size: 26px; font-weight: 700; color: #111; border-bottom: 2px solid #f0f0f0; padding-bottom: 12px; margin-top: 48px; margin-bottom: 20px; }\n.voicit-blog-content h3 { font-size: 21px; font-weight: 600; color: #111; margin-top: 32px; margin-bottom: 14px; }\n.voicit-blog-content ul, .voicit-blog-content ol { font-size: 17px; line-height: 1.6; margin-bottom: 20px; padding-left: 24px; }\n.voicit-blog-content li { margin-bottom: 8px; }\n.voicit-byline { display: flex; align-items: center; gap: 12px; margin-bottom: 32px; font-size: 15px; color: #666; }\n.voicit-byline img { width: 36px; height: 36px; border-radius: 50%; }\n.snippet-bait { background: #f5f7fa; border-left: 4px solid #111; border-radius: 8px; padding: 16px 20px; margin: 24px 0; font-size: 17px; line-height: 1.6; }\n.snippet-bait strong { display: block; margin-bottom: 6px; }\n.toc-box { background: #f8f9fa; border: 1px solid #e5e5e5; border-radius: 12px; padding: 20px 28px; margin: 24px 0; }\n.toc-box ol { margin: 0; padding-left: 20px; }\n.toc-box li { margin-bottom: 6px; }\n.toc-box a { color: #333; text-decoration: none; border-bottom: 1px solid #ccc; }\n.stat-box { background: #f0f4ff; 
border-radius: 10px; padding: 20px 24px; margin: 24px 0; }\n.stat-box p { margin: 0; }\n.warning-box { background: #fff8f0; border-left: 4px solid #e65100; border-radius: 8px; padding: 16px 20px; margin: 24px 0; }\n.case-box { background: #f9f9f9; border: 1px solid #e5e5e5; border-radius: 10px; padding: 20px 24px; margin: 24px 0; }\n.checklist-box { background: #f8f9fa; border-radius: 10px; padding: 20px 24px; margin: 24px 0; }\n.comparison-table { width: 100%; border-collapse: collapse; border: 1px solid #ddd; border-radius: 8px; overflow: hidden; margin: 24px 0; font-size: 15px; }\n.comparison-table td { padding: 12px 16px; border: 1px solid #ddd; }\n.comparison-table tr:first-child td { background: #1a1a1a; color: #ffffff; font-weight: 700; }\n.comparison-table tr:nth-child(even) td { background: #fafafa; }\n.disclaimer-box { font-size: 14px; font-style: italic; color: #777; background: #f9f9f9; border-left: 3px solid #ccc; padding: 12px 16px; border-radius: 6px; margin: 24px 0; }\n.author-box { display: flex; gap: 20px; align-items: center; background: #f9f9f9; border-radius: 12px; padding: 24px 28px; margin: 32px 0; }\n.author-box img { width: 80px; height: 80px; border-radius: 50%; }\n.author-box-text { font-size: 15px; line-height: 1.5; }\n.author-box-text strong { font-size: 17px; display: block; margin-bottom: 4px; }\n.voicit-cta { display: block; text-align: center; margin: 32px auto; }\n.voicit-cta a { display: inline-block; background: #000; color: #fff; padding: 14px 28px; font-size: 16px; font-weight: 600; border-radius: 10px; text-decoration: none; }\n<\/style>\n<div class=\"voicit-blog-content\">\n<div class=\"voicit-byline\">\n<img decoding=\"async\" src=\"https:\/\/www.gravatar.com\/avatar\/9606a7cf8a077e463d66ccba5e8cd71f?s=96\" alt=\"\u00c1lvaro Arrescurrenaga\"><br \/>\nUpdated: March 2026 \u00b7 By \u00c1lvaro Arrescurrenaga, CEO of Voicit\n<\/div>\n<p>In 2018, Amazon discovered that its AI-powered recruitment system systematically penalized 
women. In 2020, the UK's A-level grading algorithm denied thousands of students from disadvantaged neighborhoods access to university. And in 2023, New York City became the first jurisdiction in the world to mandate audits of recruitment algorithms. The ethics of using technology in HR is not a theoretical debate\u2014it is an urgent necessity with real-world consequences.<\/p>\n<div class=\"snippet-bait\">\n<strong>What is technology ethics in HR?<\/strong><br \/>\nIt is the set of principles that guide the responsible use of artificial intelligence, automation, and data analytics in human resource management. It encompasses three pillars: privacy of employee and candidate data, transparency in automated decisions, and algorithmic fairness to prevent discrimination. In the EU, the AI Act (2024) classifies AI systems in HR as \"high risk,\" with specific legal obligations.\n<\/div>\n<div class=\"toc-box\">\n<strong>In this article:<\/strong><\/p>\n<ol>\n<li><a href=\"#por-que-importa\">Why technology ethics in HR matters now<\/a><\/li>\n<li><a href=\"#privacidad\">Privacy: what data you can and cannot collect<\/a><\/li>\n<li><a href=\"#transparencia\">Algorithmic transparency: the right to know<\/a><\/li>\n<li><a href=\"#equidad\">Fairness: real biases and how to detect them<\/a><\/li>\n<li><a href=\"#casos\">4 real cases that changed the rules<\/a><\/li>\n<li><a href=\"#marco-legal\">Legal framework: AI Act, GDPR and NYC Local Law 144<\/a><\/li>\n<li><a href=\"#checklist\">Ethical checklist for HR teams<\/a><\/li>\n<li><a href=\"#herramientas\">Tools to audit the ethics of your technology<\/a><\/li>\n<li><a href=\"#conclusion\">Conclusion<\/a><\/li>\n<\/ol>\n<\/div>\n<h2 id=\"por-que-importa\">\ud83d\udd0d Why technology ethics in HR matters now<\/h2>\n<p>The adoption of AI in Human Resources has accelerated dramatically.
According to <a href=\"https:\/\/www.gartner.com\/en\/human-resources\" rel=\"nofollow noopener\" target=\"_blank\">Gartner<\/a>, 76% of HR leaders believe that if they don't adopt AI in the next 12-24 months, they will fall behind. But the speed of adoption has outpaced ethical considerations:<\/p>\n<div class=\"stat-box\">\n<ul>\n<li><strong>83% of companies<\/strong> use some type of AI in their selection processes (source: <a href=\"https:\/\/www.shrm.org\" rel=\"nofollow noopener\" target=\"_blank\">SHRM<\/a>, 2025)<\/li>\n<li><strong>Only 32%<\/strong> have a formal AI ethics policy for HR<\/li>\n<li><strong>1 in 4 candidates<\/strong> report having been evaluated by an AI system without their knowledge<\/li>\n<li><strong>Fines of up to \u20ac35M<\/strong> for non-compliance with the EU AI Act in high-risk systems<\/li>\n<\/ul>\n<\/div>\n<p>The problem isn't the technology itself\u2014it's using it without proper controls. AI in HR touches on decisions that directly affect people's lives: who gets a job, who is promoted, who is fired.
These decisions demand a solid ethical framework.<\/p>\n<h2 id=\"privacidad\">\ud83d\udd12 Privacy: what data you can and cannot collect<\/h2>\n<p>Modern HR systems collect a huge amount of data: from CVs and performance reviews to productivity monitoring, sentiment analysis in internal communications, and biometric access control data.<\/p>\n<h3>Data that requires special care<\/h3>\n<table class=\"comparison-table\">\n<tr>\n<td>Data type<\/td>\n<td>Example<\/td>\n<td>Risk level<\/td>\n<td>Required legal basis (GDPR)<\/td>\n<\/tr>\n<tr>\n<td>Basic data<\/td>\n<td>Name, email, CV<\/td>\n<td>Low<\/td>\n<td>Consent or legitimate interest<\/td>\n<\/tr>\n<tr>\n<td>Performance data<\/td>\n<td>Evaluations, KPIs<\/td>\n<td>Medium<\/td>\n<td>Legitimate interest + information<\/td>\n<\/tr>\n<tr>\n<td>Productivity data<\/td>\n<td>Screen monitoring, keystroke logging<\/td>\n<td>High<\/td>\n<td>Legitimate interest + proportionality<\/td>\n<\/tr>\n<tr>\n<td>Biometric data<\/td>\n<td>Fingerprint, facial recognition<\/td>\n<td>Very high<\/td>\n<td>Explicit consent (Art. 9)<\/td>\n<\/tr>\n<tr>\n<td>Health data<\/td>\n<td>Sick leave, stress analysis<\/td>\n<td>Very high<\/td>\n<td>Explicit consent + necessity<\/td>\n<\/tr>\n<tr>\n<td>Sentiment analysis<\/td>\n<td>AI analyzing tone in emails\/chats<\/td>\n<td>Very high<\/td>\n<td>Prohibited in many contexts (AI Act)<\/td>\n<\/tr>\n<\/table>\n<h3>Principle of data minimization<\/h3>\n<p>The GDPR requires collecting only the data <strong>strictly necessary<\/strong> for the stated purpose.
If your CV screening tool collects the date of birth but doesn't need it to assess skills, you're violating the principle of data minimization\u2014even if the candidate has given consent.<\/p>\n<p>The rule of thumb: <strong>If you can't explain exactly why you need a piece of information, don't collect it.<\/strong><\/p>\n<h2 id=\"transparencia\">\ud83d\udd0e Algorithmic transparency: the right to know<\/h2>\n<p>When an algorithm rejects a candidate or recommends dismissal, can it explain why? Algorithmic transparency is one of the most critical\u2014and most neglected\u2014pillars of technology ethics in HR.<\/p>\n<h3>What the law requires<\/h3>\n<ul>\n<li><strong>GDPR (Art. 22):<\/strong> right not to be subject to automated decisions with significant effects, and right to obtain an explanation of the logic used.<\/li>\n<li><strong>AI Act (Art. 13):<\/strong> High-risk AI systems must be \"transparent enough\" for users to understand and interpret the results.<\/li>\n<li><strong>NYC Local Law 144:<\/strong> Requires publishing a summary of the algorithm's bias audit on the company's website.<\/li>\n<\/ul>\n<h3>What does it mean in practice?<\/h3>\n<ol>\n<li><strong>Inform candidates and employees<\/strong> that AI is used in the process (in the job offer, in the contract, or in internal policy).<\/li>\n<li><strong>Explain the criteria<\/strong> the system uses: \"This candidate scored 85\/100 because they have 5 years of experience in the sector and are proficient in 3 of the 4 required tools.\"<\/li>\n<li><strong>Offer a human alternative:<\/strong> anyone has the right to request that a human review the automated decision.<\/li>\n<li><strong>Document the system:<\/strong> a technical data sheet with training data, performance metrics, and bias audit results.<\/li>\n<\/ol>\n<div class=\"warning-box\">\n<p><strong>Warning sign:<\/strong> If your HR AI provider can't explain how its algorithm works, or refuses to share bias audit results, that's a red flag.
Algorithmic opacity is incompatible with AI Act compliance.<\/p>\n<\/div>\n<h2 id=\"equidad\">\u2696\ufe0f Fairness: real biases and how to detect them<\/h2>\n<p>Algorithms are not neutral. They learn from historical data\u2014and if that data contains biases (and it almost always does), AI reproduces and often amplifies them.<\/p>\n<h3>Types of bias in AI for HR<\/h3>\n<ul>\n<li><strong>Historical data bias:<\/strong> If a company hired mostly men for 10 years, AI learns that \"man\" = \"good candidate.\" This is exactly what happened with Amazon.<\/li>\n<li><strong>Proxy bias:<\/strong> The algorithm does not use \"gender\" as a variable, but it uses \"captain of the rugby team\" (correlated with male gender) as a positive predictor.<\/li>\n<li><strong>Exclusion bias:<\/strong> Candidates with gaps in their CV (maternity, illness, caring for family members) are penalized by algorithms that prioritize linear trajectories.<\/li>\n<li><strong>Socioeconomic bias:<\/strong> Prioritizing prestigious universities, native English proficiency, or international experience excludes equally valid talent from less privileged backgrounds.<\/li>\n<li><strong>Accessibility bias:<\/strong> AI-powered video interviews that evaluate facial expressions discriminate against neurodivergent people or people with disabilities.<\/li>\n<\/ul>\n<h3>How to detect biases in your system<\/h3>\n<p>The only reliable way is with <strong>quantitative audits<\/strong>: compare the algorithm's results by demographic groups (gender, age, ethnicity, disability) and measure:<\/p>\n<ul>\n<li><strong>Adverse impact ratio (4\/5 rule):<\/strong> If the selection rate of a protected group is less than 80% of that of the majority group, there is an adverse impact.<\/li>\n<li><strong>False negative rate:<\/strong> Does the system reject more qualified candidates from one group than from another?<\/li>\n<li><strong>Score distribution:<\/strong> Are the average scores significantly different between
groups?<\/li>\n<\/ul>\n<h2 id=\"casos\">\ud83d\udccb 4 real cases that changed the rules<\/h2>\n<div class=\"case-box\">\n<h3>1. Amazon (2018) \u2014 Gender bias in CV screening<\/h3>\n<p>Amazon developed an AI to filter resumes, trained on 10 years of historical hiring data. Because most of those hired were men, the system learned to penalize resumes containing the word \"women's\" (such as \"women's chess club captain\"). Amazon discarded the system.<\/p>\n<p><strong>Lesson:<\/strong> Historical data is not neutral. Without bias auditing, AI automates past discrimination.<\/p>\n<\/div>\n<div class=\"case-box\">\n<h3>2. HireVue (2019-2021) \u2014 Facial evaluation in interviews<\/h3>\n<p>HireVue used facial expression analysis in video interviews to evaluate candidates. Under pressure from the Electronic Privacy Information Center (EPIC), which filed a complaint with the FTC, HireVue eliminated facial analysis in 2021, admitting that the benefits did not justify the risks of bias.<\/p>\n<p><strong>Lesson:<\/strong> Just because a technology is possible doesn't mean it's ethical. Facial recognition for recruitment is increasingly being questioned both legally and ethically.<\/p>\n<\/div>\n<div class=\"case-box\">\n<h3>3. UK A-levels algorithm (2020) \u2014 Socioeconomic bias<\/h3>\n<p>During the pandemic, the British government used an algorithm to assign grades to students after exams were cancelled. The system systematically penalized students from state schools and disadvantaged neighborhoods. Following massive protests, the government discarded the algorithmic results.<\/p>\n<p><strong>Lesson:<\/strong> Algorithms may appear objective but encode structural inequalities. Human oversight is not optional.<\/p>\n<\/div>\n<div class=\"case-box\">\n<h3>4. NYC Local Law 144 (2023) \u2014 First algorithmic audit law<\/h3>\n<p>New York City passed the first law requiring companies to annually audit their AI recruitment tools, publish the results, and notify candidates.
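The audits these laws mandate start from the four-fifths rule described above. A minimal Python sketch of that check (the group labels and counts below are invented purely for illustration, not real audit data):

```python
def adverse_impact_ratios(selected, applicants):
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes for a single job opening
selected = {"group_a": 48, "group_b": 22}
applicants = {"group_a": 100, "group_b": 80}

ratios = adverse_impact_ratios(selected, applicants)
# group_a rate 0.48; group_b rate 0.275 -> ratio 0.275 / 0.48 ~ 0.57
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths threshold
```

A ratio below 0.8 for any group signals adverse impact and should trigger a closer look at false-negative rates and score distributions for that group.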
Other cities and the EU (with the AI Act) are following suit.<\/p>\n<p><strong>Lesson:<\/strong> Regulation is here. Companies that don't audit their algorithms risk fines and lawsuits.<\/p>\n<\/div>\n<h2 id=\"marco-legal\">\ud83d\udcdc Legal framework: AI Act, GDPR and NYC Local Law 144<\/h2>\n<p>Three key regulatory frameworks that every HR team should know:<\/p>\n<table class=\"comparison-table\">\n<tr>\n<td>Regulation<\/td>\n<td>Scope<\/td>\n<td>Key requirements for HR<\/td>\n<td>Sanctions<\/td>\n<\/tr>\n<tr>\n<td><strong>AI Act (EU)<\/strong><\/td>\n<td>European Union<\/td>\n<td>Conformity assessment, human oversight, transparency, activity logging<\/td>\n<td>Up to \u20ac35M or 7% of global turnover<\/td>\n<\/tr>\n<tr>\n<td><strong>GDPR<\/strong><\/td>\n<td>EU + EEA<\/td>\n<td>Data minimization, consent, right to explanation, right to human intervention (Art. 22)<\/td>\n<td>Up to \u20ac20M or 4% of global turnover<\/td>\n<\/tr>\n<tr>\n<td><strong>NYC Law 144<\/strong><\/td>\n<td>New York<\/td>\n<td>Annual bias audit, publication of results, notification to candidates<\/td>\n<td>$500-$1,500\/day per violation<\/td>\n<\/tr>\n<\/table>\n<h3>What does the AI Act classify as \"high risk\" in HR?<\/h3>\n<ul>\n<li>AI systems for <strong>candidate screening and filtering<\/strong><\/li>\n<li><strong>Automated evaluation<\/strong> tools for interviews<\/li>\n<li><strong>Employee monitoring<\/strong> systems<\/li>\n<li>AI for decisions about <strong>promotion, dismissal, or assignment of tasks<\/strong><\/li>\n<\/ul>\n<p>If you use any of these systems in the EU, you are required to comply with the high-risk requirements of the AI Act, which will be phased in between 2025 and 2027.<\/p>\n<div class=\"voicit-cta\">\n<a href=\"https:\/\/app.voicit.com\/signup\">Try Voicit for free \u2192<\/a>\n<\/div>\n<h2 id=\"checklist\">\u2705 Ethical checklist for HR teams<\/h2>\n<div class=\"checklist-box\">\n<strong>Before implementing an AI tool:<\/strong><\/p>\n<ul>\n<li>Can
the provider explain how the algorithm works?<\/li>\n<li>Does it provide bias audit reports?<\/li>\n<li>Is the training data representative and diverse?<\/li>\n<li>Is there human oversight in critical decisions?<\/li>\n<li>Are candidates\/employees informed about the use of AI?<\/li>\n<li>Is there a process for requesting human review?<\/li>\n<li>Are the requirements of the GDPR (minimization, consent, explanation) met?<\/li>\n<li>Is the system registered as high risk under the AI Act (if applicable)?<\/li>\n<\/ul>\n<p><strong>Periodic audit (minimum quarterly):<\/strong><\/p>\n<ul>\n<li>Does the selection rate by demographic group comply with the 4\/5 rule?<\/li>\n<li>Are qualified candidates being rejected in suspicious patterns?<\/li>\n<li>Are the algorithm's criteria still aligned with the actual job requirements?<\/li>\n<li>Have the identified biases been documented and corrected?<\/li>\n<li>Do HR employees receive up-to-date training in AI ethics?<\/li>\n<\/ul>\n<\/div>\n<h2 id=\"herramientas\">\ud83d\udee0\ufe0f Tools to audit the ethics of your technology<\/h2>\n<ul>\n<li><strong><a href=\"https:\/\/aequitas.dssg.io\/\" rel=\"nofollow noopener\" target=\"_blank\">Aequitas<\/a><\/strong> (free) \u2014 Open-source framework from the University of Chicago for auditing bias in automated decision systems.<\/li>\n<li><strong><a href=\"https:\/\/aif360.mybluemix.net\/\" rel=\"nofollow noopener\" target=\"_blank\">AI Fairness 360 (IBM)<\/a><\/strong> (free) \u2014 Open-source toolkit with fairness metrics and bias mitigation algorithms.<\/li>\n<li><strong><a href=\"https:\/\/pair.withgoogle.com\/what-is-ml-fairness\" rel=\"nofollow noopener\" target=\"_blank\">What-If Tool (Google)<\/a><\/strong> (free) \u2014 A visual tool for exploring the behavior of ML models without writing code.<\/li>\n<li><strong><a href=\"https:\/\/holistic.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Holistic AI<\/a><\/strong> (paid) \u2014 AI audit and governance platform,
used by companies to comply with NYC Law 144 and the AI Act.<\/li>\n<\/ul>\n<h2 id=\"conclusion\">\ud83d\udca1 Conclusion<\/h2>\n<p>Ethics in the use of technology for HR is not a hindrance to innovation\u2014it's what separates responsible from irresponsible innovation. The cases of Amazon, HireVue, and the UK A-levels algorithm demonstrate that AI without ethical controls causes real harm to real people.<\/p>\n<p>The good news: the legal framework already exists (AI Act, GDPR), auditing tools are accessible (many free), and HR teams that lead in technology ethics build more trust with candidates and employees.<\/p>\n<p>The key lies in three principles: <strong>collect only the necessary data<\/strong> (privacy), <strong>explain how the systems work<\/strong> (transparency), and <strong>regularly audit the results<\/strong> (fairness). And always, always: human oversight in decisions that affect people's lives.<\/p>\n<p>If you use AI in your selection process, complement the screening with tools that document the interviews transparently. <a href=\"https:\/\/voicit.com\/en\/\">Voicit<\/a> generates automatic interview reports that serve as an objective record of the evaluation \u2014 an essential complement to the ethical traceability of the process.<\/p>\n<div class=\"disclaimer-box\">\n<strong>Transparency note:<\/strong> Voicit is an AI-powered meeting transcription and reporting tool. It is not a screening or automated candidate assessment system.
We mention it as a complement to the ethical documentation of the selection process.\n<\/div>\n<h2>\ud83d\udcda Related Articles<\/h2>\n<ul>\n<li><a href=\"https:\/\/voicit.com\/en\/blog\/human-resources\/cv-screening-artificial-intelligence\/5770\/\">CV screening with artificial intelligence: a complete guide 2026<\/a><\/li>\n<li><a href=\"https:\/\/voicit.com\/en\/blog\/human-resources\/apps-aia-transcribe\/6931\/\">AI apps for transcribing meetings: the 12 best in 2026<\/a><\/li>\n<li><a href=\"https:\/\/voicit.com\/en\/blog\/human-resources\/reports-interviews-ai\/6998\/\">How to generate interview reports with AI<\/a><\/li>\n<li><a href=\"https:\/\/voicit.com\/en\/blog\/human-resources\/employee-experience-key-hr\/5714\/\">Employee Experience: Keys Beyond Onboarding<\/a><\/li>\n<\/ul>\n<div class=\"author-box\">\n<img decoding=\"async\" src=\"https:\/\/www.gravatar.com\/avatar\/9606a7cf8a077e463d66ccba5e8cd71f?s=160\" alt=\"\u00c1lvaro Arrescurrenaga, CEO of Voicit\"><\/p>\n<div class=\"author-box-text\">\n<strong>\u00c1lvaro Arrescurrenaga<\/strong><br \/>\nCEO and co-founder of Voicit. Entrepreneur specializing in AI applied to meetings and recruitment processes. Over 1,000 companies use the platform to transform meetings and interviews into actionable reports.\n<\/div>\n<\/div>\n<\/div>\n<p><script type=\"application\/ld+json\">{\n    \"@context\": \"https:\\\/\\\/schema.org\",\n    \"@type\": \"Article\",\n    \"headline\": \"\\u00c9tica en el uso de tecnolog\\u00eda para RRHH: privacidad, transparencia y equidad (2026)\",\n    \"description\": \"Gu\\u00eda completa sobre \\u00e9tica en IA para Recursos Humanos: privacidad de datos, sesgos algor\\u00edtmicos, AI Act, RGPD y casos reales. 
Con checklist descargable.\",\n    \"author\": {\n        \"@type\": \"Person\",\n        \"name\": \"\\u00c1lvaro Arrescurrenaga\",\n        \"url\": \"https:\\\/\\\/www.linkedin.com\\\/in\\\/alvaroarres\\\/\",\n        \"jobTitle\": \"CEO\",\n        \"worksFor\": {\n            \"@type\": \"Organization\",\n            \"name\": \"Voicit\"\n        }\n    },\n    \"publisher\": {\n        \"@type\": \"Organization\",\n        \"name\": \"Voicit\",\n        \"url\": \"https:\\\/\\\/voicit.com\",\n        \"logo\": {\n            \"@type\": \"ImageObject\",\n            \"url\": \"https:\\\/\\\/voicit.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Logo-voicit-black.png\"\n        }\n    },\n    \"datePublished\": \"2025-07-09\",\n    \"dateModified\": \"2026-03-09\",\n    \"mainEntityOfPage\": \"https:\\\/\\\/voicit.com\\\/blog\\\/recursos-humanos\\\/etica-tecnologia-rrhh\\\/5864\\\/\",\n    \"wordCount\": 2200,\n    \"inLanguage\": \"es\"\n}<\/script><\/p>","protected":false},"excerpt":{"rendered":"<p>Actualizado: marzo 2026 \u00b7 Por \u00c1lvaro Arrescurrenaga, CEO de Voicit En 2018, Amazon descubri\u00f3 que su sistema de IA para selecci\u00f3n de personal penalizaba sistem\u00e1ticamente a las mujeres. En 2020, el algoritmo de evaluaci\u00f3n de profesores en el Reino Unido dej\u00f3 sin acceso a la universidad a miles de estudiantes de barrios desfavorecidos. 
Y en [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5867,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""}},"footnotes":"","rank_math_title":"","rank_math_description":"Descubre la \u00e9tica en el uso de tecnolog\u00eda para RRHH y c\u00f3mo garantizar privacidad, transparencia y equidad en tu empresa.","rank_math_focus_keyword":"\u00e9tica en el uso de tecnolog\u00eda para 
RRHH"},"categories":[31,20],"tags":[],"class_list":["post-5864","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-consultoria","category-recursos-humanos"],"_links":{"self":[{"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/posts\/5864","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/comments?post=5864"}],"version-history":[{"count":11,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/posts\/5864\/revisions"}],"predecessor-version":[{"id":7972,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/posts\/5864\/revisions\/7972"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/media\/5867"}],"wp:attachment":[{"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/media?parent=5864"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/categories?post=5864"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/voicit.com\/en\/wp-json\/wp\/v2\/tags?post=5864"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}