AI tools risk gender bias in Spanish-language hiring
As more human resources departments in Puerto Rico use artificial intelligence tools, a linguistic challenge is creating unexpected discrimination risks that business leaders may not recognize.
The grammatical structure of Spanish — which assigns gender to most nouns, adjectives, articles and pronouns — can cause AI systems to develop gender bias, potentially exposing companies to legal liability and limiting their access to qualified talent, attorney Sylmarie Arizmendi said during a presentation on the use of AI in human resources at the Society for Human Resource Management’s Southern Labor Symposium, held recently in Ponce.
The gendered structure of Spanish may cause AI tools to favor male candidates or default to masculine terms, unintentionally excluding women. This phenomenon poses particular challenges for local organizations operating in a bilingual environment where HR managers routinely use AI tools to draft job listings, screen resumes and evaluate candidates in both Spanish and English.
“We who speak Spanish and upload a lot of data to these platforms need to be aware that the analyses these systems make can be biased toward men, and that has a significant impact in the workplace,” Arizmendi, an attorney for 32 years, told News is my Business.
“You may use the word ‘ingeniero’ when writing a job listing in Spanish. The AI system then assumes that you’re looking for a male engineer, and you end up receiving resumes from men only,” she explained. “Similarly, ‘chief executive officer’ usually is translated into ‘director ejecutivo,’ not ‘directora ejecutiva,’ and that’s the problem with linguistic bias, that it can favor male candidates for these jobs.”
As a result, AI gender bias perpetuates gender stereotypes in STEM careers (science, technology, engineering and mathematics), reduces the visibility of female candidates and affects diversity in automated recruiting, Arizmendi added.
However, when HR managers look to hire a nurse, for example, AI-based systems tend to favor women for those positions “because that is the prototype used to feed and train the systems, through a majority of resumes showing that most nurses are female,” she said.
Arizmendi cited Amazon as an example. In 2014, the company launched an AI system to streamline its hiring, training it on the resumes of people it had hired over the previous 10 years. From that training data, the model concluded that the ideal technical worker was male, showing a significant bias in favor of men, she said.
Amazon instructed the system to ignore gender, but it learned to infer it from proxies, such as whether a candidate had been president of a women’s chess club or had graduated from a women’s university. As a result, Amazon had to abandon the project.
Arizmendi said businesses can mitigate the risk of AI gender bias by frequently auditing their AI systems to identify and correct gender bias and hidden proxies, vetting vendors and suppliers of AI tools to ensure transparency, diversifying training data, and upskilling and reskilling their employees.
However, the most critical step companies can take to avoid gender bias in AI is to ensure that humans consistently supervise all AI activity, Arizmendi said.
“There are tools you can use to audit your system and measure the bias of the decisions your AI program is making, but there should always be a human in the loop. That human can look into the candidates that the AI system puts forward and ensure that the candidate pooling is balanced,” she said.
“Always, always keep a human in the loop,” she said.
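The kind of audit Arizmendi describes can start with a simple comparison of how often candidates from each group advance past an automated screen. The sketch below is illustrative only — it uses hypothetical candidate data rather than any tool named in her presentation — and applies one common benchmark, the "four-fifths rule" from U.S. employment-selection guidance, which flags any group whose selection rate falls below 80% of the highest group's rate.

```python
# Minimal sketch of a selection-rate audit on AI screening outcomes.
# The candidate records are hypothetical and only for illustration.
from collections import defaultdict

candidates = [
    # (gender, advanced_by_ai_screen)
    ("female", True), ("female", False), ("female", False), ("female", True),
    ("male", True), ("male", True), ("male", True), ("male", False),
]

def selection_rates(records):
    """Return the share of candidates in each group that passed the AI screen."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for gender, passed in records:
        totals[gender] += 1
        if passed:
            advanced[gender] += 1
    return {g: advanced[g] / totals[g] for g in totals}

rates = selection_rates(candidates)
highest = max(rates.values())
for group, rate in rates.items():
    # Four-fifths rule: flag a group whose selection rate is below 80%
    # of the most-selected group's rate for human review.
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

A check like this does not replace the human reviewer Arizmendi calls for; it only surfaces imbalances in the candidate pool that the reviewer should examine.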