Experts warn HR leaders of AI’s hidden legal risks
Artificial intelligence offers many advantages to companies, but its benefits come with serious risks that employers should heed and strive to prevent, legal experts told human resources professionals during a recent HR conference in Ponce, Puerto Rico.
AI can help businesses improve efficiency, increase precision, facilitate decision-making, personalize products and services, and reduce costs, among other benefits.
But it can also open the door to discrimination, privacy issues, copyright violations and excessive employee surveillance, attorneys Sylmarie Arizmendi and Alberto J. Bayouth-Montes said at the 2025 Southern Labor Symposium, held by the Society for Human Resources Management’s (SHRM) Puerto Rico Chapter.
Legal risks
As AI becomes more integrated into the workplace, business leaders should remain alert to its associated risks and proactively tackle them, Arizmendi and Bayouth-Montes said.
One major concern is algorithmic bias, where automated systems may unintentionally produce discriminatory outcomes. This bias can expose companies to reputational damage and legal scrutiny.
AI systems often require collecting substantial amounts of biometric and behavioral data, raising privacy and surveillance concerns that could expose companies to legal liability. Without clear limits and oversight, such practices can erode employee trust and trigger regulatory consequences, the attorneys said.
Over-automation can further heighten legal exposure, particularly in high-stakes areas such as hiring and firing, because it can produce outcomes that do not account for context, nuance and individual circumstances.
Relying on third-party AI providers also introduces risk, especially if contract terms and audits are insufficient to ensure ethical and legal compliance.
Compounding these challenges is the “black box” nature of many AI systems: a lack of transparency in how these systems reach their conclusions can make it difficult for employees and customers to understand the decisions that affect them.
“We’re trusting these systems to make important decisions, and the more important the decision, the greater the risk,” Arizmendi told News is my Business.
“Using AI in the workplace has a lot of benefits, but companies should do their due diligence to choose the program that better fits their company and do pilot projects before implementing it so that the transformation is gradual,” she said. “You need to keep your eyes open. You can’t just do it in blind faith.”
AI-generated bias
AI systems can perpetuate and amplify bias through multiple pathways that organizations must understand and address, according to the attorneys’ presentation.
Historical inequities embedded in data can lead to algorithmic bias in AI systems. When past datasets are biased, the models trained on them can replicate and perpetuate those distortions through their predictions and recommendations. A lack of diversity within the technical teams that design and train these systems can also lead to programming bias.
In addition, AI systems tend to reinforce existing patterns through confirmation bias, creating feedback loops of inequality, while representation bias can disadvantage minority groups that are underrepresented in training data.
Open versus closed AI systems
Information entered into an “open” AI system like ChatGPT may be shared with unintended users and retained by the platform for future model training. Unlike open AI systems, closed AI systems are typically proprietary and can restrict or prohibit sharing user prompts with external parties.
Employers should not share information with open AI systems that they would not otherwise disclose to third parties. Furthermore, sharing employee data with an open AI system could violate state and federal privacy laws, the attorneys stressed.
Developing AI policies in the workplace
To effectively manage AI implementation while mitigating legal and ethical risks, organizations should develop comprehensive policies. The attorneys made these recommendations:
- Keep humans in the loop to monitor AI systems and make final decisions.
- Avoid using AI with biased algorithms by employing diverse teams for AI development and using representative, inclusive and transparent data.
- Protect the privacy of sensitive data through periodic audits, encryption and compliance with applicable privacy laws.
- Vet AI vendors and examine contracts meticulously.
- Establish compliance guidelines and ethical standards, update information systems and train staff.
“Many companies start using AI without an ethical use policy, and that needs to be in place before implementing AI in the workplace,” Arizmendi said.