
AI in the Workplace: Labor Department Issues Best Practices for Employers

Christopher Wood, CPP  

· 5 minute read


The U.S. Department of Labor (DOL) released a comprehensive set of Principles and Best Practices for employers implementing artificial intelligence (AI) systems in the workplace. The guidance, issued in response to President Biden’s Executive Order on AI, aims to protect workers’ rights and well-being while harnessing AI’s potential benefits.

The DOL’s guidance emphasizes the importance of worker empowerment, ethical AI development, transparency, and responsible data use. “These Best Practices provide a roadmap for responsible AI in the workplace, helping businesses harness these technologies while proactively supporting and valuing their workers,” said Acting Secretary of Labor Julie Su.


The DOL’s eight principles

The eight principles provide a framework for employers to implement AI systems that complement rather than replace workers, protect labor rights, and improve job quality. The Best Practices address a range of implementation topics, from governance and oversight to supporting workers affected by AI-driven transitions.

1. Centering Worker Empowerment

Employers should involve workers and their representatives in the design, development, and deployment of AI systems. When workers are unionized, employers should bargain in good faith regarding AI and electronic monitoring use.

2. Ethical Development

AI developers should establish ethical standards and review processes to ensure AI systems protect workers’ rights, mitigate risks, and meet performance requirements. They should conduct impact assessments, design systems for human oversight, ensure good quality jobs for data reviewers, and create AI that produces understandable outcomes for non-technical users.

3. Establishing AI Governance and Oversight

Organizations should create governance structures accountable to leadership for AI system implementation. Employers should provide appropriate AI training to a broad range of employees. Human oversight is crucial for significant employment decisions influenced by AI.

4. Ensuring Transparency

Employers should provide advance notice and disclosure about worker-impacting AI systems. Workers should be informed about data collection and its purpose in AI systems. Procedures should be in place for workers to request, view, and correct their data used in employment decisions.

5. Protecting Labor and Employment Rights

AI systems should not undermine workers’ right to organize or violate health, safety, wage, and anti-discrimination protections. Employers must comply with existing labor laws and regulations when using AI.

6. Using AI to Enable Workers

AI should be implemented to assist and complement workers, improving job quality. Employers should consider piloting AI systems before broad deployment and minimize invasive electronic monitoring.

7. Supporting Workers Impacted by AI

Employers should provide training opportunities for workers to use AI systems. Retraining and reallocation of workers displaced by AI should be prioritized when feasible.

8. Responsible Use of Worker Data

Data collection should be limited to legitimate business purposes and protected from threats. Employers should secure worker consent before sharing data outside the organization.

Implementing AI responsibly

Bradford J. Kelley, a Shareholder at Littler, stressed the importance of implementing safeguards that align with the DOL’s Principles and Best Practices as employers consider adopting AI technologies.

“Emphasis should be placed on actively engaging with employees about AI usage, conducting audits of AI systems used in employment decisions, assessing how AI systems will enhance employee well-being, and minimizing potential negative impacts on the workforce,” he said.

Kelley, who was formerly a senior policy advisor at the DOL’s Wage and Hour Division (WHD), noted that “[t]he DOL guidance is flexible and organizations should adjust the principles based on their particular needs.”

“A proactive approach to AI in the workplace can help employers effectively navigate the rapidly evolving regulatory landscape,” he added, stressing the need to stay ahead of potential legal and ethical challenges associated with AI implementation.

State legislation governing AI

In May 2024, Colorado Governor Jared Polis signed into law legislation regulating employers’ use of AI for consequential decisions, including decisions about employment or employment opportunities. He signed the bill with reservations, calling for improvements before implementation and urging federal legislation to preempt the state measure. The law takes effect on February 1, 2026.

In August 2024, Illinois Governor JB Pritzker signed legislation into law that amends the state’s Human Rights Act to include new provisions related to AI in employment practices. Effective January 1, 2026, employers will be prohibited from using AI in ways that discriminate against protected classes or from using zip codes as a proxy for protected classes in various employment decisions. Employers will also be required to notify employees when AI is being used for these purposes.

Other states may follow with similar measures to regulate the use of AI in the workplace. The California Civil Rights Council, for example, proposed regulations in May 2024 that would clarify how existing anti-discrimination laws apply to the use of AI, algorithms, and other automated tools in employment practices.
