In October 2023, President Biden issued an Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” directing federal agencies to issue guidance on the growing use of Artificial Intelligence (“AI”). With respect to AI in the workplace, the Order directed the U.S. Secretary of Labor to issue guidance making clear that employers deploying AI to monitor or augment employees’ work must continue to comply with federal worker protections.
On April 29, 2024, the United States Department of Labor issued Field Assistance Bulletin No. 2024-1 (the “Bulletin”) addressing the use of artificial intelligence and automated systems in the workplace. Specifically, the Bulletin discusses employers’ use of AI and its impact on compliance with the Fair Labor Standards Act (“FLSA”), the Family and Medical Leave Act (“FMLA”), the Employee Polygraph Protection Act, and the Providing Urgent Maternal Protections for Nursing Mothers Act. In short, the Bulletin explains that federal labor standards continue to apply to employers using AI or other automated systems and that employers must exercise responsible human oversight to ensure compliance.
The Bulletin recognizes that the use of AI may lead to issues under the FLSA, such as the undercounting of hours worked or the improper calculation of overtime. The Bulletin also identifies potential issues that could arise under other statutes, such as the FMLA, when automated systems demand that employees provide more information than the law requires.
The Bulletin explains that AI systems that analyze employee eye movements, body posture, or voice to determine whether employees are being truthful remain prohibited by the Employee Polygraph Protection Act. Finally, the Bulletin notes that using AI to monitor and/or penalize employees who engage in protected activity could constitute unlawful retaliation.
On May 16, 2024, the Department of Labor issued additional guidance for employers entitled “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers” (the “DOL AI Principles and Best Practices”). The guidance sets out the following principles and best practices for developers and employers:
- Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.
- Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.
- Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.
- Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.
- Protecting Labor and Employment Rights: AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.
- Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.
- Supporting Workers Impacted by AI: Employers should support and upskill workers during job transitions related to AI.
- Ensuring Responsible Use of Worker Data: Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.
Further and more detailed guidance from the Department of Labor is expected in the future as AI and other automated systems are adopted more widely.