Dive Brief:
- Employers should implement artificial intelligence technologies transparently and with the “genuine input” of workers and their representatives, the U.S. Department of Labor said in a guidance document published Thursday.
- DOL included these items in a set of eight “AI Principles for Developers and Employers.” The agency said the guidance would create a road map for balancing AI’s potential benefits for businesses and workers against the need to protect workers from potential harm.
- DOL specifically recommended that AI be designed, developed and trained in a way that protects workers and that employers maintain “clear governance systems, procedures, human oversight, and evaluation processes” for workplace AI. The agency also cautioned that AI should not violate workers’ rights, including the right to organize, and that any data collected, used or created by AI systems should be limited in scope and location.
Dive Insight:
The guidance document is a direct follow-up to President Joe Biden’s 2023 executive order directing federal agencies to develop AI “principles and best practices” specifically targeted at mitigating potential harms and maximizing benefits for workers. DOL confirmed the link between the two documents in a press release Tuesday.
“Workers must be at the heart of our nation’s approach to AI technology development and use,” Acting Secretary of Labor Julie Su said in the release. “These principles announced today reflect the Biden-Harris administration’s belief that, in addition to complying with existing laws, artificial intelligence should also enhance the quality of work and life for all workers.”
The guidance drew, at least in part, from the input of labor unions. In an email to HR Dive, the Retail, Wholesale and Department Store Union said it “contributed to the considerations” of DOL’s principles and said the guidance provides “a useful framework for understanding AI and the scope of its impact on working people and guiding solutions for reducing its impact.”
DOL said it also collected input from workers, researchers, academics, employers and developers through public listening sessions in advance of issuing the guidance.
A recent Littler Mendelson survey of U.S. employers showed that while a sizable share used generative AI for HR processes, roughly as many had not. Compliance risks were a key concern for respondents, the law firm said.
While most employers in Littler’s survey said they were not concerned about the potential for AI to displace workers, other organizations — including the Society for Human Resource Management — have cautioned that AI could affect myriad workforce priorities, head counts included.
So far, regulation of AI in the employment context has come courtesy of several state and local hiring laws as well as a few federal efforts. Just weeks ago, DOL issued separate guidance detailing AI’s potential interaction with laws such as the Family and Medical Leave Act and the Fair Labor Standards Act. In 2022, the Biden administration published a blueprint for addressing bias and discrimination by automated technologies including AI.