Department of Labor releases AI best practices for employers

By: Paige Gross - October 19, 2024 7:29 am

A new best practices guide from the U.S. Department of Labor outlines how companies should develop and use AI and protect their employees while doing so. (Photo by Tierney L. Cross/Getty Images)

The U.S. Department of Labor released a list of artificial intelligence best practices for developers and employers this week, aiming to help employers benefit from potential time and cost savings of AI, while protecting workers from discrimination and job displacement.

The voluntary guidelines come about a year after President Joe Biden signed an executive order to assess the innovative potential and risks of AI across the government and private sectors. The order directed the creation of the White House AI Council, a privacy-protection framework for federal agencies to follow, and guidelines for securing AI talent, navigating the effects on the labor market and ensuring equity in AI use, among other measures.

“Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” Biden said of the executive order last year. “This endeavor demands a society-wide effort that includes government, the private sector, academia and civil society.”

The DOL’s guide, “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers,” was developed with input from public listening sessions and from workers, unions, researchers, academics, employers and developers. It aims to mitigate the risks of discrimination, data breaches and job replacement by AI, while embracing the technology’s potential for innovation and productivity.

“Whether AI in the workplace creates harm for workers and deepens inequality or supports workers and unleashes expansive opportunity depends (in large part) on the decisions we make,” DOL Acting Secretary Julie Su said. “The stakes are high.”

The report shares eight principles and best practices, with a “north star” of centering workers. The guide says workers, especially those from underserved communities, should understand and have input in the design, development, testing, training, use and oversight of the AI systems used in their workplaces. This will improve job quality and help businesses achieve better outcomes. Unions should bargain in good faith on the use of AI and electronic monitoring in the workplace, it said.

Other best practices include ethically developing AI, with training that protects and takes feedback from workers. Organizations should also have a clear governance system to evaluate AI used in the workplace, and they should be transparent about the AI systems they’re using, the DOL said.

AI systems cannot violate or undermine workers’ rights to organize, or obstruct their health, safety, wage, anti-discrimination and anti-retaliation protections, the department said. Therefore, prior to deployment, employers should audit their AI systems for potential impacts of discrimination on the basis of “race, color, national origin, religion, sex, disability, age, genetic information and other protected bases,” and should make those results public.

The report also outlines how employers can and should help workers with AI. Before implementing an AI tool, employers should consider the impact it will have on job opportunities, and they should be clear about the specific tasks it will perform. Employers that experience productivity gains or increased profits should consider sharing the benefits with their workers, such as through increased wages, improved benefits or training, the DOL said.

The implementation of AI systems has the potential to displace workers, Su said in her summary. To mitigate this, employers should appropriately train their employees to use these systems and, when feasible, reallocate workers who are displaced by AI to other jobs within their organization. Employers should reach out to state and local workforce programs for education and upskilling so their workforce can learn new skills rather than be phased out by technology.

And lastly, employers using AI systems that collect workers’ data should safeguard that data, collect no more data than is absolutely necessary and not share that data outside the business without workers’ freely given consent.

The guidelines outlined by the DOL are not meant to be “a substitute for existing or future federal or state laws and regulations,” the department said, but rather a “guiding framework for businesses” that can be customized with feedback from their workers.

“We should think of AI as a potentially powerful technology for worker well-being, and we should harness our collective human talents to design and use AI with workers as its beneficiaries, not as obstacles to innovation,” Su said.


Paige Gross

Paige Gross is a Philadelphia-based reporter covering the evolving technology industry for States Newsroom. Her coverage includes how Congress and individual states are regulating new and growing technologies, how technology plays a role in our everyday lives and what people ought to know to interact with technology.
