Dive Brief:
- U.S. public companies and other entities would face stiff penalties for failing to disclose workforce effects of artificial intelligence under a sweeping draft proposal on the technology unveiled Wednesday by Sen. Marsha Blackburn, R‑Tenn.
- Under the draft bill, covered entities would be required to disclose AI-related layoffs, role changes, and other job impacts to the Department of Labor on a quarterly basis. Failure to comply could trigger civil penalties of up to $1 million per violation, and both the government and private individuals could initiate lawsuits to enforce the reporting requirements.
- The effort is intended to advance President Donald Trump’s push for comprehensive federal AI legislation. “Now, Congress must answer his call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance,” Blackburn said in a press release.
Dive Insight:
The announcement marks a shift from the high-level AI policy framework that Blackburn outlined in December to a formal legislative discussion draft with specific statutory language.
The senator’s latest step comes just over three months after Trump signed an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.
The order called for the administration to work with Congress “to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.”
Under Blackburn’s proposal, the Labor Department would be required to prepare quarterly reports on AI’s workforce effects based on information disclosed by covered entities, including public companies and selected non-public firms. Courts could impose injunctive relief, compelling companies to correct or complete their disclosures, and prevailing plaintiffs would be entitled to recover reasonable attorneys’ fees.
The push for greater transparency and reporting on how AI is impacting workers comes at a time when the issue is gaining increased national attention. Federal Reserve Governor Lisa Cook said last month that AI may be driving “the most significant reorganization of work in generations.”
“This transition could create new opportunities, but it could also come with some costs,” she said.
Blackburn’s proposed bill would also:
- require the Federal Trade Commission to craft rules establishing “minimum reasonable” AI safeguards;
- empower the U.S. attorney general, state attorneys general and private actors to bring “unreasonably dangerous or defective” product claims holding AI system developers liable for harms caused by their systems;
- require large, cutting-edge AI developers to implement protocols to manage and mitigate “catastrophic” risks related to their systems and file regular reports with the Department of Homeland Security; and
- hold platforms liable for hosting an unauthorized digital replica of an individual if the platform has actual knowledge that the replica was not authorized by the person depicted.
The legislation would preempt state laws with conflicting AI requirements, while allowing states to enact rules offering greater protection for minors.