Dive Brief:
- EU lawmakers on Wednesday approved sweeping artificial intelligence legislation that is expected to impact businesses globally, as the U.S. and other countries scramble to develop their own rules.
- The newly enacted legislation, dubbed the EU AI Act, establishes comprehensive requirements for AI implementation based on levels of risk, with the possibility of stiff penalties for violators. It was approved by the EU Parliament in a vote of 523-46, with 49 abstentions.
- “The EU AI Act will have far-reaching implications not only for the European market, but also for the global AI landscape,” Steve Chase, vice chair of AI and digital innovation at Big Four accounting firm KPMG, said in an emailed statement. “U.S. companies must ensure they have the right guardrails in place to comply with the EU AI Act and forthcoming regulation, without hitting the brakes on the path to value with generative AI.”
Dive Insight:
AI has grabbed the world’s attention since Microsoft-backed startup OpenAI introduced its ChatGPT tool in November 2022.
The number of AI mentions in S&P Global earnings transcripts spiked six-fold from the first quarter of 2022 to the third quarter of 2023, according to research by Accenture.
Governments across the globe have been grappling with the rapid rise of AI tools and the potential risks they pose, ranging from disinformation to fraud and data privacy hazards.
President Joe Biden raised the issue in his State of the Union speech last week, urging Congress to address the technology’s potential “peril.” He specifically called for a ban on AI-driven voice impersonations.
Under the EU legislation, companies that adopt AI for “high-risk” uses, including in critical infrastructure, employment, and essential private and public services, must take steps to assess and reduce risks; maintain use logs; be transparent and accurate; and ensure human oversight. Citizens will have a right to submit complaints about such AI systems and receive explanations about decisions based on high-risk AI uses that affect their rights.
The new rules also ban certain AI applications that are deemed a threat to citizens’ rights, including “biometric categorization systems based on sensitive characteristics.” In addition, they forbid “emotion recognition” in the workplace and schools, as well as predictive policing when it is based solely on profiling individuals or assessing their characteristics.
Another provision calls for “deepfakes” — AI-manipulated images, videos, or audio content — to be clearly labeled as such.
“This is just one of many different laws that emanate from Brussels but have a global impact,” David Simon, a co-head of the global cybersecurity and data privacy practice at Skadden, Arps, Slate, Meagher & Flom LLP, and one of the leaders of the firm’s AI initiative, said in an interview. “The AI Act will be a game-changer and will likely become the de facto standard, not only for the way your business is regulated, but also the way your customers think about your AI tools.”
The act “will set a standard for trust, accountability and innovation in AI, and policymakers across the U.S. are watching,” Chase said.
For violations of the banned AI applications provision, potential penalties include up to 35 million euros or 7% of a company’s total worldwide annual turnover, whichever is higher.
The law applies to both public and private actors inside and outside the EU as long as the AI system is available in the EU market or its use affects people located in the region.
It is expected to enter into force as soon as May, subject to a final “lawyer-linguist” check and endorsement by the European Council. Implementation will then be staggered, with bans on prohibited practices applying six months after the entry-into-force date and other provisions kicking in later.