The European Union has agreed to one of the world’s first comprehensive artificial intelligence laws, the AI Act.
The legislation sets up a regulatory framework to promote AI development while addressing the risks associated with the rapidly evolving technology.
It bans harmful AI practices considered a clear threat to people’s safety, livelihoods, and rights.
The law comes amid growing fears about the disruptive capabilities of artificial intelligence.
The framework classifies AI uses by risk level, imposing stricter obligations at higher tiers. The riskiest uses are banned outright, including systems that exploit specific vulnerable groups, real-time remote biometric identification by law enforcement in public spaces (subject to narrow exceptions), and artificial intelligence that deploys manipulative “subliminal techniques.”
Limited-risk systems, such as chatbots like OpenAI’s ChatGPT, face new transparency obligations under the law, including informing users that they are interacting with an AI. The EU has also pitched the Act as a launchpad for its startups and researchers in the global AI race.
AI’s disruption reaches beyond big tech: educators, artists, musicians, and the media industry are grappling with AI-fueled imitation and controversies. Some companies behind the technology have experienced their own growing pains; OpenAI’s CEO, Sam Altman, was briefly ousted and then reinstated in November 2023.