The European Parliament has passed sweeping legislation to regulate artificial intelligence, nearly three years after the draft rules were first proposed. Officials reached a provisional agreement in December. On Wednesday, members of parliament approved the AI Act by 523 votes to 46, with 49 abstentions.

The EU says the regulations seek to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while stimulating innovation and establishing Europe as a leader in this field”. The law sets obligations for AI applications based on potential risks and impacts.

The legislation has not yet become law. It is still subject to checks by lawyer-linguists, and the European Council needs to formally endorse it. But the AI Act is likely to come into force before the end of the legislative term, ahead of the next parliamentary elections in early June.

Most of the provisions will take effect 24 months after the AI Act becomes law, but bans on prohibited applications will apply after six months. The EU prohibits practices that it believes will endanger citizens’ rights. “Biometric categorization systems based on sensitive characteristics” will be banned, as will the “untargeted scraping” of facial images from CCTV footage and the web to create facial recognition databases. Clearview AI’s activity would fall into this category.

Other applications that will be banned include emotion recognition in schools and workplaces, and “AI that manipulates human behavior or exploits human vulnerability.” Certain aspects of predictive policing will be prohibited, i.e. when it is based entirely on assessing someone’s characteristics (such as inferring their sexual orientation or political views) or on profiling them. Although the AI Act generally prohibits the use of biometric identification systems by law enforcement agencies, it will be permitted in certain circumstances with prior authorization, such as to help locate a missing person or prevent a terrorist attack.

Applications that are considered high-risk — including the use of AI in law enforcement and healthcare — are subject to certain obligations. They must not discriminate and must respect privacy rules. Developers must also demonstrate that the systems are transparent, safe and explainable to users. As for AI systems that the EU considers low-risk (such as spam filters), developers must still inform users that they are interacting with AI-generated content.

The law also sets rules for generative AI and manipulated media. Deepfakes and all other AI-generated images, videos and audio will need to be clearly labeled. AI models will also have to comply with copyright laws. “Rightholders may choose to retain their rights in their works or other subject matter to prevent the extraction of texts and data, except for the purposes of scientific research,” reads the text of the AI Act. “Where opt-out rights are appropriately reserved, providers of general purpose AI models must obtain permission from rights holders if they wish to perform text and data mining on such works.” However, AI models created only for research, development and prototyping are exempt.

The most powerful general purpose and generative AI models (those trained with a total computing power of more than 10^25 FLOPs) are deemed to pose systemic risks under the rules. The threshold may be adjusted over time, but OpenAI’s GPT-4 and DeepMind’s Gemini are believed to fall into this category.
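The systemic-risk test described above is a single numeric cutoff on total training compute. A minimal sketch, assuming a provider already knows its model's total training FLOPs (the helper name and example figures are illustrative, not from the Act's text):

```python
# Threshold named in the article: 10^25 FLOPs of total training compute.
# The Act allows this figure to be adjusted over time.
SYSTEMIC_RISK_FLOPS = 1e25

def poses_systemic_risk(training_flops: float) -> bool:
    """Return True if total training compute exceeds the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Hypothetical figures for illustration only:
print(poses_systemic_risk(5e25))   # a frontier-scale training run
print(poses_systemic_risk(1e24))   # an order of magnitude below the cutoff
```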

Providers of such models will need to assess and mitigate risks, report serious incidents, provide details of their systems’ energy consumption, ensure they meet cybersecurity standards, and perform state-of-the-art model testing and evaluation.

As with other EU regulations targeting technology, the penalties for breaching the provisions of the AI Act can be high. Companies that violate the rules will be subject to fines of up to 35 million euros ($51.6 million) or up to seven percent of their global annual turnover, whichever is higher.
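The "whichever is higher" rule means the ceiling scales with company size. A short sketch of that arithmetic, using the figures from the article (illustrative only, not legal guidance):

```python
FLAT_CAP_EUR = 35_000_000   # flat ceiling from the article
TURNOVER_RATE = 0.07        # seven percent of global annual turnover

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Fine ceiling: the higher of the flat cap or 7% of turnover."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# For a hypothetical company with EUR 1 billion in turnover,
# 7% (EUR 70 million) exceeds the flat EUR 35 million cap.
print(max_fine_eur(1_000_000_000))
```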

The AI Act applies to every model operating in the EU, so US-based AI providers will have to comply, at least in Europe. Sam Altman, CEO of ChatGPT maker OpenAI, suggested last May that his company might pull out of Europe if the AI Act became law, but the company later said it had no plans to do so.

To enforce the law, each member state will create its own AI watchdog, and the European Commission will set up an AI Office. The office will develop methods for evaluating general purpose models and monitoring their risks. Providers of general purpose models deemed to pose systemic risks will be asked to work with the office to develop codes of conduct.
