What about outside the EU?
The GDPR, the EU’s data protection regulation, is the bloc’s most famous technology export and has been copied everywhere from California to India.
The EU's approach to AI, which targets the riskiest applications, is one that most developed countries broadly share. If Europe succeeds in creating a coherent way to regulate the technology, it could serve as a template for other countries hoping to do the same.
“American companies complying with the EU AI Act will eventually raise their standards for American consumers as well, in terms of transparency and accountability,” said Marc Rotenberg, who heads the Center for AI and Digital Policy, a non-profit organization that tracks AI policy.
The bill is also being closely watched by the Biden administration. The United States is home to some of the world’s largest artificial intelligence labs, such as Google AI, Meta and OpenAI, and leads many global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. So far, influential US government figures such as National Security Adviser Jake Sullivan, Commerce Secretary Gina Raimondo and Lynne Parker, who leads the White House’s artificial intelligence efforts, have welcomed Europe’s push to regulate AI.
“This is in stark contrast to how the United States viewed the development of the GDPR, which at the time people in the US said would end the internet, eclipse the sun and end life on the planet as we know it,” says Rotenberg.
Despite some unavoidable caution, the United States has good reason to welcome the legislation. It is deeply worried about China’s growing influence in technology. For America, the official position is that maintaining Western dominance in technology is a question of whether “democratic values” prevail. It wants to keep the EU, an ally, close.
What are the biggest challenges?
Some of the bill’s requirements are technically impossible to comply with at the moment. The first draft requires that datasets be error-free and that people be able to “fully understand” how artificial intelligence systems work. The datasets used to train AI systems are vast, and checking them for errors would require thousands of hours of human work, if verifying such a thing were possible at all. And today’s neural networks are so complex that even their creators do not fully understand how they reach their conclusions.
Technology companies are also deeply uneasy about the requirement to give external auditors or regulators access to their source code and algorithms in order to enforce the law.