OpenAI, Google, Anthropic and other generative AI companies continue to improve their technology and release ever more capable large language models. To create a common approach to independently assessing the safety of these models as they are released, the UK and US governments have signed a memorandum of understanding. Together, the UK's AI Safety Institute and its US counterpart, which was announced by Vice President Kamala Harris but is not yet operational, will develop test suites to assess the risks and ensure the safety of "the most advanced models of artificial intelligence."

They plan to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals appears to be jointly testing a publicly available model. UK Science Minister Michelle Donelan, who signed the agreement, told the Financial Times that they "really need to move fast," as they expect a new generation of AI models to emerge next year. They believe these models could be "totally game-changing," and they don't yet know what the models might be capable of.

According to the Times, this partnership is the world's first bilateral AI safety agreement, although both the US and the UK intend to team up with other countries in the future. "AI is the defining technology of our generation. This partnership will accelerate the work of our two institutes across the full spectrum of risks, whether for our national security or our broader society," said US Commerce Secretary Gina Raimondo. "Our partnership makes it clear that we are not running away from these concerns – we are confronting them. Through our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidelines."

While this particular partnership is focused on testing and evaluation, governments around the world are also creating regulations to keep AI tools in check. In March, the White House issued an executive order aimed at ensuring that federal agencies only use AI tools that "do not endanger the rights and safety of the American people." A few weeks earlier, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban "AI that manipulates human behavior or exploits people's vulnerabilities" and "biometric categorization systems based on sensitive characteristics," as well as the untargeted scraping of facial images from CCTV footage and the internet to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos and audio will have to be clearly labeled as such under its rules.