Major global technology companies, including Google, DeepMind, Meta, Microsoft, and OpenAI, are urging the United Kingdom to expedite safety tests on their AI software, seeking clarity on the testing process and outcomes. These companies voluntarily agreed to participate in tests conducted by the newly established AI Safety Institute (AISI) following the AI Safety Summit as part of the UK’s effort to lead in AI regulation. The tech companies want to know how long the tests will take and what actions will be taken if the AISI identifies flaws in their software.
AISI’s chair, Ian Hogarth, defended the testing approach, stating that ‘companies agreed that governments should test their models before they are released’ and that ‘the AI Safety Institute is putting that into practice.’
AISI, backed by the government, is crucial to Prime Minister Rishi Sunak’s ambition for the UK to play a central role in addressing existential risks associated with AI. By testing both existing and unreleased AI models, such as Google’s Gemini Ultra, the AISI focuses on the risks associated with AI misuse, especially in cybersecurity and bioweapon construction.
Leveraging expertise from the National Cyber Security Centre, AISI allocated £1 million for capabilities to test ‘jailbreaking’ and ‘spear-phishing,’ aiming to manipulate AI chatbots for information theft or malware dissemination. Additionally, AISI is involved in ‘reverse engineering automation’ to analyse the details of the source code, employing automated processes to unveil the functionality and structure embedded within the codebase.
The UK government disclosed long-term plans for AI regulation and expressed commitment to address issues related to AI breakthroughs after industry engagement. The ongoing dispute highlights the challenges of relying on voluntary agreements for regulating technology development, prompting government consideration of future binding requirements for AI developers.