The US Commerce Department has proposed new rules that would require developers of advanced AI and cloud computing providers to report their activities to the government. The proposal aims to ensure that cutting-edge AI technologies are safe and secure, particularly against cyberattacks.

It also mandates detailed reporting on cybersecurity measures and the results of ‘red-teaming’ efforts, where systems are tested for vulnerabilities, including potential misuse for cyberattacks or the development of dangerous weapons.

The move comes as AI, especially generative models, has sparked excitement and concern, with fears over job displacement, election interference, and catastrophic risks. Under the proposal, the collected data would help the US government enforce safety standards and protect against threats from foreign adversaries.

Why does this matter?

The regulatory push follows President Biden’s 2023 executive order requiring AI developers to share safety test results with the government before releasing certain systems to the public. The new rules come amid stalled legislative action on AI and are part of broader efforts to limit the use of US technology by foreign powers, particularly China.
