The rapid development and ubiquitous use of artificial intelligence (AI) pose significant security risks, and protective measures must be built into systems from the start rather than bolted on after vulnerabilities emerge, a US government official said last week.

According to Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA) at the US Department of Homeland Security (DHS), AI is too powerful and moving too fast to follow the traditional approach of shipping technology products with vulnerabilities that consumers are then expected to patch.
To address these rising threats, government agencies from 18 countries, including the United States, adopted new recommendations on AI cybersecurity developed by the UK's National Cyber Security Centre (NCSC). The guidelines cover secure design, development, deployment, and maintenance, and emphasise the importance of considering security throughout the lifecycle of AI capabilities.

Why does it matter?

Last month, prominent AI firms agreed to work with governments to test new frontier models before they are launched, helping to mitigate the risks associated with the fast-developing technology.
In addition, CISA has released its roadmap for the responsible development of AI tools as part of an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities.
All these efforts are converging to mitigate the significant emerging threats and risks related to the rapid expansion of AI models and applications in various sectors, including healthcare, media, education, and national security.
