The EU Agency for Cybersecurity (ENISA) has published a report mapping the artificial intelligence (AI) cybersecurity ecosystem and its threat landscape. Noting that AI systems may be tampered with to manipulate expected outcomes, the report underlines that, first and foremost, it is essential to secure AI itself. To that end, it is important to: (a) understand what needs to be secured; (b) understand the related data governance models; (c) manage threats in the multi-party ecosystem comprehensively, using shared models and taxonomies; and (d) develop specific controls to ensure that AI itself is secure.

According to ENISA, a common understanding of relevant AI cybersecurity threats will be key to the widespread deployment and acceptance of AI systems and applications. The agency therefore recommends developing an AI toolbox with concrete mitigation measures for AI threats in areas such as integrity, confidentiality, and privacy. It also notes the need to develop control measures for a variety of threats to AI and to undertake further research to foster more robust systems and solutions. Another recommendation focuses on fostering cross-border and cross-industry relationships, as well as public-private partnerships, to secure the diverse assets of the AI ecosystem and lifecycle. Finally, the report highlights that an EU secure AI ecosystem should place cybersecurity and data protection at the forefront and foster relevant innovation, capacity building, awareness raising, and research and development initiatives.