
A multilayer framework for good cybersecurity practices for AI June 2023

• Infrastructure (digital and telecom/data). This area highlights policies for the development of ethical guidelines, legislative reforms and (international) standardisation.
• Regulation (NLF legislation and trustworthy frameworks / AI standards / compliance with the GDPR). The focus here is on policies for the development of ethical guidelines, legislative reforms and (international) standardisation.

In Annex I, Table 2 provides a detailed overview of the policy areas under consideration, the associated types of public and private initiatives and their relation to cybersecurity in AI.

3.2. SURVEY ANALYSIS

The survey was distributed to NCAs that deal with cybersecurity and/or AI. We received 10 responses, which are analysed below. The survey contained 30 questions, of which 14 were mandatory, organised within the five policy areas mentioned above. The mandatory questions are marked with (M).

Human capital

Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts by malicious third parties to alter their use, behaviour or performance, or to compromise their security properties, by exploiting the systems' vulnerabilities. Raising practical skills and capabilities for handling emerging AI cyber threats and challenges is important for the future development of AI systems.

(1) Have you built / do you plan to build synergies with educational authorities/institutions to increase AI cybersecurity capabilities at all levels of education? If yes, please elaborate. (M)

At this level, most of the MS mention formal and informal collaboration between different entities, such as universities, computer societies, centres for cybersecurity and legal authorities, on promoting cybersecurity capabilities. However, most of them do not yet cover AI-specific security topics. On AI-specific security, one MS mentioned an AI observatory that is part of the national strategy for AI and will bring together all levels of education on AI, including security. Another MS reported on studies conducted in collaboration with universities that analyse the legal aspects of AI and its cybersecurity implications, in particular the use of AI to carry out cyberattacks.

(2) Do you offer awareness raising campaigns about the secure development and use of AI solutions? If yes, please elaborate. (M)

The MS recognise AI as an emerging and disruptive technology. They encourage public discussion on AI security and support the promotion of safe, trustworthy and democratic AI that respects EU principles and the rights and welfare of humans. No specific campaigns have been conducted, but two MS mentioned regularly publishing studies and white papers on this topic. Another MS has already published guidance on the security of AI, including a label for digitally responsible businesses that covers cybersecurity, privacy and trustworthy AI.

(3) Do you provide guidance and best practices on how to improve AI security? If yes, please elaborate. (M)

Four MS mentioned related measures, such as: (i) the collection of information on successful examples and best practices of using AI in both the private and public sectors, as well as information on the impact of AI activities on the fundamental rights of natural persons; and (ii) the publication of rules for governments on how to maintain and develop emerging and disruptive technologies without disrupting national security. More thorough initiatives were also reported: two MS provide guidance on security and risk management measures across the AI life cycle; in one of these MS the guidance was also disseminated as a webinar that reached over 500 people. One MS supports organisations by providing a self-assessment tool for AI security.

(4) Do you consider AI cybersecurity in the syllabuses of courses dedicated to AI or to cybersecurity? If yes, please elaborate.

