Cyber & IT Supervisory Forum - Additional Resources

A multilayer framework for good cybersecurity practices for AI June 2023

Networking

According to the AI Act (but also to NIS and NIS 2), AI providers will be obliged, among other aspects, to inform NCAs about serious incidents or breaches as soon as they become aware of them, along with any recalls or withdrawals of AI systems from the market. NCAs will then collect all the necessary information and investigate the incidents/malfunctions.

(14) Have you developed/plan to develop national incident management or handling procedures considering AI?

Three MS mentioned that AI-related incidents must be reported following regular cybersecurity incident reporting procedures.

(15) Are there national initiatives with focus on collaboration about threat intelligence (AI threats, vulnerabilities and security controls) to the users/community?

MS recognise AI threats as very significant and highlight the potential use of AI by criminals. Despite not having explicit, dedicated initiatives on the security of AI, three MS mentioned that AI threats are shared through existing mechanisms provided by the national threat intelligence units.

(16) Is there collaboration with the national CSIRTs/CERTs and ISACs for the efficient handling of AI-related incidents?

No MS reported specific mechanisms or entities to handle AI-specific incidents, but three MS (consistent with their answers to the three previous questions) clarified that regular cybersecurity procedures and mechanisms should be used and that information will then be shared with existing ISACs when appropriate. One MS mentioned that, at present, no AI-related incidents had been registered.

(17) Do you promote/inform about new initiatives on AI security and vulnerability sharing, such as a catalogue of pointers to initiatives (e.g. the NIST AI framework)?

One MS mentioned that it is aware of the NIST AI framework but uses its own guidelines and a set of rules on how to maintain and develop emerging and disruptive technologies, including AI, without disrupting national security.

(18) Have you developed appropriate collaboration with the national AI stakeholders for information sharing?

One MS reported being in direct contact with the most important stakeholders, while another MS mentioned that collaboration is just starting and is happening on an informal basis through the Competent Authorities on AI (CA@AI) working group, which was established in 2021. High-risk AI systems should perform consistently throughout their life cycle and meet an appropriate level of cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and the accuracy metrics should be communicated to the users (rule 49 in the explanatory memorandum).

(19) Have you defined/developed/use specific cyber measurements/metrics at national level that AI stakeholders are required to use? If yes, please elaborate.

Nothing specific to AI was reported.

(20) How do you monitor/audit the level of the cybersecurity of the AI systems throughout their life cycle? Please elaborate.

One MS reported the establishment of a national committee for AI ethics and reliability. Another MS mentioned that all public sector organisations, and all medium-sized and large enterprises that operate AI systems, are obliged to maintain a registry with information about their AI systems (an AI systems register), containing the measures taken by the organisation or enterprise to ensure the safe usage and operation of its AI systems.

