
A MULTILAYER FRAMEWORK FOR GOOD CYBERSECURITY PRACTICES FOR AI June 2023

From the lab to the market

According to the Coordinated Plan on AI, supporting AI research and innovation related to threats and attacks on AI, and offering solutions for testing promising AI solutions, is critical to ensuring that cybersecurity obligations are met as developments move from the lab to the market:

7. What type of support (funding/scholarships/collaboration opportunities) do you offer to increase the cybersecurity capabilities of newly innovative solutions that rely on AI?
8. What means do you use to support SMEs/MEs in securing their AI products?
9. Do you have or promote testing environments, such as sandboxes/cyber ranges/simulation platforms, to test and evaluate AI vulnerabilities before market release? How?
10. Are there specific measurements/KPIs/metrics that AI stakeholders are required to use?
11. Have you informed national AI stakeholders about the cybersecurity requirements set by the NCAs for their AI products, and how do you do it?
12. How do you monitor whether such requirements have been met?
13. How do you inform national stakeholders about the relevant legal instruments and standards available (e.g. regulatory sandboxes)?
14. Have you developed, or do you plan to develop, national incident management or handling procedures that consider AI?
15. Are there national initiatives that focus on sharing threat intelligence (AI threats, vulnerabilities and security controls) with the users/community?
16. Is there collaboration with the national CSIRTs/CERTs and ISACs for the efficient handling of AI-related incidents?
17. Do you promote or provide information about new initiatives on AI security and vulnerability sharing, such as a catalogue of pointers to initiatives (e.g. the NIST AI framework)?
18. Have you developed appropriate collaboration with the national AI stakeholders for information sharing?

According to the AI Act (but also under the NIS and NIS 2 Directives), AI providers will be obliged, among other aspects, to inform NCAs about serious incidents or breaches as soon as they become aware of them, as well as about any recalls or withdrawals of AI systems from the market. NCAs will then collect all the necessary information and investigate the incidents/malfunctions.
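As an illustration of the kind of information such a notification might carry, the minimal Python sketch below models a serious-incident report. The class and field names (AIIncidentReport, provider_name, detected_at, and so on) are assumptions made for illustration only; neither the AI Act nor the NIS Directives prescribe this schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIIncidentReport:
    """Hypothetical structure for a serious-incident notification to an NCA.

    All field names are illustrative assumptions; no concrete schema
    is prescribed by the AI Act or the NIS Directives.
    """
    provider_name: str
    ai_system_id: str
    incident_description: str
    detected_at: str  # ISO 8601 timestamp of when the provider became aware
    is_breach: bool = False
    recall_or_withdrawal: bool = False
    corrective_actions: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the report for submission to the competent authority."""
        return json.dumps(asdict(self), indent=2)


# Example: a provider becomes aware of a serious incident and prepares
# the notification as soon as possible after detection.
report = AIIncidentReport(
    provider_name="ExampleAI Ltd",
    ai_system_id="credit-scoring-v2",
    incident_description="Model outage caused erroneous high-risk scores.",
    detected_at=datetime.now(timezone.utc).isoformat(),
    recall_or_withdrawal=True,
    corrective_actions=["Rolled back to previous model version"],
)
print(report.to_json())
```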

Networking

High-risk AI systems should perform consistently throughout their life cycle and meet an appropriate level of cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and the accuracy metrics should be communicated to the users (recital 49 of the proposed AI Act).

19. Have you defined/developed/used specific cyber measurements/metrics at the national level that AI stakeholders are required to use?
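To make the notion of accuracy metrics concrete, the minimal sketch below computes a few standard metrics for a hypothetical binary classifier using scikit-learn. The metric functions shown are real scikit-learn APIs, while the labels, predictions and reporting format are illustrative assumptions.

```python
# Minimal sketch: computing accuracy metrics that could accompany a
# high-risk AI system's user-facing documentation. Assumes a binary
# classifier evaluated on a held-out test set.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}

# These values would then be communicated to users, e.g. in the
# instructions for use accompanying the system.
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```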

