
need to be created to support collaboration between data scientists and cybersecurity experts, in order to develop the knowledge needed to advance the security and resilience of AI systems as well as the management of attacks on AI. ENISA's User Manual – European cybersecurity skills framework (ECSF) (97) can be used for this purpose.
• Periodic distribution of the survey to capture the achievements of NCAs in monitoring national AI stakeholders is recommended, in order to accelerate the uptake of good cybersecurity practices and to identify open issues.

The growing use of AI means that its security will become a key challenge for the future. According to ENISA's Top 10 Emerging Cybersecurity Threats for 2030 (98), misuse of AI will become a significant threat: state-sponsored operatives or cybercriminals who attack blockchain technology and produce deepfakes may no longer be confined to fiction. On its own, AI will not solve today's or tomorrow's complex societal, business or security challenges. However, its ability to identify patterns and to learn adaptively in real time as events unfold can accelerate detection, containment and response. It can also help reduce the heavy load on analysts working in security operations centres (SOCs) and enable them to be more proactive. These workers will likely remain in high demand, but AI will change their roles. Finally, as AI- and ML-driven security threats begin to emerge, AI can help security teams prepare for the eventual development of AI-driven cybercrime (99).

For this transformation to take place, experts need a good understanding of both AI's contribution to cybersecurity and the cybersecurity issues in AI itself. This report recommends that stakeholders recognise that AI systems are hosted within their ICT ecosystems and that they need to continue protecting all the layers of those ecosystems (physical, network, IT, data, users) by following traditional good cybersecurity practices (FAICP Layer I). Additional practices are needed due to the dynamic nature of AI (FAICP Layer II) or the security requirements of the environment in which the AI operates (FAICP Layer III). Further research efforts are needed to develop comprehensive complementary practices.

97 ENISA, User Manual – European cybersecurity skills framework (ECSF), 2022, https://www.enisa.europa.eu/publications/european-cybersecurity-skills-framework-ecsf.
98 ENISA, Cybersecurity Threats Fast-Forward 2030, https://www.enisa.europa.eu/news/cybersecurity-threats-fast-forward-2030.
99 Aubley, C., Frank, W., Bowen, E. and Golden, D., 'Cyber AI: Real defense – Augmenting security teams with data and machine intelligence', Deloitte Insights, Deloitte, 2021, https://www2.deloitte.com/us/en/insights/focus/tech-trends/2022/future-of-cybersecurity-and-ai.html.
