Cyber & IT Supervisory Forum - Additional Resources

A multilayer framework for good cybersecurity practices for AI, June 2023

Figure 5: Adversary characterisation

Adversarial threats 21 are mainly caused by people with a deliberate intention to cause harm, typically referred to as attackers or adversaries. In the literature, cyber threat actor lists and taxonomies are still being developed, and most of them identify the following intentional threat actors: insider attackers, cyber terrorists, hacktivists / civil activists, organised cybercriminals, script kiddies, state-sponsored attackers, commercial industrial espionage agents, cyberwarriors / individual cyber fighters, cyber vandals and black hat hackers. There is still no universally accepted standard for an attacker taxonomy, and new definitions and proposed taxonomies continue to emerge; in 2021, ENISA defined 11 attacker types 22 by consolidating, refining and improving previous taxonomies. These reflect the current threat landscape and can be mapped to other taxonomies in use by Member States and EU bodies. Attackers target the ICT infrastructures hosting AI systems/products, or the AI systems themselves at any stage of their life cycle.

Cybersecurity certification

Cybersecurity certification under the EU's Cybersecurity Act (CSA) 23 is intended to increase trust and security for European consumers and businesses in ICT products, including those using AI technologies. The main standard for certification is ISO/IEC 15408 24 (in particular the Common Criteria – CC 25), which establishes the principles for ICT security assessment, while ISO/IEC 18045 26 provides a methodology to help an evaluator conduct a CC evaluation by defining the minimum actions required. These standards have been implemented in various methodologies (e.g. ETSI-TVRA 27, ENISA-RCA 28, CYRENE 29) that AI stakeholders can use to evaluate ICT products. Such methodologies can be used to evaluate the security of ICT assets hosting AI components, for example a server that hosts AI models or a supply-chain service in which AI assets participate during the provision of the service.
Evaluation methodologies for identifying security requirements for the development of certification schemes for AI products are not yet available. Additional research efforts are needed to evaluate the security of AI systems, given their dynamic nature.

Cybersecurity legislation and policies

21 Source: ENISA, Methodology for Sectoral Cybersecurity Assessments, 2021, https://www.enisa.europa.eu/publications/methodology-for-a-sectoral-cybersecurity-assessment.
22 See footnote (17).
23 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), https://eur-lex.europa.eu/eli/reg/2019/881/oj.

24 https://www.iso.org/standard/72891.html.
25 https://www.commoncriteriaportal.org/.
26 https://www.iso.org/standard/72889.html.

27 https://www.etsi.org/deliver/etsi_ts/102100_102199/10216501/05.02.03_60/ts_10216501v050203p.pdf.
28 https://www.enisa.europa.eu/publications/methodology-for-a-sectoral-cybersecurity-assessment.
29 CYRENE was an EU Horizon 2020 project, see https://www.cyrene.eu, accessed on 14 May 2021.

