Figure 10: The relation between AI threats and security controls
Security testing

In 2022, the ETSI working group on AI published a draft of Security Testing of AI 61 . This report identifies methods and techniques that are appropriate for the security testing of ML-based components. The scope of the report covers the following elements.
• Security testing approaches for AI (used to generate test cases that are executed against the ML component).
• Security test oracles for AI (these enable the calculation of a test verdict to determine whether a test case has passed, i.e. no vulnerability has been detected, or failed, i.e. a vulnerability has been identified); a minimal sketch of such an oracle is given after this subsection.
• Definitions of test adequacy criteria for security testing (used to determine the overall progress, and which can be employed to specify a stop condition for security testing).

According to the report, security testing of AI does not end at the component level. As with traditional software, the integration of an ML component with the other components of a system needs to be tested as well. Security testing of AI shares some commonalities with the security testing of traditional systems, but it also poses new challenges and requires different approaches, due to:
• significant differences between subsymbolic AI and traditional systems, which have strong implications for their security and for how their security properties are tested;
• non-determinism that may result from self-learning, i.e. AI-based systems may evolve over time and, as a consequence, their security properties may degrade;
• the test oracle problem, where assigning a test verdict is different and more difficult for AI-based systems, since not all expected results are known a priori;
• data-driven algorithms, where, in contrast to traditional systems, (training) data shapes the behaviour of subsymbolic AI.

61 https://portal.etsi.org/webapp/WorkProgram/Report_WorkItem.asp?WKI_ID=58860
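To make the notions of test case generation, test oracle and test adequacy criterion concrete, the sketch below (in Python, using NumPy) shows one possible shape of such a test for an ML classifier: test cases are generated by sampling perturbations within an L-infinity budget, and an oracle assigns a pass/fail verdict in the sense used above. The function and parameter names (robustness_oracle, epsilon, trials) and the random-sampling strategy are illustrative assumptions, not taken from the ETSI report.

import numpy as np

def robustness_oracle(model, x, epsilon=0.05, trials=100, seed=0):
    """Test oracle: 'passed' if the predicted class is stable under all
    sampled L-infinity perturbations within epsilon, 'failed' otherwise
    (a label flip indicates a potential evasion vulnerability)."""
    rng = np.random.default_rng(seed)
    reference = np.argmax(model(x))
    # The fixed number of trials acts as a crude test adequacy
    # criterion, i.e. a stop condition for the test run.
    for _ in range(trials):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        perturbed = np.clip(x + delta, 0.0, 1.0)
        if np.argmax(model(perturbed)) != reference:
            return 'failed'   # a vulnerability has been identified
    return 'passed'           # no vulnerability has been detected

# Toy stand-in for the ML component (a linear scorer), used only to make
# the sketch executable end to end.
weights = np.array([[1.0, -1.0], [0.5, 0.5]])
model = lambda x: weights @ x

print(robustness_oracle(model, np.array([0.6, 0.4])))   # prints 'passed'

In practice the random search would typically be replaced by a stronger, gradient-based or transfer attack, and the resulting verdicts would feed into the integration-level testing discussed above.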
AI-related standards

Several initiatives are underway to provide standards and specific guidelines for AI security and trustworthiness: ISO/IEC is working on risk management, trustworthiness and management systems; ETSI provides an AI threat ontology and data supply chain security, among other items; and IEEE is working on AI explainability. This section presents a list of the currently available standards and initiatives; the reader can also find a list of AI-related standards in Annex II.

Ethical and trustworthy AI

Besides the standardisation organisations, other groups are also working on guidelines for ethical and trustworthy AI. The following list shows some examples that we identified during our desktop research: