Cyber & IT Supervisory Forum - Additional Resources

A multilayer framework for good cybersecurity practices for AI June 2023

and trustworthy AI.

• ALLAI 73 is an independent Dutch organisation dedicated to driving and fostering responsible AI. ALLAI’s vision is responsible AI for a world where AI is developed, deployed and used responsibly, i.e. in a safe and sustainable manner and in line with ethical principles, societal values, existing and new laws and regulations, human rights, democracy and the rule of law.

• The Confederation of Laboratories for Artificial Intelligence Research in Europe 74 seeks to strengthen European excellence in AI research and innovation. The network forms a pan-European Confederation of Laboratories for Artificial Intelligence Research in Europe. Its member groups and organisations are committed to working together towards realising the vision of CLAIRE: European excellence across all of AI, for all of Europe, with a human-centred focus.

• The European Network of Human-centred AI 75 aims to facilitate a European brand of trustworthy, ethical AI that enhances human capabilities and empowers citizens and society to deal effectively with the challenges of an interconnected, globalised world.

• The D-seal initiative 76 is the new labelling programme for IT security and responsible use of data in Denmark. It provides interesting guidelines and criteria on how to combine IT security and responsible use of data in the same label, with AI as one of the criteria.

• AI testing and experimentation facilities have been included by the Commission in the Digital Europe Programme. These are meant to be large-scale reference sites for ‘testing state-of-the-art AI-based soft and hardware solutions and products’ 77 .

New challenges

The security of AI should be considered at all stages of its life cycle, taking into account the following elements.

• AI systems are multi-disciplinary socio-technical systems and their threats are technical, societal, ethical and legal. Collaboration between cybersecurity experts, data scientists, social scientists, psychologists and legal experts is needed in order to identify the continuously evolving AI threat landscape and develop corresponding countermeasures.

• Among the different types of AI described in Section 2.3.3, ML and DL undoubtedly pose the main challenges to security and imply a dynamic analysis of the threats, both along the life cycle and in the interrelations with other blocks of an ICT infrastructure.

• AI-specific risk assessment efforts need to consider the unique properties of AI systems and enhance their robustness, resilience, fairness and explainability, along with preventing loss of transparency, loss of bias management and loss of accountability.

• Assigning a test verdict is different and more difficult for AI-based systems, since not all of the expected results are known a priori.

2.3. LAYER III – SECTOR-SPECIFIC CYBERSECURITY GOOD PRACTICES

AI is a technology that has entered all economic sectors (e.g. automotive, health, maritime, finance). The third layer of the FAICP framework provides additional recommendations and best practices available to address cybersecurity issues in the AI systems used in some of these sectors. While almost every economic sector already relies on AI systems, we have identified below only those sectors for which we managed to find relevant cybersecurity guidelines. Additionally, ENISA’s reports can be used to identify sectoral threats (e.g. 5G, AI, supply chain).

Energy

73 https://allai.nl/.
74 https://claire-ai.org/.
75 https://www.humane-ai.eu/.
76 https://d-seal.eu/.
77 https://digital-strategy.ec.europa.eu/en/activities/testing-and-experimentation-facilities.

