
CYBERSECURITY OF AI AND STANDARDISATION

1. INTRODUCTION

1.1 DOCUMENT PURPOSE AND OBJECTIVES

The overall objective of the present document is to provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of artificial intelligence (AI), assess their coverage and identify gaps in standardisation. The report is intended to contribute to the activities preparatory to the implementation of the proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final) (the draft AI Act) on aspects relevant to cybersecurity.

1.2 TARGET AUDIENCE AND PREREQUISITES

The target audience of this report includes a number of different stakeholders concerned with the cybersecurity of AI and standardisation.

The primary addressees of this report are standards-developing organisations (SDOs) and public sector / government bodies dealing with the regulation of AI technologies.

The ambition of the report is to be a useful tool that can inform a broader set of stakeholders of the role of standards in helping to address cybersecurity issues, in particular:

• academia and the research community;
• the AI technical community, AI cybersecurity experts and AI experts (designers, developers, machine learning (ML) experts, data scientists, etc.) with an interest in developing secure solutions and in integrating security and privacy by design in their solutions;
• businesses (including small and medium-sized enterprises) that make use of AI solutions and/or are engaged in cybersecurity, including operators of essential services.

The reader is expected to have a degree of familiarity with software development, with the confidentiality, integrity and availability (CIA) security model, and with the techniques of both vulnerability analysis and risk analysis.

1.3 STRUCTURE OF THE STUDY

The report is structured as follows:

• definition of the perimeter of the analysis (Chapter 2): introduction to the concepts of AI and the cybersecurity of AI;
• inventory of standardisation activities relevant to the cybersecurity of AI (Chapter 3): overview of standardisation activities (both AI-specific and non-AI-specific) supporting the cybersecurity of AI;
• analysis of coverage (Chapter 4): analysis of the coverage of the most relevant standards identified in Chapter 3 with respect to the CIA security model and to trustworthiness characteristics supporting cybersecurity;
• wrap-up and conclusions (Chapter 5): building on the previous chapters, recommendations on actions to ensure standardisation support for the cybersecurity of AI, and on preparation for the implementation of the draft AI Act.

