
CYBERSECURITY OF AI AND STANDARDISATION

2. SCOPE OF THE REPORT: DEFINITION OF AI AND CYBERSECURITY OF AI

2.1 ARTIFICIAL INTELLIGENCE

Understanding AI and its scope is the very first step towards defining the cybersecurity of AI. Still, a clear definition and scope of AI have proven elusive. The concept of AI is evolving, and the debate over what it is, and what it is not, remains largely unresolved, partly owing to the influence of marketing behind the term ‘AI’. Even at the scientific level, the exact scope of AI remains controversial. In this context, numerous forums have adopted or proposed definitions of AI.²

In its draft version, the AI Act proposes a definition in Article 3(1): ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.

The techniques and approaches referred to in Annex I are:
• machine-learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
• logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
• statistical approaches, Bayesian estimation, and search and optimisation methods.

Box 1: Example – Definition of AI, as included in the draft AI Act

In line with previous ENISA work, which considers ML the driving force behind current AI technologies, this report focuses mainly on ML. This choice is further supported by the general consensus that ML techniques are predominant in current AI applications. Last but not least, the specificities of ML result in vulnerabilities that affect the cybersecurity of AI in a distinctive manner, as the sketch below illustrates. It is to be noted that the report considers AI from a life-cycle perspective³. Considerations that concern ML only have been flagged as such.
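To make the distinctive, data-dependent nature of ML vulnerabilities more concrete, the following is a minimal, hypothetical sketch, not taken from the report or the AI Act, of a supervised-learning pipeline in which an attacker tampers with the training data (the threat commonly referred to as data poisoning). The toy dataset, the use of scikit-learn’s LogisticRegression and all variable names are illustrative assumptions.

```python
# Minimal, hypothetical sketch of an ML-specific vulnerability (data poisoning).
# Assumptions: toy 2-D Gaussian data and scikit-learn's LogisticRegression;
# none of this comes from the ENISA report or the draft AI Act.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Supervised-learning task: two Gaussian clusters labelled 0 and 1.
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clean_model = LogisticRegression().fit(X, y)

# An attacker who can corrupt part of the training set flips a few labels.
y_poisoned = y.copy()
y_poisoned[40:50] = 1          # ten class-0 samples relabelled as class 1
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# The learned decision boundary shifts: for a borderline input, the two
# models assign visibly different class probabilities, and the predicted
# class itself may flip.
query = np.array([[1.5, 1.5]])
print("clean    P(class 1):", clean_model.predict_proba(query)[0, 1])
print("poisoned P(class 1):", poisoned_model.predict_proba(query)[0, 1])
```

The point of the sketch is only that an ML system’s behaviour is shaped by its training data, so the data pipeline itself becomes part of the attack surface, in a way that has no direct analogue in conventional software.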

² For example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) in the ‘First draft of the recommendation on the ethics of artificial intelligence’, and the European Commission’s High-Level Expert Group on Artificial Intelligence.
³ See the life-cycle approach portrayed in the ENISA report Securing Machine Learning Algorithms (https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms).

