
A MULTILAYER FRAMEWORK FOR GOOD CYBERSECURITY PRACTICES FOR AI June 2023

20. How do you monitor/audit the level of cybersecurity of AI systems throughout their life cycle?
21. Do you require the AI stakeholders to conduct dynamic risk assessments?
22. What sanctions have you set up for non-compliance with data and model integrity requirements?
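
Question 22 presupposes that the integrity of data and models can be verified in the first place. As a minimal illustration of such a technical control (one possible implementation, not something the framework itself prescribes), the Python sketch below checks data and model artefacts against recorded SHA-256 digests; the file names and the baseline record are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical baseline recorded when the artefacts were approved;
# the placeholder digests would come from the approved release record.
BASELINE = {
    "training_data.csv": "<sha256-of-approved-data-set>",
    "model.onnx": "<sha256-of-approved-model>",
}

def verify(artefact_dir: Path) -> dict[str, bool]:
    """Report, per artefact, whether its current digest matches the baseline."""
    return {
        name: sha256_of(artefact_dir / name) == expected
        for name, expected in BASELINE.items()
    }
```

Any artefact whose current digest no longer matches the recorded baseline has lost integrity and can be flagged for the kind of follow-up (investigation, sanctions) that question 22 refers to.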

In accordance with the AI Act, providers of high-risk AI systems would have to take suitable measures to ensure a level of cybersecurity appropriate to the risks, also taking into account, as appropriate, the underlying ICT infrastructure (rule 51 in the explanatory memorandum).

Infrastructure

23. How do you monitor/audit the appropriateness of the controls undertaken by the AI stakeholders (developers, integrators, critical infrastructure operators, e.g. telecom operators) to adequately secure the underlying ICT infrastructure?
24. Have you specified/defined measurements and KPIs that the AI stakeholders can use to assess the appropriateness of the controls undertaken?
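
As a minimal sketch of what the measurements and KPIs in question 24 could look like, the example below computes two simple indicators over control-assessment records. The record schema (control id, implemented flag, audit result) and the two indicators are assumptions for illustration only; the framework does not prescribe specific KPIs.

```python
# Hypothetical control-assessment records reported by an AI stakeholder.
controls = [
    {"id": "AC-01", "implemented": True,  "audit_passed": True},
    {"id": "DS-04", "implemented": True,  "audit_passed": False},
    {"id": "ML-07", "implemented": False, "audit_passed": False},
]

def control_coverage(records: list[dict]) -> float:
    """Share of required controls that are implemented (a coverage KPI)."""
    return sum(r["implemented"] for r in records) / len(records)

def audit_pass_rate(records: list[dict]) -> float:
    """Share of implemented controls that passed their last audit."""
    implemented = [r for r in records if r["implemented"]]
    return sum(r["audit_passed"] for r in implemented) / len(implemented)

print(f"coverage: {control_coverage(controls):.0%}")        # 67%
print(f"audit pass rate: {audit_pass_rate(controls):.0%}")  # 50%
```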

Regulation

One of the cybersecurity challenges for AI is a breach of integrity (e.g. poor data quality or biased input data sets), which can lead to automated decision-making systems that wrongly classify individuals, excluding them from certain services or depriving them of their rights (ENISA). The AI Act aims to minimise the risk of algorithmic discrimination.

25. How do you monitor the integrity and quality of the data sets used for the development of AI systems?
26. Have national auditors and certification and accreditation bodies been established for assessing the security of AI systems?
27. How do you evaluate the security of AI systems (e.g. via conformity assessment, certification, standards compliance, risk assessment, etc.)?
28. What obligations have you imposed for testing, risk management, documentation and human oversight throughout the AI systems' life cycle to ensure continuous data and training model integrity?
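
To make question 25 concrete, the sketch below computes a few basic integrity and quality indicators (duplicate rows, missing values, label balance) for a tabular training set. The pandas-based report format, the column names and the toy data are assumptions for illustration; a skewed label distribution is one warning sign of the biased input data sets mentioned above.

```python
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Basic integrity/quality indicators for a training data set."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_ratio_per_column": df.isna().mean().to_dict(),
        # Relative frequency of each class label in the data set.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical toy data set.
df = pd.DataFrame({
    "feature": [1.0, 2.0, None, 4.0],
    "label": ["a", "a", "a", "b"],
})
print(dataset_quality_report(df, label_col="label"))
```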

According to the proposed AI regulation, the requirements for high-risk AI systems related to products covered by the New Legislative Framework (NLF) legislation (e.g. machinery, medical devices, toys) need to be assessed.

29. Do you have a process through which you are notified about high-risk AI systems used in the various NLF-regulated products?
30. Do you have rules in relation to NLF products that may be relevant to cybersecurity?
