
A multilayer framework for good cybersecurity practices for AI, June 2023

Figure 9: AI characteristics mapping to policy documents

Security controls

On the other hand, specific ML security controls can be mapped to the threats introduced above to provide efficient means of prevention and mitigation.

- For evasion, tools can be deployed to detect whether a given input is an adversarial example, adversarial training can be used to make the model more robust (see the first sketch below), and models that are less easily transferable can significantly reduce an attacker's ability to study the algorithm underlying the system.
- For poisoning attacks, processes that maintain the security level of ML components over time should be implemented, the exposure level of the model in use should be assessed, the training data set should be enlarged as much as possible to reduce its susceptibility to malicious samples, and pre-processing steps that clean the training data of such samples should also be considered (see the second sketch below).
- Model or data disclosure can be countered by applying proper access control and by using federated learning to minimise the risk of data breaches.
- To reduce the level of compromise of ML application components, these should comply with protection policies, be fully integrated into existing security operations and asset management processes, and be evaluated according to the security level of their building blocks (e.g. the libraries implementing the algorithms).
- Finally, to prevent failure or malfunction of ML applications, the algorithms employed should have their bias reduced, should be properly evaluated to ensure they are resilient to the environment in which they will operate, and should incorporate explainability strategies.
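To illustrate the adversarial-training control mentioned for evasion, a minimal sketch follows, assuming a PyTorch-style classifier; the names model, loader, optimizer, epsilon and epochs are illustrative assumptions, not from this document, and FGSM is just one common way to craft the adversarial examples used during training.

```python
# Minimal adversarial-training sketch (illustrative, not the report's method).
# Assumes: a PyTorch classifier `model`, a DataLoader `loader` of (x, y)
# batches, and an `optimizer` over model.parameters().
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon):
    """Craft an FGSM adversarial example by perturbing x along the sign
    of the loss gradient, with step size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training(model, loader, optimizer, epsilon=0.03, epochs=5):
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_example(model, x, y, epsilon)
            optimizer.zero_grad()  # clear gradients left over from crafting
            # Mix clean and adversarial batches so the model keeps its
            # accuracy on benign inputs while gaining robustness.
            loss = 0.5 * F.cross_entropy(model(x), y) \
                 + 0.5 * F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()
```

Training on a mix of clean and perturbed batches is a common compromise, since pure adversarial training tends to trade some clean-data accuracy for robustness.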
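Similarly, the pre-processing step named for poisoning (cleaning the training data of malicious samples) could be instantiated in many ways; the sketch below uses simple per-class centroid outlier filtering, where the distance metric and z-score threshold are assumptions chosen for illustration only.

```python
# Illustrative data-sanitisation sketch for poisoning defence: drop training
# samples that sit unusually far from their class centroid before fitting.
import numpy as np

def filter_outliers(X, y, z_threshold=3.0):
    """Return (X, y) with samples removed whose distance to their class
    centroid exceeds z_threshold standard deviations for that class."""
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        keep[idx] = z < z_threshold
    return X[keep], y[keep]
```

Such crude filtering only catches poisoned points that are statistical outliers; stealthier poisoning (e.g. clean-label attacks) calls for the complementary controls listed above, such as maintaining the security level of ML components over time.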

