Cyber & IT Supervisory Forum - Additional Resources

Use countermeasures (e.g., authentication, throttling, differential privacy, robust ML approaches) to increase the range of security conditions under which the system can return to normal function. Modify system security procedures and countermeasures to increase robustness and resilience to attacks, in response to testing and to events experienced in production. Verify that information about errors and attack patterns is shared with incident databases, other organizations operating similar systems, and system users and stakeholders (MANAGE-4.1). Develop and maintain information-sharing practices with AI actors from other organizations to learn from common attacks. Verify that third-party AI resources and personnel undergo security audits and screenings; a third party's failure to provide relevant security information is itself a risk indicator. Utilize watermarking technologies as a deterrent to data and model extraction attacks.

Transparency & Documentation

Organizations can document the following:

- To what extent does the plan specifically address risks associated with acquisition and procurement of packaged software from vendors, cybersecurity controls, computational infrastructure, data, data science, deployment mechanics, and system failure?
- What assessments has the entity conducted on the data security and privacy impacts associated with the AI system?
- What processes exist for data generation, acquisition/collection, security, maintenance, and dissemination?
- What testing, if any, has the entity conducted on the AI system to identify errors and limitations (e.g., adversarial or stress testing)?
- If a third party created the AI system, how will you ensure a level of explainability or interpretability?
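Two of the countermeasures named above, query throttling and differential-privacy noise on model outputs, can be sketched together as a defensive API wrapper. This is a minimal illustration, not a production control: the class name, parameters, and noise scale are hypothetical choices, and a real deployment would calibrate the noise to a formal privacy budget and pair throttling with authentication.

```python
import time
import random
from collections import defaultdict, deque

class ThrottledModelAPI:
    """Illustrative wrapper that rate-limits queries per client (throttling)
    and adds Laplace noise to returned scores (a differential-privacy-style
    blur that makes model-extraction attacks more expensive)."""

    def __init__(self, model_fn, max_queries=100, window_s=60.0,
                 noise_scale=0.05, seed=0):
        self.model_fn = model_fn          # underlying scoring function
        self.max_queries = max_queries    # queries allowed per window
        self.window_s = window_s          # sliding-window length, seconds
        self.noise_scale = noise_scale    # Laplace scale parameter b
        self._history = defaultdict(deque)  # client_id -> query timestamps
        self._rng = random.Random(seed)

    def query(self, client_id, x, now=None):
        now = time.monotonic() if now is None else now
        q = self._history[client_id]
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_queries:
            raise PermissionError("rate limit exceeded")  # throttling kicks in
        q.append(now)
        score = self.model_fn(x)
        # Laplace(0, b) noise via the difference of two Exp(mean=b) draws;
        # blurring exact scores hides decision boundaries from an attacker.
        noise = (self._rng.expovariate(1.0 / self.noise_scale)
                 - self._rng.expovariate(1.0 / self.noise_scale))
        return score + noise
```

For example, with `max_queries=3` a fourth query from the same client inside the window raises `PermissionError`, while queries after the window expires succeed again; honest low-rate users see only slightly noisy scores.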

