Cyber & IT Supervisory Forum - Additional Resources
Measure 3: Mechanisms for tracking identified AI risks over time are in place.

Measure 3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.

About

For trustworthy AI systems, regular system monitoring is carried out in accordance with organizational governance policies, AI actor roles and responsibilities, and within a culture of continual improvement. If and when emergent or complex risks arise, it may be necessary to adapt internal risk management procedures, such as regular monitoring, to stay on course. Documentation, resources, and training are part of an overall strategy to support AI actors as they investigate and respond to AI system errors, incidents, or negative impacts.

Suggested Actions

- Compare AI system risks with:
  - simpler or traditional models
  - human baseline performance
  - other manual performance benchmarks
- Compare end user and community feedback about deployed AI systems to internal measures of system performance.
- Assess the effectiveness of metrics for identifying and measuring emergent risks.
- Measure error response times and track response quality.
- Elicit and track feedback from AI actors in user support roles about the types of metrics, explanations, and other system information required for thorough resolution of system issues. Consider:
  - instances where explanations are insufficient for investigating possible error sources or identifying responses;
  - system metrics, including system logs and explanations, for identifying and diagnosing sources of system error.
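The baseline comparisons and response-time tracking suggested above could be operationalized in many ways; the following is a minimal sketch only. The `RiskComparison` class, `response_time_summary` helper, field names, and sample numbers are all illustrative assumptions, not part of the guidance itself.

```python
# Illustrative sketch: compare a deployed AI system's error rate against
# simpler baselines, and summarize error-response times for tracking over time.
# All names, fields, and figures here are hypothetical.

from dataclasses import dataclass
from statistics import median


@dataclass
class RiskComparison:
    deployed_error_rate: float
    baseline_error_rate: float   # a simpler or traditional model
    human_error_rate: float      # human / manual baseline performance

    def flags(self) -> list[str]:
        """Names of the benchmarks the deployed system underperforms."""
        out = []
        if self.deployed_error_rate > self.baseline_error_rate:
            out.append("simpler_model_baseline")
        if self.deployed_error_rate > self.human_error_rate:
            out.append("human_baseline")
        return out


def response_time_summary(minutes: list[float]) -> dict[str, float]:
    """Summarize error-response times (in minutes) for period-over-period tracking."""
    return {"median": median(minutes), "worst": max(minutes)}


comparison = RiskComparison(deployed_error_rate=0.08,
                            baseline_error_rate=0.10,
                            human_error_rate=0.05)
print(comparison.flags())  # → ['human_baseline']
print(response_time_summary([30.0, 45.0, 120.0, 25.0]))  # → {'median': 37.5, 'worst': 120.0}
```

Reviewing such flags and response-time summaries at a regular cadence, and logging them alongside end user and community feedback, is one way to make the monitoring called for in Measure 3.1 concrete and auditable.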