Cyber & IT Supervisory Forum - Additional Resources
Measure 4.2

Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and other relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.

About

Feedback captured from relevant AI actors can be evaluated in combination with output from Measure 2.5 to 2.11 to determine whether the AI system is performing within pre-defined operational limits for validity and reliability, safety, security and resilience, privacy, bias and fairness, explainability and interpretability, and transparency and accountability. This feedback provides an additional layer of insight about AI system performance, including potential misuse or reuse outside of intended settings.

Insights based on this type of analysis can inform TEVV-based decisions about metrics and related courses of action.

Suggested Actions

- Integrate feedback from end users, operators, and affected individuals and communities from the Map function as inputs to assess AI system trustworthiness characteristics. Ensure both positive and negative feedback is assessed.
- Evaluate feedback in connection with AI system trustworthiness characteristics from Measure 2.5 to 2.11.
- Evaluate feedback regarding end user satisfaction with, and confidence in, AI system performance, including whether output is considered valid and reliable, and explainable and interpretable.
- Identify mechanisms to confirm/support AI system output (e.g., recommendations) and end user perspectives about that output.
- Measure the frequency of AI system override decisions, evaluate and document the results, and feed insights back into continual improvement processes.
- Consult AI actors in impact assessment, human factors, and socio-technical tasks to assist with the analysis and interpretation of results.
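The suggested action of measuring the frequency of override decisions and checking the result against a pre-defined operational limit could be sketched as follows. This is a minimal illustration only: the log format, field names, function names, and the 5% threshold are assumptions for the example, not from the source.

```python
# Hypothetical sketch: aggregate operator override logs and flag whether the
# AI system stays within an assumed pre-defined operational limit.
# All identifiers and the threshold below are illustrative, not prescribed.

OVERRIDE_RATE_LIMIT = 0.05  # assumed limit: at most 5% of decisions overridden


def override_rate(decision_log):
    """Fraction of AI decisions that a human operator overrode."""
    if not decision_log:
        return 0.0
    overrides = sum(1 for d in decision_log if d["overridden"])
    return overrides / len(decision_log)


def within_operational_limit(decision_log, limit=OVERRIDE_RATE_LIMIT):
    """Document the measured rate and whether it falls within the limit."""
    rate = override_rate(decision_log)
    return {"override_rate": rate, "within_limit": rate <= limit}


# Example log: one override out of four decisions (25% > assumed 5% limit).
log = [
    {"decision_id": 1, "overridden": False},
    {"decision_id": 2, "overridden": True},
    {"decision_id": 3, "overridden": False},
    {"decision_id": 4, "overridden": False},
]
result = within_operational_limit(log)
```

In a real deployment, a result flagged as outside the limit would be documented and fed back into the continual improvement process described above.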