Cyber & IT Supervisory Forum - Additional Resources
Measure 2.9

The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – to inform responsible use and governance.

About

Explainability and interpretability assist those operating or overseeing an AI system, as well as its users, in gaining deeper insight into the functionality and trustworthiness of the system, including its outputs. Explainable and interpretable AI systems offer information that helps end users understand the purposes and potential impact of an AI system.

Risk from lack of explainability may be managed by describing how AI systems function, with descriptions tailored to individual differences such as the user's role, knowledge, and skill level. Explainable systems can be debugged and monitored more easily, and they lend themselves to more thorough documentation, audit, and governance. Risks to interpretability can often be addressed by communicating why an AI system made a particular prediction or recommendation.

Transparency, explainability, and interpretability are distinct characteristics that support one another. Transparency answers the question of "what happened" in the system. Explainability answers the question of "how" a decision was made. Interpretability answers the question of "why" a decision was made by the system and its meaning or context to the user.

Suggested Actions

- Verify that systems are developed to produce explainable models, post-hoc explanations, and audit logs.
- When possible or available, utilize approaches that are inherently explainable, such as traditional and penalized generalized linear models, decision trees, nearest-neighbor and prototype-based approaches, rule-based models, generalized additive models, explainable boosting machines, and neural additive models.
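To make the suggested actions concrete, the following is a minimal sketch of one inherently explainable approach named above, a rule-based model, that also emits a per-decision explanation and an audit-log entry. The `RuleBasedModel` class, its rule set, and the field names (`income`, `debt_ratio`) are illustrative assumptions, not part of any standard; a real system would use its own domain rules and logging infrastructure.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class RuleBasedModel:
    """Inherently interpretable classifier: each rule is
    (human-readable description, predicate, decision)."""
    rules: List[Tuple[str, Callable[[Dict], bool], str]]
    audit_log: List[str] = field(default_factory=list)

    def predict(self, record: Dict) -> Tuple[str, str]:
        """Return (decision, explanation) and append an audit entry."""
        for description, predicate, decision in self.rules:
            if predicate(record):
                self.audit_log.append(f"{record} -> {decision} ({description})")
                return decision, f"Rule fired: {description}"
        # No rule matched: route to a human rather than guess.
        self.audit_log.append(f"{record} -> review (no rule matched)")
        return "review", "No rule matched; routed to human review"

# Hypothetical rule set for a loan-screening sketch.
model = RuleBasedModel(rules=[
    ("income below 20k", lambda r: r["income"] < 20_000, "deny"),
    ("debt ratio above 0.6", lambda r: r["debt_ratio"] > 0.6, "deny"),
    ("income above 80k and low debt",
     lambda r: r["income"] > 80_000 and r["debt_ratio"] < 0.3, "approve"),
])

decision, explanation = model.predict({"income": 95_000, "debt_ratio": 0.2})
```

Because every decision is traceable to a named rule, the same structure supports the documentation, audit, and governance activities described above: the explanation can be shown to an end user, while the audit log supports oversight.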