Cyber & IT Supervisory Forum - Additional Resources
- Test explanation methods and resulting explanations prior to deployment to gain feedback from relevant AI actors, end users, and potentially impacted individuals or groups about whether explanations are accurate, clear, and understandable.
- Document AI model details including model type (e.g., convolutional neural network, reinforcement learning, decision tree, random forest, etc.), data features, training algorithms, proposed uses, decision thresholds, training data, evaluation data, and ethical considerations.
- Establish, document, and report performance and error metrics across demographic groups and other segments relevant to the deployment context (a disaggregated-metrics sketch follows this list).
- Explain systems using a variety of methods, e.g., visualizations, model extraction, feature importance, and others.
- Since explanations may not accurately summarize complex systems, test explanations according to properties such as fidelity, consistency, robustness, and interpretability (a fidelity sketch follows this list).
- Assess the characteristics of system explanations according to properties such as fidelity (local and global), ambiguity, interpretability, interactivity, consistency, and resilience to attack/manipulation.
- Test the quality of system explanations with end users and other groups.
- Secure model development processes to avoid vulnerability to external manipulation such as gaming of explanation processes.
- Test for changes in models over time, including for models that adjust in response to production data.
- Use transparency tools such as data statements and model cards to document explanatory and validation information (a model-card sketch follows this list).
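A minimal sketch of disaggregated performance reporting, assuming a classifier's predictions and a demographic group label are available as array-like inputs; the function name and metric selection are illustrative, not prescribed by this guidance.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

def performance_by_group(y_true, y_pred, groups):
    """Report error metrics separately for each demographic segment."""
    # Align everything on a common positional index so masking is safe.
    y_true, y_pred, groups = (pd.Series(v).reset_index(drop=True)
                              for v in (y_true, y_pred, groups))
    rows = []
    for group in sorted(groups.unique()):
        mask = groups == group
        rows.append({
            "group": group,
            "n": int(mask.sum()),
            "accuracy": accuracy_score(y_true[mask], y_pred[mask]),
            "precision": precision_score(y_true[mask], y_pred[mask],
                                         zero_division=0),
            "recall": recall_score(y_true[mask], y_pred[mask],
                                   zero_division=0),
        })
    return pd.DataFrame(rows)
```

Reporting metrics per segment, rather than only in aggregate, is what surfaces the group-level error disparities this action asks organizations to document.

A minimal sketch of one fidelity test: fit an interpretable surrogate on a black-box model's predictions and measure global fidelity as the agreement rate on held-out data. The random forest stands in for any opaque model, and the dataset is synthetic; all names are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The surrogate is trained to mimic the black box's outputs,
# not the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Global fidelity: fraction of held-out points where the explanation
# (the surrogate) reproduces the black box's decision.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"Global fidelity of surrogate explanation: {fidelity:.2%}")
```

Low fidelity signals that the explanation does not faithfully summarize the system, which is exactly the failure mode this action warns about.

A minimal sketch of model-card-style documentation captured alongside the model; the fields mirror the model details listed above, and every value shown is an illustrative placeholder rather than a required schema.

```python
import json

model_card = {
    "model_type": "random forest",
    "data_features": ["age", "income", "account_tenure_months"],
    "training_algorithm": "scikit-learn RandomForestClassifier",
    "proposed_uses": ["pre-screening applications for manual review"],
    "decision_threshold": 0.5,
    "training_data": "internal applications dataset (illustrative)",
    "evaluation_data": "held-out recent applications (illustrative)",
    "performance_by_group": "see disaggregated metrics report",
    "ethical_considerations": "risk of disparate error rates across groups",
}

# Persist the card with the model artifacts so validation and
# explanatory information travels with the deployed system.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```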
Transparency & Documentation
Organizations can document the following:
- Given the purpose of the AI, what level of explainability or interpretability is required for how the AI made its determination?
- Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.? What are the checks for this model? (A recurring-check sketch follows below.)
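A minimal sketch of a recurring production check, assuming fresh labeled samples are collected at a fixed interval; the thresholds and function name are illustrative placeholders that an organization would set for its own deployment context.

```python
import numpy as np
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90          # re-validation trigger, set per use case
MAX_PREDICTION_SHIFT = 0.10    # tolerated drift in positive-prediction rate

def scheduled_check(model, X_new, y_new, baseline_positive_rate):
    """Run at a fixed interval (e.g., monthly) against recent labeled data."""
    preds = model.predict(X_new)
    accuracy = accuracy_score(y_new, preds)
    positive_rate = float(np.mean(preds))
    drift = abs(positive_rate - baseline_positive_rate)
    # Flag the model for human review when accuracy degrades or its
    # decision behavior shifts away from the documented baseline.
    needs_review = accuracy < ACCURACY_FLOOR or drift > MAX_PREDICTION_SHIFT
    return {"accuracy": accuracy, "prediction_drift": drift,
            "needs_review": needs_review}
```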