Cyber & IT Supervisory Forum - Additional Resources
Measure assurance criteria such as AI actor competency and experience. Document differences between measurement setting and the deployment environment(s).
Transparency & Documentation
Organizations can document the following:
What experiments were initially run on this dataset?
To what extent have experiments on the AI system been documented?
To what extent does the system/entity consistently measure progress towards stated goals and objectives?
How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed?
How much distributional shift or model drift from baseline performance is acceptable?
As time passes and conditions change, is the training data still representative of the operational environment?
What testing, if any, has the entity conducted on the AI system to identify errors and limitations (e.g., adversarial or stress testing)?

AI Transparency Resources
Artificial Intelligence Ethics Framework for the Intelligence Community.
WEF, Companion to the Model AI Governance Framework, 2020.
Datasheets for Datasets.

References
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer-Verlag, 2009.
Jessica Zosa Forde, A. Feder Cooper, Kweku Kwegyir-Aggrey, Chris De Sa, and Michael Littman. "Model Selection's Disparate Impact in Real-World Deep Learning Applications." arXiv preprint, submitted September 7, 2021.
Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. "The Fallacy of AI Functionality." FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 2022, 959-72.
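The questions above on post-deployment monitoring and acceptable distributional shift can be made concrete with a statistical drift check. The sketch below is a minimal, illustrative example (not part of any cited framework): it compares a training-time feature sample against a window of production data with a two-sample Kolmogorov-Smirnov test. The feature values, sample sizes, and the 0.05 significance threshold are all assumptions an organization would replace with its own documented tolerances.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, current, alpha=0.05):
    """Flag drift when a two-sample KS test rejects the hypothesis that
    the reference (baseline) and current (production) samples come from
    the same distribution, at significance level alpha."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha, statistic

# Synthetic stand-ins for logged feature values (assumption: a single
# numeric feature; real monitoring would cover each input and output).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training-time sample
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # mean-shifted live sample

drifted, stat = drift_detected(baseline, production)
print(f"drift={drifted}, KS statistic={stat:.3f}")
```

Running such a check on a schedule, and recording both the threshold (alpha) and the result, gives auditable evidence for the "how much drift is acceptable" question rather than an ad hoc judgment.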