Cyber & IT Supervisory Forum - Additional Resources
Review redundancies related to third-party technology and personnel to assess potential risks arising from inadequate support.
Transparency & Documentation

Organizations can document the following:
- Did you establish a process for third parties (e.g., suppliers, end users, subjects, distributors/vendors, or workers) to report potential vulnerabilities, risks, or biases in the AI system?
- If your organization obtained datasets from a third party, did your organization assess and manage the risks of using such datasets?
- How will the results be independently verified?

AI Transparency Resources
- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
- Intel.gov: AI Ethics Framework for the Intelligence Community, 2020.
- WEF Model AI Governance Framework Assessment, 2020.

References

Language models
- Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610–623.
- Julia Kreutzer, Isaac Caswell, Lisa Wang, et al. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics 10 (2022), 50–72.
- Laura Weidinger, Jonathan Uesato, Maribeth Rauh, et al. 2022. Taxonomy of Risks Posed by Language Models. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 214–229.
- Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021.
- Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al. 2021. On the Opportunities and Risks of Foundation Models. arXiv:2108.07258.