Transparency & Documentation
Organizations can document the following:
How has the entity identified and mitigated potential impacts of bias in the data, including inequitable or discriminatory outcomes?
How has the entity documented the AI system's data provenance, including sources, origins, transformations, augmentations, labels, dependencies, constraints, and metadata?
To what extent has the entity clearly defined technical specifications and requirements for the AI system?
To what extent has the entity documented and communicated the AI system's development, testing methodology, metrics, and performance outcomes?
Have you documented and explained that machine errors may differ from human errors?
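As an illustration of the data provenance item above, the following is a minimal sketch of a machine-readable provenance record. The field names and the example dataset are hypothetical, not a prescribed schema; they simply mirror the documentation elements listed (sources, transformations, augmentations, labels, dependencies, constraints, metadata).

```python
# Illustrative sketch of a dataset provenance record; field names are hypothetical.
from dataclasses import dataclass, field, asdict
from typing import Dict, List
import json


@dataclass
class DatasetProvenance:
    name: str
    sources: List[str]                      # where the raw data came from
    collection_period: str                  # when it was gathered
    transformations: List[str]              # cleaning / preprocessing steps applied
    augmentations: List[str]                # synthetic or augmented additions
    label_process: str                      # how labels were produced and reviewed
    dependencies: List[str]                 # upstream datasets, tools, or services
    constraints: List[str]                  # licensing, consent, or usage limits
    metadata: Dict[str, str] = field(default_factory=dict)

    def to_json(self) -> str:
        """Serialize the record so it can be versioned alongside the dataset."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    # Hypothetical example record for demonstration only.
    record = DatasetProvenance(
        name="loan-applications-v3",
        sources=["internal CRM export", "public census tables"],
        collection_period="2019-01 to 2021-12",
        transformations=["deduplication", "currency normalization"],
        augmentations=["none"],
        label_process="dual annotation with adjudication of disagreements",
        dependencies=["pandas preprocessing pipeline v1.4"],
        constraints=["no use outside credit-risk research"],
        metadata={"owner": "data-governance team", "schema_version": "1.0"},
    )
    print(record.to_json())
```

Keeping such a record in version control next to the dataset gives reviewers a single artifact to consult when assessing bias mitigation and data lineage.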
AI Transparency Resources
GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
Datasheets for Datasets.
References
Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker, "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability," AI Now Institute, 2018.
H.R. 2231, 116th Cong. (2019).
BSA | The Software Alliance, "Confronting Bias: BSA's Framework to Build Trust in AI," 2021.
Anthony M. Barrett, Dan Hendrycks, Jessica Newman, and Brandie Nonnecke, "Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks," arXiv:2206.08966, 2022. https://arxiv.org/abs/2206.08966
David Wright, "Making Privacy Impact Assessments More Effective," The Information Society 29, 2013.