Cyber & IT Supervisory Forum - Additional Resources
To what extent does the entity communicate its AI strategic goals and objectives to the community of stakeholders?

Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.? What are the checks for this model?

If anyone believes that the AI no longer meets this ethical framework, who will be responsible for receiving the concern and, as appropriate, investigating and remediating the issue? Do they have the authority to modify, limit, or stop the use of the AI?

GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities

Artificial Intelligence Ethics Framework for the Intelligence Community

AI Transparency Resources

References

ISO. "ISO 9241-210:2019 Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems." 2nd ed. ISO Standards, July 2019.

Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber. "Capability Maturity Model, Version 1.1." IEEE Software 10, no. 4 (1993): 18-27.

Jeff Patton, Peter Economy, Martin Fowler, Alan Cooper, and Marty Cagan. User Story Mapping: Discover the Whole Story, Build the Right Product. O'Reilly Media, 2014.

Rumman Chowdhury and Jutta Williams. "Introducing Twitter's first algorithmic bias bounty challenge." Twitter Engineering Blog, July 30, 2021.

HackerOne. "Twitter Algorithmic Bias." HackerOne, August 8, 2021.

Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. "Bug Bounties for Algorithmic Harms?" Algorithmic Justice League, January 2022.

Microsoft. "Community Jury." Microsoft Learn, Azure Application Architecture Guide, 2023.

Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. "Overcoming Failures of Imagination in AI Infused System Development and Deployment." arXiv preprint, submitted December 10, 2020.