Cyber & IT Supervisory Forum - Additional Resources
Organizations may also consider targeted consultation with subject matter experts as a complement to participatory findings. Experts can help internal staff identify and conceptualize potential negative impacts that were not previously considered.

Suggested Actions

Establish AI risk management policies that explicitly address mechanisms for collecting, evaluating, and incorporating stakeholder and user feedback. These could include:
- Recourse mechanisms for faulty AI system outputs.
- Bug bounties.
- Human-centered design.
- User-interaction and experience research.
- Participatory stakeholder engagement with individuals and communities that may experience negative impacts.

Verify that stakeholder feedback, including environmental concerns, is considered and addressed across the entire population of intended users, including historically excluded populations, people with disabilities, older people, and those with limited access to the internet and other basic technologies.

Clarify the organization's principles as they apply to AI systems, considering those which have been proposed publicly, to inform external stakeholders of the organization's values. Consider publishing or adopting AI principles.

Transparency & Documentation

Organizations can document the following:
- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?
- To what extent has the entity clarified the roles, responsibilities, and delegated authorities of relevant stakeholders?
- How easily accessible and current is the information available to external stakeholders?