Cyber & IT Supervisory Forum - Additional Resources
Calibrate controls for users in close collaboration with experts in user interface and user experience (UI/UX), human-computer interaction (HCI), and/or human-AI teaming. Test provided explanations for calibration with different audiences, including operators, end users, decision makers, and decision subjects (individuals for whom decisions are being made), and to enable recourse for consequential system decisions that affect end users or subjects.

Measure and document human oversight of AI systems:
- Document the degree of oversight that is provided by specified AI actors regarding AI system output.
- Maintain statistics about downstream actions by end users and operators, such as system overrides.
- Maintain statistics about, and document, reported errors or complaints, time to respond, and response types.
- Maintain and report statistics about adjudication activities.

Track, document, and measure organizational accountability regarding AI systems via policy exceptions and escalations, and document "go" and "no-go" decisions made by accountable parties.

Track and audit the effectiveness of organizational mechanisms related to AI risk management, including:
- Lines of communication between AI actors, executive leadership, users, and impacted communities.
- Roles and responsibilities for AI actors and executive leadership.
- Organizational accountability roles, e.g., chief model risk officers, AI oversight committees, responsible or ethical AI directors, etc.
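The oversight statistics described above (override counts, reported complaints with response times, and adjudication activity) could be aggregated with a small logging structure. The sketch below is illustrative only; the class and method names (`OversightLog`, `record_override`, and so on) are hypothetical assumptions, as the guidance does not prescribe any particular schema or tooling.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class OversightLog:
    """Hypothetical aggregator for the oversight statistics named in the
    guidance: system overrides, complaints/response times, adjudications."""
    overrides: Counter = field(default_factory=Counter)  # override count per actor role
    complaints: list = field(default_factory=list)       # (response_type, response_hours)
    adjudications: int = 0

    def record_override(self, actor_role: str) -> None:
        # Downstream action by an end user or operator (e.g., system override).
        self.overrides[actor_role] += 1

    def record_complaint(self, response_type: str, response_hours: float) -> None:
        # Reported error or complaint, with response type and time to respond.
        self.complaints.append((response_type, response_hours))

    def record_adjudication(self) -> None:
        self.adjudications += 1

    def report(self) -> dict:
        """Summary suitable for periodic reporting to accountable parties."""
        times = [hours for _, hours in self.complaints]
        return {
            "overrides_by_role": dict(self.overrides),
            "complaint_count": len(self.complaints),
            "mean_response_hours": sum(times) / len(times) if times else None,
            "adjudications": self.adjudications,
        }

log = OversightLog()
log.record_override("operator")
log.record_override("operator")
log.record_complaint("incorrect output", 4.0)
log.record_adjudication()
print(log.report())
```

In practice such statistics would be persisted and reported through whatever audit infrastructure the organization already maintains; the point of the sketch is only that each listed metric maps to a concrete, countable event.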
Transparency & Documentation
Organizations can document the following:
To what extent has the entity clarified the roles, responsibilities, and delegated authorities of relevant stakeholders? What are the roles, responsibilities, and delegated authorities of personnel involved in the design, development, deployment, assessment, and monitoring of the AI system?