Cyber & IT Supervisory Forum - Additional Resources
Design for end user workflows and toolsets, concept of operations, and explainability and interpretability criteria in conjunction with end user(s) and associated qualitative feedback.
Plan and test human-AI configurations under close to real-world conditions and document results.
Follow stakeholder feedback processes to determine whether a system achieved its documented purpose within a given use context, and whether end users can correctly comprehend system outputs or results.
Document dependencies on upstream data and other AI systems, including whether the specified system is an upstream dependency for another AI system or other data.
Document connections the AI system or data will have to external networks (including the internet), financial markets, and critical infrastructure that have potential for negative externalities. Identify and document negative impacts as part of considering the broader risk thresholds and subsequent go/no-go deployment decisions, as well as post-deployment decommissioning decisions. (An illustrative sketch of how such dependency and connection records might be captured follows this list.)

Transparency & Documentation
Organizations can document the following:
- Does the AI system provide sufficient information to assist the personnel to make an informed decision and take actions accordingly?
- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?
- Based on the assessment, did your organization implement the appropriate level of human involvement in AI-augmented decision making?

AI Transparency Resources
- Datasheets for Datasets. URL
- WEF Model AI Governance Framework Assessment 2020.
- WEF Companion to the Model AI Governance Framework - 2020.
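As a minimal sketch only, the dependency and external-connection documentation described above could be kept as a machine-readable record alongside narrative documentation. The structure and field names below are assumptions for illustration, not prescribed by this framework or any standard; the example values are placeholders.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExternalConnection:
    """A connection from the AI system to an external network or service."""
    target: str               # e.g. "internet", "financial markets data feed"
    direction: str            # "inbound", "outbound", or "bidirectional"
    potential_negative_externalities: List[str] = field(default_factory=list)

@dataclass
class AISystemDependencyRecord:
    """Documents upstream/downstream dependencies and external connections."""
    system_name: str
    upstream_data_sources: List[str] = field(default_factory=list)
    upstream_ai_systems: List[str] = field(default_factory=list)
    downstream_dependents: List[str] = field(default_factory=list)  # systems for which this system is an upstream dependency
    external_connections: List[ExternalConnection] = field(default_factory=list)
    documented_negative_impacts: List[str] = field(default_factory=list)

# Hypothetical example record with placeholder values.
record = AISystemDependencyRecord(
    system_name="credit-scoring-model",
    upstream_data_sources=["transaction-history-dataset"],
    upstream_ai_systems=["fraud-detection-model"],
    downstream_dependents=["loan-approval-workflow"],
    external_connections=[
        ExternalConnection(
            target="financial markets data feed",
            direction="inbound",
            potential_negative_externalities=["stale data propagating into credit decisions"],
        )
    ],
    documented_negative_impacts=["disparate error rates across customer segments"],
)
print(record.system_name, len(record.external_connections))

Keeping such records structured makes it easier to check, before a go/no-go deployment or decommissioning decision, which downstream systems and external connections would be affected.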