Cyber & IT Supervisory Forum - Additional Resources

- Define acceptable limits for system performance (e.g., distribution of errors), and include course correction suggestions if/when the system performs beyond acceptable limits.
- Define metrics for, and regularly assess, AI actor competency for effective system operation.
- Identify transparency metrics to assess whether stakeholders have access to necessary information about system design, development, deployment, use, and evaluation.
- Utilize accountability metrics to determine whether AI designers, developers, and deployers maintain clear and transparent lines of responsibility and are open to inquiries.
- Document metric selection criteria, and include metrics that were considered but not used.
- Monitor AI system external inputs, including training data, models developed for other contexts, system components reused from other contexts, and third-party tools and resources.
- Report metrics to inform assessments of system generalizability and reliability.
- Assess and document pre- versus post-deployment system performance, including existing and emergent risks.
- Document risks or trustworthiness characteristics identified in the Map function that will not be measured, including justification for non-measurement.

Transparency & Documentation

Organizations can document the following:

- How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed?
- What corrective actions has the entity taken to enhance the quality, accuracy, reliability, and representativeness of the data?
- Are there recommended data splits or evaluation measures (e.g., training, development, testing; accuracy/AUC)?
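The guidance above on acceptable performance limits and post-deployment monitoring could be operationalized roughly as follows. This is a minimal sketch, not a prescribed implementation: the threshold value, the `ACCEPTABLE_ERROR_RATE` constant, and the `check_performance` function are illustrative assumptions, and a real system would typically track several metrics and route alerts into an incident process.

```python
# Illustrative sketch: compare a deployed model's observed error rate
# against a pre-defined acceptable limit, and surface a course-correction
# suggestion when the limit is exceeded. Names and threshold are
# hypothetical, chosen for the example only.

ACCEPTABLE_ERROR_RATE = 0.05  # assumed acceptable limit, set per system


def check_performance(errors: int, total: int) -> str:
    """Return an assessment of the observed error rate against the limit."""
    if total == 0:
        return "no data: cannot assess performance"
    error_rate = errors / total
    if error_rate > ACCEPTABLE_ERROR_RATE:
        # Course-correction suggestion, per the acceptable-limits guidance.
        return (f"error rate {error_rate:.3f} exceeds limit "
                f"{ACCEPTABLE_ERROR_RATE}: consider retraining or rollback")
    return f"error rate {error_rate:.3f} within acceptable limits"
```

In practice such a check would run on a schedule against labeled post-deployment samples, with the chosen limit and its rationale recorded as part of the metric-selection documentation described above.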
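One documentable answer to the data-splits question above is a deterministic, recorded train/development/test split. The sketch below is an assumption-laden example (the 80/10/10 ratios, the fixed seed, and the `split_dataset` function are all hypothetical), showing only that the split procedure itself can be captured reproducibly.

```python
# Illustrative sketch: a deterministic 80/10/10 train/development/test
# split that can be documented and reproduced. Ratios and seed are
# example choices, not recommendations from the source text.
import random


def split_dataset(records, seed=0, train=0.8, dev=0.1):
    """Shuffle deterministically, then split into train/dev/test;
    the remaining fraction (here 0.1) becomes the test set."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_dev = int(n * dev)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])


train_set, dev_set, test_set = split_dataset(list(range(100)))
```

Recording the seed and ratios alongside the evaluation measures (e.g., accuracy or AUC on the held-out test set) gives later reviewers enough information to reproduce the reported numbers.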

