Cyber & IT Supervisory Forum - Additional Resources
Measure 1.2

Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and impacts on affected communities.

About

Different AI techniques and tasks, such as neural networks or natural language processing, benefit from different evaluation techniques. The use case and the particular settings in which the AI system is deployed also affect which evaluation techniques are appropriate. Changes in operational settings, data drift, and model drift are among the factors that make regular assessment and updating of AI metrics necessary; doing so can enhance the reliability of AI system measurements.

Suggested Actions

- Assess the external validity of all measurements (e.g., the degree to which measurements taken in one context generalize to other contexts).
- Assess the effectiveness of existing metrics and controls on a regular basis throughout the AI system lifecycle.
- Document reports of errors, incidents, and negative impacts, and assess the sufficiency and efficacy of existing metrics for repairs and upgrades.
- Develop new metrics when existing metrics are insufficient or ineffective for implementing repairs and upgrades.
- Develop and utilize metrics to monitor, characterize, and track external inputs, including any third-party tools.
- Determine the frequency and scope for sharing metrics and related information with stakeholders and impacted communities.
- Utilize stakeholder feedback processes established in the Map function to capture, act upon, and share feedback from end users and potentially impacted communities.
- Collect and report software quality metrics, such as rates of bug occurrence and severity, time to response, and time to repair (see Manage 4.3).
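The actions above call for monitoring data drift as a trigger for reassessing metrics. As a minimal sketch of what such monitoring can look like, the following computes the Population Stability Index (PSI), one common drift heuristic, between a baseline sample and a live sample. The bin count, the 1e-6 floor, and the commonly cited thresholds (below 0.1 stable, 0.1 to 0.25 moderate drift, above 0.25 significant drift) are illustrative assumptions, not requirements of this measure.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample ('expected') and a live sample ('actual').

    Common heuristic thresholds (assumed, not mandated): < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift worth investigating.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = ((hi - lo) / bins) or 1.0  # guard against a zero-width range

    def frac(sample, i):
        left = lo + i * width
        right = lo + (i + 1) * width
        if i == bins - 1:
            count = sum(1 for x in sample if left <= x <= hi)  # close the last bin
        else:
            count = sum(1 for x in sample if left <= x < right)
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

In practice such a metric would be computed per feature (or on model scores) at a regular cadence, with breaches of the chosen threshold feeding the documentation and review actions listed above.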
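The last action asks for software quality metrics such as time to response and time to repair. A minimal sketch of how these might be aggregated from incident records; the record layout and the timestamps are hypothetical, and real deployments would pull these from an incident-tracking system.

```python
from datetime import datetime

# Hypothetical incident records: (reported, first_response, repaired) timestamps.
incidents = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 30), datetime(2024, 1, 1, 12, 0)),
    (datetime(2024, 1, 3, 14, 0), datetime(2024, 1, 3, 16, 0), datetime(2024, 1, 4, 10, 0)),
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

# Mean time to response: reported -> first response.
mtt_response = mean_hours([resp - rep for rep, resp, _ in incidents])
# Mean time to repair: reported -> fix deployed.
mtt_repair = mean_hours([fix - rep for rep, _, fix in incidents])
```

Reported alongside bug occurrence rates and severity counts, these figures give the kind of trend data Manage 4.3 expects for judging whether existing metrics and controls remain effective.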