Cyber & IT Supervisory Forum - Additional Resources
Measure 2.13
Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and documented.

About
The development of metrics is a process often considered to be objective but, as a human- and organization-driven endeavor, it can reflect implicit and systemic biases and may inadvertently reflect factors unrelated to the target function. Measurement approaches can be oversimplified, gamed, lack critical nuance, become used and relied upon in unexpected ways, or fail to account for differences in affected groups and contexts. Revisiting the metrics chosen in Measure 2.1 through 2.12 in a process of continual improvement can help AI actors evaluate and document metric effectiveness and make necessary course corrections.

Suggested Actions
- Review selected system metrics and associated TEVV processes to determine whether they can sustain system improvements, including the identification and removal of errors.
- Regularly evaluate system metrics for utility, and consider descriptive approaches in place of overly complex methods.
- Review selected system metrics for acceptability within the end-user and impacted communities of interest.
- Assess the effectiveness of metrics for identifying and measuring risks.

Transparency & Documentation
Organizations can document the following:
- To what extent does the system/entity consistently measure progress towards stated goals and objectives?
- Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.? What are the checks for this model?
- What corrective actions has the entity taken to enhance the quality, accuracy, reliability, and representativeness of the data?
- To what extent are the model outputs consistent with the entity's values and principles to foster public trust and equity?
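One way to operationalize the actions above, in particular checking at a regular interval whether a metric fails to account for differences in affected groups, is to recompute the metric per subgroup and flag large gaps for review. The sketch below is illustrative only: the function names (`subgroup_accuracy`, `flag_metric_gap`), the choice of accuracy as the metric, and the 0.1 gap threshold are all hypothetical assumptions, not part of the MEASURE guidance itself.

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup label.

    y_true, y_pred, and groups are parallel lists; groups holds the
    subgroup label for each example (hypothetical helper, not from
    the source guidance).
    """
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        per_group[g] = correct / len(idx)
    return per_group


def flag_metric_gap(per_group, max_gap=0.1):
    """Flag the metric for review when the best- and worst-served
    subgroups differ by more than max_gap (threshold is an assumed
    example value, to be set by the organization)."""
    gap = max(per_group.values()) - min(per_group.values())
    return gap > max_gap, gap
```

Run at whatever re-evaluation interval the organization selects, a flagged result would prompt the documented course correction the measure calls for, rather than any automatic action.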