Cyber & IT Supervisory Forum - Additional Resources

Measure 1

Appropriate methods and metrics are identified and applied.

Measure 1.1

Approaches and metrics for measurement of AI risks enumerated during the Map function are selected for implementation, starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.

About

The development and utility of trustworthy AI systems depend on reliable measurements and evaluations of the underlying technologies and their use. Compared with traditional software systems, AI technologies bring new failure modes and an inherent dependence on training data and methods, which ties them directly to data quality and representativeness. Additionally, AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects with societal factors: how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed. In other words, what should be measured depends on the purpose, audience, and needs of the evaluations. These factors influence the selection of approaches and metrics for measuring the AI risks enumerated during the Map function. The AI landscape is evolving, and so are the methods and metrics for AI measurement; evolving the metrics in step is key to maintaining their efficacy.

Suggested Actions

- Establish approaches for detecting, tracking, and measuring known risks, errors, incidents, or negative impacts.
- Identify testing procedures and metrics to demonstrate whether or not the system is fit for purpose and functioning as claimed.
- Identify testing procedures and metrics to demonstrate AI system trustworthiness.
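The selection step described above – prioritize enumerated risks by significance, pick metrics where they exist, and document the risks that will not or cannot be measured – can be sketched in code. This is a minimal illustrative sketch, not part of any standard: the risk names, significance scores, and metric labels are hypothetical assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Risk:
    """One AI risk enumerated during the Map function (illustrative)."""
    name: str
    significance: int                # higher = more significant
    metric: Optional[str] = None     # measurement approach, if one exists

def plan_measurements(risks):
    """Order risks by significance, most significant first, then split them
    into those selected for measurement and those to document as unmeasured."""
    ordered = sorted(risks, key=lambda r: r.significance, reverse=True)
    to_measure = [r for r in ordered if r.metric is not None]
    to_document = [r for r in ordered if r.metric is None]
    return to_measure, to_document

# Hypothetical risk register for demonstration only.
risks = [
    Risk("data drift", significance=3, metric="population stability index"),
    Risk("societal impact", significance=2),   # no agreed metric: document it
    Risk("misclassification", significance=5, metric="false-negative rate"),
]

to_measure, to_document = plan_measurements(risks)
```

Here `to_measure` holds "misclassification" then "data drift" (ordered by significance), while "societal impact" lands in `to_document`, satisfying the requirement that unmeasured risks are recorded rather than silently dropped.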

