Cyber & IT Supervisory Forum - Additional Resources
- Develop and apply TEVV protocols for models, the system and its subcomponents, deployment, and operation.
- Demonstrate and document that AI system performance and validation metrics are interpretable and unambiguous for downstream decision-making tasks, and take socio-technical factors such as context of use into consideration.
- Identify and document assumptions, techniques, and metrics used for testing and evaluation throughout the AI lifecycle, including experimental design techniques for data collection, selection, and management practices, in accordance with data governance policies established in GOVERN.
- Identify testing modules that can be incorporated throughout the AI lifecycle, and verify that processes enable corroboration by independent evaluators.
- Establish mechanisms for regular communication and feedback among relevant AI actors and internal or external stakeholders related to the validity of design and deployment assumptions.
- Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to the development of TEVV approaches throughout the lifecycle, to detect and assess potentially harmful impacts.
- Document assumptions made and techniques used in data selection, curation, preparation and analysis, including: identification of constructs and proxy targets; development of indices – especially those operationalizing concepts that are inherently unobservable (e.g., “hireability,” “criminality,” “lendability”).
- Map adherence to policies that address data and construct validity, bias, privacy and security for AI systems, and verify documentation, oversight, and processes.
- Identify and document transparent methods (e.g., causal discovery methods) for inferring causal relationships between constructs being modeled and dataset attributes or proxies.
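As an illustration of a testing module that could be incorporated throughout the lifecycle and reproduced by independent evaluators, the following is a minimal sketch. All names and the threshold value are hypothetical; real metrics and thresholds would come from the documented governance policies referenced above.

```python
# Hedged sketch of a reusable TEVV check (hypothetical names/thresholds).
from dataclasses import dataclass


@dataclass
class EvalReport:
    """A record an independent evaluator can corroborate from the same data."""
    metric: str       # name of the documented, unambiguous metric
    value: float
    threshold: float  # documented acceptance threshold from governance policy

    @property
    def passed(self) -> bool:
        return self.value >= self.threshold


def accuracy(y_true, y_pred) -> float:
    """Plain accuracy: interpretable and unambiguous for downstream decisions."""
    if len(y_true) != len(y_pred):
        raise ValueError("label/prediction length mismatch")
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def run_tevv_check(y_true, y_pred, threshold: float = 0.8) -> EvalReport:
    """Run one documented check and return a shareable report."""
    return EvalReport(metric="accuracy",
                      value=accuracy(y_true, y_pred),
                      threshold=threshold)
```

Because the module takes only labels and predictions and records its threshold in the report, the same check can be re-run at design, deployment, and operation stages, and its output compared across evaluators.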
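Full causal discovery requires specialized tooling; as a simpler, transparent stand-in for documenting how strongly a dataset attribute tracks a proxy target, one might log a plain Pearson correlation together with an explicit caveat that correlation does not establish causation. The function and field names below are illustrative, not part of any standard.

```python
# Hedged sketch: transparently document the attribute-to-proxy relationship.
# This uses simple Pearson correlation, NOT causal discovery; it only flags
# associations that would then need a documented causal-inference follow-up.
import math


def pearson(xs, ys) -> float:
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def document_proxy_relationship(attribute, proxy_target, name: str) -> dict:
    """Return a record suitable for the system's TEVV documentation."""
    return {
        "attribute": name,
        "pearson_r": round(pearson(attribute, proxy_target), 3),
        "note": "correlation only; does not establish a causal relationship",
    }
```

Keeping the caveat inside the record itself helps downstream readers avoid over-interpreting an index that operationalizes an unobservable concept.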