Cyber & IT Supervisory Forum - Additional Resources

- Establish thresholds and alert procedures for dataset representativeness within the context of use.
- Construct datasets in close collaboration with experts who have knowledge of the context of use.
- Respect intellectual property and privacy rights related to datasets and their use, including for the subjects represented in the data.
- Evaluate data representativeness by investigating known failure modes, assessing data quality and diverse sourcing, applying public benchmarks, traditional bias testing, chaos engineering, and stakeholder feedback.
- Use informed consent for individuals providing data used in system testing and evaluation.

- Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.? What are the checks for this model?
- How has the entity identified and mitigated potential impacts of bias in the data, including inequitable or discriminatory outcomes?
- To what extent are the established procedures effective in mitigating bias, inequity, and other concerns resulting from the system?
- To what extent has the entity identified and mitigated potential bias (statistical, contextual, and historical) in the data?
- If the data relates to people, were they told what the dataset would be used for, and did they consent? What community norms exist for data collected from human communications? If consent was obtained, how? Were the people provided with any mechanism to revoke their consent in the future or for certain uses?
- If human subjects were used in the development or testing of the AI system, what protections were put in place to promote their safety and wellbeing?
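The first item above, thresholds and alert procedures for dataset representativeness, can be made concrete with a simple check that compares subgroup shares in a dataset against reference-population shares and flags deviations beyond a tolerance. The function name, the reference shares, and the 5% threshold below are illustrative assumptions, not prescribed by this resource; a real deployment would pick thresholds with domain experts, per the guidance above.

```python
from collections import Counter

def representativeness_alerts(sample_labels, reference_shares, threshold=0.05):
    """Flag subgroups whose share in the dataset deviates from a reference.

    sample_labels    -- iterable of subgroup labels, one per record
    reference_shares -- dict mapping subgroup -> expected share (0..1)
    threshold        -- maximum tolerated absolute deviation (illustrative)

    Returns a dict of {subgroup: {"observed": ..., "expected": ...}} for
    every subgroup whose observed share differs from the expected share
    by more than the threshold.
    """
    total = len(sample_labels)
    if total == 0:
        raise ValueError("empty dataset")
    counts = Counter(sample_labels)
    alerts = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            alerts[group] = {"observed": round(observed, 3),
                             "expected": expected}
    return alerts

# Example: a dataset that over-represents group "a" relative to a
# 50/50 reference population triggers alerts for both groups.
skewed = ["a"] * 70 + ["b"] * 30
print(representativeness_alerts(skewed, {"a": 0.5, "b": 0.5}))
```

Running such a check on a fixed schedule, and routing any non-empty result to an alerting channel, is one way to operationalize the "appropriate interval" question in the documentation items that follow.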

Transparency & Documentation

Organizations can document the following:

