Cyber & IT Supervisory Forum - Additional Resources
Measure 2.10
Privacy risk of the AI system – as identified in the MAP function – is examined and documented.

About
Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals' agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).

Privacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment. Privacy-related risks may influence security, bias, and transparency, and come with tradeoffs against these other characteristics. Like safety and security, specific technical features of an AI system may promote or reduce privacy. AI systems can also present new risks to privacy by enabling inference that identifies individuals or previously private information about individuals.

Privacy-enhancing technologies ("PETs") for AI, as well as data-minimizing methods such as de-identification and aggregation for certain model outputs, can support design for privacy-enhanced AI systems. Under certain conditions, such as data sparsity, privacy-enhancing techniques can result in a loss of accuracy, impacting decisions about fairness and other values in certain domains.

Suggested Actions
- Specify privacy-related values, frameworks, and attributes that are applicable in the context of use through direct engagement with end users and potentially impacted groups and communities.
- Document collection, use, management, and disclosure of personally sensitive information in datasets, in accordance with privacy and data governance policies.
- Quantify privacy-level data aspects such as the ability to identify individuals or groups (e.g., k-anonymity metrics, l-diversity, t-closeness).
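To illustrate the kind of quantification the suggested actions call for, the sketch below computes the k-anonymity of a toy dataset: the size of the smallest group of records that share the same quasi-identifier values. The field names, records, and choice of quasi-identifiers are illustrative assumptions, not part of the measure itself.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return k for a dataset: the size of the smallest group of
    records sharing identical values on the quasi-identifier fields.
    Higher k means any one record blends in with at least k-1 others."""
    groups = Counter(
        tuple(rec[q] for q in quasi_identifiers) for rec in records
    )
    return min(groups.values())

# Hypothetical released dataset: age bracket and truncated ZIP code
# act as quasi-identifiers; diagnosis is the sensitive attribute.
records = [
    {"age": "30-39", "zip": "481**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "481**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "482**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "482**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "482**", "diagnosis": "cold"},
]

print(k_anonymity(records, ["age", "zip"]))  # smallest group has 2 records, so k = 2
```

A release with low k (e.g., k = 1) means at least one individual is uniquely identifiable from the quasi-identifiers alone; l-diversity and t-closeness extend this check to the distribution of the sensitive attribute within each group.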