Cyber & IT Supervisory Forum - Additional Resources
Stress-test system performance under likely scenarios (e.g., concept drift, high load) and beyond known limitations, in consultation with domain experts.
- Test the system under conditions similar to those related to past known incidents or near-misses, and measure system performance and safety characteristics.
- Apply chaos engineering approaches to test systems in extreme conditions and gauge unexpected responses.
- Document the range of conditions under which the system has been tested and demonstrated to fail safely.
- Measure and monitor system performance in real time to enable rapid response when AI system incidents are detected.
- Collect pertinent safety statistics (e.g., out-of-range performance, incident response times, system downtime, injuries) in anticipation of potential information sharing with impacted communities or as required by AI system oversight personnel.
- Align measurement with the goal of continuous improvement. Seek to increase the range of conditions under which the system is able to fail safely through system modifications in response to in-production testing and events.
- Document, practice, and measure incident response plans for AI system incidents, including measuring response and down times.
- Compare documented safety testing and monitoring information with established risk tolerances on an ongoing basis.
- Consult MANAGE for detailed information related to managing safety risks.
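The monitoring and safety-statistics actions above can be sketched in code. The following is a minimal illustration, not an implementation prescribed by this guidance: it assumes a hypothetical per-window accuracy metric, a hypothetical risk-tolerance threshold (`ACCURACY_FLOOR`), and models incident response time as the gap between detection and operator acknowledgement.

```python
"""Minimal sketch of real-time safety monitoring for an AI system.

All names, thresholds, and metric values here are hypothetical
illustrations, not part of the source guidance.
"""
from dataclasses import dataclass, field

ACCURACY_FLOOR = 0.80  # hypothetical risk-tolerance threshold


@dataclass
class SafetyLog:
    # Out-of-range events, stored as (window, metric) pairs
    incidents: list = field(default_factory=list)
    # Incident response times, measured in monitoring windows
    response_times: list = field(default_factory=list)

    def record(self, window: int, metric: float) -> bool:
        """Check one monitoring window; return True if an incident fired."""
        if metric < ACCURACY_FLOOR:
            self.incidents.append((window, metric))
            return True
        return False

    def acknowledge(self, detect_window: int, ack_window: int) -> None:
        """Log response time for later comparison against risk tolerances."""
        self.response_times.append(ack_window - detect_window)


# Replay a fabricated stream of per-window accuracies and log incidents.
log = SafetyLog()
stream = [0.91, 0.89, 0.74, 0.88, 0.69, 0.90]
for window, acc in enumerate(stream):
    if log.record(window, acc):
        # Assume an operator acknowledges two windows after detection.
        log.acknowledge(window, window + 2)

print(len(log.incidents))       # count of out-of-range events
print(sum(log.response_times))  # aggregate response time, in windows
```

Collected statistics of this kind (incident counts, response times, downtime) are what the guidance suggests comparing against established risk tolerances on an ongoing basis.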
Transparency & Documentation
Organizations can document the following:
- What testing, if any, has the entity conducted on the AI system to identify errors and limitations (i.e., adversarial or stress testing)?
- To what extent has the entity documented the AI system’s development, testing methodology, metrics, and performance outcomes?