Cyber IT Supervisory Forum eBook
Internal Use Only
Cyber Attacks on AI Systems

According to NIST: “Adversaries can deliberately confuse AI systems to make them malfunction — and there’s no foolproof defense that their developers can employ.” [1]

CrowdStrike identifies three types of cyber attacks targeting AI systems: [2]

Poisoning attacks: Poisoning attacks target the AI/ML model's training data, the information the model uses to train the algorithm.

Evasion attacks: Evasion attacks target an AI/ML model's input data, applying subtle changes that cause the input to be misclassified and degrade the model's predictive capabilities.

Model tampering: Model tampering makes unauthorized alterations to the parameters or structure of a pre-trained AI/ML model, compromising its ability to produce accurate outputs.
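To make the evasion category concrete, the following is a minimal sketch (not from the source) of a gradient-sign-style perturbation against a hypothetical toy linear classifier. All names (`predict`, `x_clean`, `epsilon`) are illustrative assumptions; real evasion attacks target far more complex models, but the mechanism is the same: a small, targeted change to the input flips the model's output.

```python
import numpy as np

# Hypothetical toy linear classifier: predicts class 1 when w·x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model correctly classifies as class 1.
x_clean = np.array([2.0, 0.5, 1.0])

# Evasion: nudge each feature a small step against the decision
# boundary. For a linear model the gradient of the score w.r.t. the
# input is just w, so we step in the direction -sign(w) (FGSM-style).
epsilon = 0.9
x_adv = x_clean - epsilon * np.sign(w)

# The perturbed input now falls on the other side of the boundary
# and is misclassified as class 0, even though it changed only
# slightly per feature.
```

The subtlety is the point: each feature moved by at most `epsilon`, yet the classification flipped, which is exactly how evasion attacks degrade a model's predictive capabilities without touching the model itself.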
© 2024 THE MITRE CORPORATION. ALL RIGHTS RESERVED. APPROVED FOR PUBLIC RELEASE. DISTRIBUTION UNLIMITED 23-01698-01.
More than 100 organizations are engaged in ATLAS, using its tools and capabilities to understand and mitigate their organizations' AI security risks. Learn more at atlas.mitre.org.