AI Supply Chain Attacks
Compromising AI systems through malicious dependencies, datasets, or pre-trained models
Overall Impact: Widespread compromise of downstream applications

Attack Vectors:

Malicious Dependencies
Impact: Code execution, data exfiltration, model manipulation

Poisoned Datasets
Impact: Model learns attacker-controlled behaviors

Compromised Pre-trained Models
Impact: Mass distribution of compromised models (see the sketch below for why loading an untrusted model can execute code)
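To make the pre-trained model vector concrete, here is a minimal sketch of why pickle-based checkpoint formats are a code-execution vector: unpickling runs whatever the file's producer chose. The class name is illustrative and the payload is a harmless print; a real attack would run shell commands or exfiltrate data.

```python
# Minimal sketch of why pickle-based model files are dangerous: unpickling
# calls whatever __reduce__ returns. The payload here is a harmless print;
# a real attacker would execute shell commands or exfiltrate data.
import pickle

class MaliciousCheckpoint:  # hypothetical attacker-crafted "model" object
    def __reduce__(self):
        # The returned (callable, args) pair is invoked during pickle.loads().
        return (print, ("payload executed while 'loading' the model",))

blob = pickle.dumps(MaliciousCheckpoint())
pickle.loads(blob)  # "loading the model" triggers the payload
```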
Mitigations:

Maintain a chain of custody for models and datasets (hash-manifest sketch below)
Automated scanning of ML dependencies for vulnerabilities (CI scan sketch below)
Cryptographic signatures for model integrity (verification sketch below)
Isolate model loading and inference in secure environments (safe-loading sketch below)
Regular security audits of ML supply chain components
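A minimal sketch of the chain-of-custody item: record a SHA-256 manifest for every artifact when it enters the pipeline, then re-verify before use. The directory layout and manifest format here are assumptions, not a standard.

```python
# Sketch: recording a SHA-256 manifest for model/dataset artifacts so any
# later modification is detectable. Paths and the JSON manifest format are
# assumptions about your pipeline.
import hashlib
import json
import pathlib

def sha256_file(path: pathlib.Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(artifact_dir: str, manifest_path: str) -> None:
    """Hash every file under artifact_dir and store the results as JSON."""
    entries = {
        str(p): sha256_file(p)
        for p in sorted(pathlib.Path(artifact_dir).rglob("*"))
        if p.is_file()
    }
    with open(manifest_path, "w") as f:
        json.dump(entries, f, indent=2)
```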
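For automated dependency scanning, one option is wiring a scanner such as pip-audit into the build. The flags and CI wiring below are assumptions about your setup.

```python
# Sketch: failing a CI step when pip-audit reports known vulnerabilities in
# the pinned ML dependencies. Assumes pip-audit is installed and that a
# requirements.txt sits at the repository root.
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "-r", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    sys.exit("vulnerable dependencies found; blocking the build")
```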
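For cryptographic model integrity, a publisher signs the model bytes and consumers verify before loading. A minimal sketch using Ed25519 from the pyca/cryptography package; the file paths and the key-distribution scheme are assumptions.

```python
# Sketch: verifying a detached Ed25519 signature over a model file before
# it is ever loaded. How the public key reaches the consumer (pinned in the
# repo, fetched over TLS, etc.) is out of scope here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_model(model_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the signature over the model bytes checks out."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    with open(model_path, "rb") as f:
        model_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, model_bytes)
        return True
    except InvalidSignature:
        return False
```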
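For isolated loading, a useful first step is avoiding pickle entirely by using a weights-only format such as safetensors, ideally inside a locked-down container or VM. A minimal sketch assuming the safetensors package and a local model.safetensors file:

```python
# Sketch: loading weights with safetensors, which stores raw tensors and
# cannot execute code on load, unlike pickle-based checkpoints. Running
# this inside a sandboxed container adds a second layer of isolation.
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")  # dict of name -> torch.Tensor
print({name: tuple(t.shape) for name, t in state_dict.items()})
```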
Case Studies:

SolarWinds-Style AI Attack (2023)
A compromised ML library distributed through PyPI affected thousands of AI applications, enabling data exfiltration and model manipulation.
HuggingFace Model Backdoor (2024)
A popular pre-trained model was found to contain backdoor triggers that activated on specific input patterns, affecting downstream applications.
Dataset Poisoning Campaign (2024)
A large-scale poisoning campaign targeted public training datasets, introducing subtle biases and backdoors into models trained on the affected data.