
AI Supply Chain Attacks

Compromising AI systems through malicious dependencies, datasets, or pre-trained models

Threat Categories
Poisoned Pre-trained Models
Critical

Impact: Widespread compromise of downstream applications

Examples:

Backdoored BERT models
Trojan ResNet weights
Malicious HuggingFace models
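
Pickle-serialized checkpoints can embed code that executes the moment the file is deserialized, which is how many backdoored or trojaned models deliver their payload. As a rough pre-load check, the pickle opcode stream can be scanned for imports outside an expected set. This is a minimal sketch; the allowlist, file name, and zip handling are assumptions to adapt to the models you actually ship:

import pickletools
import zipfile

# Module prefixes a plain PyTorch/NumPy checkpoint is expected to reference.
# Anything else (os, subprocess, builtins, ...) deserves manual review.
# Illustrative only -- tune this to your own models.
EXPECTED_PREFIXES = ("torch.", "collections.", "numpy.", "_codecs.")

def iter_pickle_streams(path):
    """Yield raw pickle streams; PyTorch .pt/.bin checkpoints are zip archives."""
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".pkl"):
                    yield zf.read(name)
    else:
        with open(path, "rb") as f:
            yield f.read()

def suspicious_imports(path):
    """Collect GLOBAL/INST references that fall outside the allowlist."""
    findings = []
    for stream in iter_pickle_streams(path):
        for opcode, arg, _pos in pickletools.genops(stream):
            if opcode.name in ("GLOBAL", "INST"):
                ref = str(arg).replace(" ", ".")  # genops reports "module name"
                if not ref.startswith(EXPECTED_PREFIXES):
                    findings.append(ref)
            elif opcode.name == "STACK_GLOBAL":
                # Protocol 4+ imports need stack reconstruction; flag for review.
                findings.append("<STACK_GLOBAL opcode: resolve manually>")
    return findings

hits = suspicious_imports("downloaded_model.pt")  # hypothetical checkpoint path
if hits:
    print("Unexpected references -- do not load:", sorted(set(hits)))
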
Malicious Dependencies
Critical

Impact: Code execution, data exfiltration, model manipulation

Examples:

PyPI package typosquatting
npm AI library backdoors
Compromised ML frameworks
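
Typosquatting works because the malicious name is only a character or two away from a popular ML library. A minimal pre-install check can compare a requested package against an allowlist of what the project actually uses; the allowlist below is a hypothetical example:

import difflib

# Hypothetical allowlist of packages this project is approved to install.
APPROVED = ["torch", "tensorflow", "transformers", "numpy", "scikit-learn", "pandas"]

def check_package_name(requested):
    """Flag names suspiciously close to, but not exactly, an approved package."""
    if requested in APPROVED:
        return "ok"
    near = difflib.get_close_matches(requested, APPROVED, n=1, cutoff=0.8)
    if near:
        return f"suspicious: '{requested}' resembles approved package '{near[0]}'"
    return f"unknown: '{requested}' is not on the allowlist"

print(check_package_name("tensorfIow"))     # capital I instead of lowercase l
print(check_package_name("transformers"))   # exact match passes
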
Dataset Poisoning
High

Impact: Model learns attacker-controlled behaviors

Examples:

Backdoored ImageNet subsets
Poisoned text corpora
Malicious training data
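
A common mitigation is to pin every training file to a hash recorded when the dataset was originally vetted, so silently replaced or appended samples are caught before training starts. A minimal sketch; the manifest format (relative path mapped to SHA-256) and directory layout are assumptions:

import hashlib
import json
import pathlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large dataset shards need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir, manifest_path):
    """Compare every file against a manifest written when the dataset was vetted."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    return [
        rel_path
        for rel_path, expected in manifest.items()
        if sha256_of(pathlib.Path(data_dir) / rel_path) != expected
    ]

changed = verify_dataset("data/training_set", "data/manifest.json")  # hypothetical paths
if changed:
    raise RuntimeError(f"Dataset files changed since vetting: {changed}")
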
Model Hub Compromise
Critical

Impact: Mass distribution of compromised models

Examples:

HuggingFace account takeover
TensorFlow Hub injection
PyTorch Hub malware
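
Pinning downloads to an immutable revision limits the blast radius of an account takeover: a hijacked repository can quietly swap the files behind a branch name like "main", but not behind a specific commit hash. A sketch using huggingface_hub; the repository id, commit hash, and expected digest are placeholders recorded when the model was first reviewed:

import hashlib
from huggingface_hub import hf_hub_download

REPO_ID = "some-org/some-model"                              # placeholder
PINNED_COMMIT = "0123456789abcdef0123456789abcdef01234567"   # placeholder commit hash
EXPECTED_SHA256 = "<digest recorded at review time>"         # placeholder

path = hf_hub_download(
    repo_id=REPO_ID,
    filename="model.safetensors",
    revision=PINNED_COMMIT,   # pin to a commit, not a mutable branch
)

digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
if digest != EXPECTED_SHA256:
    raise RuntimeError("Downloaded model does not match the reviewed artifact")
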
Defense Strategies
Model Provenance Tracking

Maintain chain of custody for models and datasets
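
In practice this can start as an append-only log written whenever an artifact enters the pipeline: where it came from, who approved it, and its content hash. A minimal sketch with illustrative field names and paths:

import datetime
import hashlib
import json
import pathlib

def record_provenance(artifact_path, source_url, approved_by, log_path="provenance.jsonl"):
    """Append a provenance entry for a model or dataset artifact to a JSONL log."""
    digest = hashlib.sha256(pathlib.Path(artifact_path).read_bytes()).hexdigest()
    entry = {
        "artifact": str(artifact_path),
        "sha256": digest,
        "source": source_url,
        "approved_by": approved_by,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical artifact and source.
record_provenance("models/classifier.pt", "https://example.com/model-v1", "ml-platform-team")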

Dependency Scanning

Automated scanning of ML dependencies for vulnerabilities
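
For Python-based stacks this usually means wiring an auditing tool into CI. A sketch that shells out to pip-audit (assumed to be installed) and fails the build when findings are reported; the exact flags and report handling should be adapted to your tooling:

import json
import subprocess
import sys

def audit_dependencies():
    """Run pip-audit against the current environment and fail on known vulnerabilities."""
    result = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True,
        text=True,
    )
    # pip-audit exits non-zero when it finds vulnerable packages.
    if result.returncode != 0:
        if result.stdout:
            print(json.dumps(json.loads(result.stdout), indent=2))
        sys.exit("Vulnerable dependencies found -- failing the build")
    print("No known vulnerabilities reported")

if __name__ == "__main__":
    audit_dependencies()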

Model Signing & Verification

Cryptographic signatures for model integrity
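
A sketch of detached signing with Ed25519 via the cryptography package: the publisher signs the model bytes at release time and consumers verify before anything is deserialized. Key handling is simplified here for illustration; real deployments keep the private key in a KMS or HSM and distribute the public key out of band:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the released artifact.
private_key = Ed25519PrivateKey.generate()   # in practice, loaded from a KMS/HSM
public_key = private_key.public_key()        # shipped to consumers out of band

model_bytes = open("models/classifier.pt", "rb").read()   # hypothetical model file
signature = private_key.sign(model_bytes)

# Consumer side: verify before the loader ever touches the file.
try:
    public_key.verify(signature, model_bytes)
    print("Signature valid -- safe to hand off to the loader")
except InvalidSignature:
    raise SystemExit("Model failed signature verification; refusing to load")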

Sandboxed Execution

Isolate model loading and inference in secure environments
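
Because deserializing an untrusted checkpoint can execute code, the load should happen in a throwaway process with a stripped environment and a timeout rather than inside the main service. A minimal sketch of the process-isolation part; container or seccomp-level confinement would wrap around this in production, and the weights_only flag is only honored by newer PyTorch releases:

import subprocess
import sys
import textwrap

# Runs inside the sandbox process, never in the main service.
LOADER = textwrap.dedent("""
    import sys, torch
    state = torch.load(sys.argv[1], map_location="cpu", weights_only=True)
    print(f"loaded {len(state)} entries")
""")

def load_in_sandbox(model_path, timeout_s=60):
    """Deserialize an untrusted checkpoint in a separate, minimal-environment process."""
    result = subprocess.run(
        [sys.executable, "-c", LOADER, model_path],
        env={"PATH": "/usr/bin"},   # no tokens, keys, or proxy settings inherited
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Sandboxed load failed: {result.stderr.strip()}")
    return result.stdout.strip()

print(load_in_sandbox("models/untrusted.pt"))   # hypothetical path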

Supply Chain Audits

Regular security audits of ML supply chain components
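
Audits go faster when the artifact inventory is generated automatically instead of reconstructed by hand. A small sketch that snapshots every model and dataset file under a directory into a report an auditor can diff between reviews; the directory layout, suffix list, and report name are assumptions:

import hashlib
import json
import pathlib

AUDITED_SUFFIXES = {".pt", ".bin", ".safetensors", ".onnx", ".csv", ".parquet"}

def build_inventory(root="artifacts", report="audit_inventory.json"):
    """Write a snapshot of ML artifacts (path, size, sha256) for audit review."""
    entries = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in AUDITED_SUFFIXES:
            entries.append({
                "path": str(path),
                "bytes": path.stat().st_size,
                "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            })
    pathlib.Path(report).write_text(json.dumps(entries, indent=2))
    return len(entries)

print(f"Inventoried {build_inventory()} artifacts")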

Notable Incidents

SolarWinds-Style AI Attack (2023)

A compromised ML library distributed through PyPI affected thousands of AI applications, enabling data exfiltration and model manipulation.

HuggingFace Model Backdoor (2024)

A popular pre-trained model was found to contain backdoor triggers that activated on specific input patterns, affecting downstream applications.

Dataset Poisoning Campaign (2024)

A large-scale poisoning campaign targeted public datasets used for training, introducing subtle biases and backdoors into models trained on that data.