LLM03:2025 Supply Chain
LLM supply chains are susceptible to vulnerabilities in training data, pre-trained models, and deployment platforms, which can lead to biased outputs, security breaches, or system failures.
While traditional software supply-chain risk centers on code flaws and vulnerable dependencies, ML extends that risk to third-party pre-trained models and datasets, which can be manipulated through tampering or poisoning attacks.
Impact Level: High
Attack Surface: Models, Data, Dependencies
Risk Source: Third-party Components
1. Traditional Third-party Package Vulnerabilities
Outdated or deprecated third-party components that attackers can exploit to compromise LLM applications. This mirrors OWASP A06:2021 (Vulnerable and Outdated Components), but the risk is amplified when such components are pulled in during model development or fine-tuning.
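In practice, teams compare installed dependency versions against an advisory feed. A minimal sketch of that check, using a hypothetical advisory list (package name mapped to the first patched version; the entries shown are illustrative, not real advisories):

```python
# Minimal sketch: flag installed dependencies that predate a patched release.
# ADVISORIES maps package name -> first fixed version (hypothetical data).
ADVISORIES = {"transformers": (4, 36, 0), "requests": (2, 31, 0)}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '4.30.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def vulnerable(installed: dict) -> list:
    """Return names of installed packages older than the advised fix."""
    return [name for name, version in installed.items()
            if name in ADVISORIES and parse_version(version) < ADVISORIES[name]]

print(vulnerable({"transformers": "4.30.1", "requests": "2.31.0"}))
# → ['transformers']
```

Real deployments would use a dedicated scanner and a live vulnerability database rather than a hard-coded table, but the comparison logic is the same.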
2. Vulnerable Pre-Trained Models
Models are binary black boxes, so static inspection offers little security assurance. A vulnerable model can contain hidden biases, backdoors, or malicious features that standard safety evaluations do not surface.
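Pickle-based model formats are a concrete example of why inspection is hard: loading a pickle can execute arbitrary code. As a rough illustration (not a complete scanner), one can at least flag pickle opcodes that trigger imports or calls on load:

```python
import pickle
import pickletools

# Opcodes that can import modules or invoke callables during unpickling.
# This list is illustrative; real scanners track many more patterns.
DANGEROUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def suspicious_opcodes(payload: bytes) -> set:
    """Return the dangerous pickle opcodes present in a serialized payload."""
    return {op.name for op, arg, pos in pickletools.genops(payload)} & DANGEROUS

benign = pickle.dumps([1, 2, 3])   # plain data: no import/call opcodes
risky = pickle.dumps(print)        # pickling any callable emits a GLOBAL-style opcode

print(suspicious_opcodes(benign))  # → set()
print(suspicious_opcodes(risky))   # → {'STACK_GLOBAL'}
```

Safer formats such as safetensors avoid the problem by design, since they store only tensors and cannot carry executable payloads.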
3. Vulnerable LoRA Adapters
LoRA (Low-Rank Adaptation) fine-tuning introduces new risks: a malicious adapter, once merged into a pre-trained base model, can compromise the integrity and security of that model, particularly in collaborative or shared-hosting environments.
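One mitigation is to pin each adapter artifact to a digest that the team has reviewed, and refuse to apply anything else. A minimal sketch, assuming a hypothetical team-maintained allowlist of SHA-256 digests (the digest below is that of the bytes `b"test"`, used purely for illustration):

```python
import hashlib

# Hypothetical allowlist of adapter digests the team has reviewed and approved.
TRUSTED_ADAPTERS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def adapter_is_trusted(blob: bytes) -> bool:
    """Only apply an adapter whose SHA-256 digest appears in the allowlist."""
    return hashlib.sha256(blob).hexdigest() in TRUSTED_ADAPTERS

print(adapter_is_trusted(b"test"))      # → True
print(adapter_is_trusted(b"tampered"))  # → False
```

The allowlist itself then becomes the asset to protect, which is why it typically lives in version control behind code review.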
4. Weak Model Provenance
Currently, there are no strong provenance assurances for published models. Model Cards document a model but offer no guarantee of its origin, and an attacker who compromises a supplier's account on a model hub can publish tampered weights under a trusted name.
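Until signed provenance is widespread, a consumer can at least record a digest manifest when a model release is first vetted and re-verify it on every subsequent download. A minimal sketch of that manifest check (file names and contents below are hypothetical):

```python
import hashlib
import json

def build_manifest(files: dict) -> str:
    """Record the SHA-256 of each artifact (name -> bytes) as a JSON manifest."""
    digests = {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}
    return json.dumps(digests, sort_keys=True)

def verify_manifest(files: dict, manifest_json: str) -> list:
    """Return names of artifacts whose digest no longer matches the manifest."""
    manifest = json.loads(manifest_json)
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() != manifest.get(name)]

artifacts = {"model.safetensors": b"weights-v1", "tokenizer.json": b"vocab"}
manifest = build_manifest(artifacts)          # captured at vetting time
artifacts["model.safetensors"] = b"tampered"  # simulate a compromised re-download
print(verify_manifest(artifacts, manifest))   # → ['model.safetensors']
```

This detects tampering after the initial vetting; it cannot prove the first download was clean, which is the gap that signing schemes such as Sigstore-style attestations aim to close.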
5. On-Device LLM Vulnerabilities
Deploying LLMs on end-user devices expands the attack surface: compromised manufacturing processes, or exploitation of device OS and firmware vulnerabilities, can be used to tamper with the model.
6. Licensing Risks
AI development draws on diverse software and dataset licenses, creating legal risk if they are not tracked and managed. Different licenses impose varying legal requirements and usage restrictions.
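A common control is a license policy gate over the bill of materials: every component's declared SPDX identifier is checked against an allowlist before release. A minimal sketch, with a hypothetical policy and component list:

```python
# Hypothetical policy: only these SPDX license identifiers may ship.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def license_violations(components: dict) -> list:
    """Return components (name -> SPDX id) whose license falls outside policy."""
    return [name for name, spdx_id in components.items()
            if spdx_id not in ALLOWED_LICENSES]

print(license_violations({
    "base-model": "Apache-2.0",
    "dataset-x": "CC-BY-NC-4.0",  # non-commercial clause: blocked by policy
}))
# → ['dataset-x']
```

In a real pipeline the component list would come from an SBOM or package metadata rather than being hand-written, but the gate logic is the same.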