Back to OWASP Top 10#1 Critical Risk
LLM01:2025 Prompt Injection
The most critical vulnerability in LLM applications where user prompts alter the model's behavior or output in unintended ways, potentially bypassing safety measures and enabling unauthorized actions.
Vulnerability Overview
A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, as long as the content is parsed by the model.
Impact Level
CriticalAttack Vector
User Input
Exploitability
High
Direct Prompt Injections
Direct prompt injections occur when a user's prompt input directly alters the behavior of the model in unintended ways.
- • Intentional malicious crafting
- • Unintentional triggering
- • Immediate model behavior change
- • User-controlled input vector
Indirect Prompt Injections
Indirect prompt injections occur when an LLM accepts input from external sources that contain hidden instructions.
- • External content manipulation
- • Website or file-based attacks
- • Hidden instruction embedding
- • Supply chain vulnerabilities
Multimodal Risks
The rise of multimodal AI introduces unique prompt injection risks where malicious actors could exploit interactions between modalities, such as hiding instructions in images that accompany benign text.