🤖
Large Language Model Attacks
3 Attack Techniques
Security vulnerabilities and attack techniques targeting LLM systems, including prompt injection, jailbreaking, and model manipulation.
Filter Attacks
Prompt Injection
CriticalLow
Malicious prompts that manipulate LLM behavior to bypass safety measures.
LLM Jailbreaking
HighMedium
Techniques to bypass AI safety constraints through creative prompt engineering.
Model Inversion
HighHigh
Extracting sensitive training data from language models.
Category Statistics
1
Critical
2
High
0
Medium
1
Low Complexity