Vulnerability Guide

AI Vulnerability Types

Comprehensive guide to AI and machine learning vulnerability types. Understand threats, attack vectors, and defense strategies.

LLM & GenAI Vulnerabilities

Prompt Injection
Critical

Malicious inputs that manipulate LLM behavior by injecting instructions into prompts, bypassing safety controls.

Learn More
Training Data Poisoning
Critical

Injection of malicious data into training sets to compromise model behavior or create backdoors.

Learn More
Model Extraction
High

Stealing model parameters, architecture, or training data through API queries and analysis.

Learn More
Insecure Output Handling
High

Insufficient validation of LLM outputs leading to XSS, SQL injection, or other downstream vulnerabilities.

Learn More

Model Security Vulnerabilities

Adversarial Examples
High

Carefully crafted inputs designed to fool models into making incorrect predictions or classifications.

Learn More
Model Inversion
High

Reconstructing training data or sensitive information from model outputs and parameters.

Learn More
Membership Inference
Medium

Determining whether specific data was used in model training, potentially exposing sensitive information.

Learn More
Backdoor Attacks
Critical

Hidden triggers embedded in models that cause malicious behavior when activated by specific inputs.

Learn More

Agent & System Vulnerabilities

Agent Manipulation
High

Exploiting autonomous agents to perform unauthorized actions or bypass security policies.

Learn More
Supply Chain Attacks
Critical

Compromising AI systems through malicious dependencies, pre-trained models, or datasets.

Learn More
Insecure Plugin Design
High

Vulnerabilities in LLM plugins and extensions that allow unauthorized access or code execution.

Learn More
Excessive Agency
Medium

AI systems granted excessive permissions or autonomy leading to unintended or harmful actions.

Learn More

Learn More About AI Security

Explore our comprehensive resources on AI vulnerabilities and security best practices.