AI Security Glossary

Comprehensive dictionary of 100+ AI security terms, definitions, and concepts. Learn about LLM security, GenAI threats, agentic AI, adversarial attacks, prompt injection, deepfakes, machine learning security, and compliance frameworks. Essential AI terminology for security professionals.

161
Security Terms
16
Categories
46
High-Risk Terms
602
Cross-References
Categories:
Risk Level:
Attack TechniquesHigh Risk
Adversarial Attack
A technique that involves adding small, often imperceptible perturbations to input data to cause AI models to make incorrect predictions or classifications.

Real-World Examples

  • Adding noise to images to fool image classifiers
  • Modifying text to bypass content filters

Related Terms

Adversarial ExamplesEvasion AttackPerturbation
Agentic SecurityCritical Risk
Agent Hijacking
An attack where malicious actors gain control of autonomous AI agents, redirecting their actions to serve unintended purposes while maintaining the appearance of normal operation.

Real-World Examples

  • Redirecting a trading bot to make unauthorized transactions
  • Hijacking a customer service agent to leak sensitive data

Related Terms

Goal HijackingAgent PoisoningAutonomous Agent Security
Model SecurityCritical Risk
Backdoor Attack
A type of attack where malicious functionality is embedded into an AI model during training, activated by specific trigger patterns in the input.

Real-World Examples

  • A model that misclassifies images containing a specific watermark
  • An LLM that generates harmful content when prompted with a secret phrase

Related Terms

Trojan AttackModel PoisoningTrigger Pattern
LLM SecurityHigh Risk
Context Window Poisoning
An attack technique that involves injecting malicious content into the context window of large language models to influence their responses or extract sensitive information.

Real-World Examples

  • Injecting malicious instructions in document summaries
  • Poisoning chat history to influence future responses

Related Terms

Context InjectionPrompt InjectionContext Manipulation
Training SecurityHigh Risk
Data Poisoning
The practice of intentionally corrupting training data to compromise the integrity and performance of machine learning models.

Real-World Examples

  • Adding mislabeled examples to training datasets
  • Injecting biased data to skew model predictions

Related Terms

Training Data ManipulationDataset CorruptionSupply Chain Attack
GenAI SecurityHigh Risk
Deepfake
Synthetic media created using deep learning techniques to replace a person's likeness with someone else's, often used for deception or fraud.

Real-World Examples

  • Fake video calls for CEO fraud
  • Synthetic audio for voice phishing

Related Terms

Synthetic MediaFace SwapVoice Cloning
Privacy ProtectionLow Risk
Differential Privacy
A mathematical framework for quantifying and limiting the privacy loss when statistical information about a dataset is released.

Real-World Examples

  • Adding calibrated noise to query results
  • Protecting individual records in aggregate statistics

Related Terms

Privacy BudgetNoise AdditionPrivacy Preservation
Privacy AttacksMedium Risk
Embedding Inversion
A technique to reconstruct original data from learned embeddings or representations, potentially exposing sensitive information.

Real-World Examples

  • Reconstructing faces from facial recognition embeddings
  • Extracting text from sentence embeddings

Related Terms

Model InversionFeature ExtractionRepresentation Attack
Distributed SecurityHigh Risk
Federated Learning Attack
Attacks targeting federated learning systems where malicious participants can compromise the global model through poisoned local updates.

Real-World Examples

  • Malicious clients sending poisoned gradients
  • Coordinated attacks on federated networks

Related Terms

Byzantine AttackModel PoisoningDistributed Learning
Agentic SecurityCritical Risk
Goal Hijacking
An attack where an AI agent's objectives are maliciously altered or redirected, causing it to pursue unintended goals while appearing to function normally.

Real-World Examples

  • Changing a recommendation system's goals to promote specific products
  • Redirecting an autonomous vehicle's destination

Related Terms

Agent HijackingObjective ManipulationGoal Misalignment
LLM SecurityMedium Risk
Hallucination
When AI models, particularly language models, generate false or nonsensical information that appears plausible, potentially leading to misinformation.

Real-World Examples

  • LLMs citing non-existent research papers
  • Generating fake historical facts

Related Terms

ConfabulationFalse GenerationModel Uncertainty
Privacy AttacksMedium Risk
Inference Attack
Attacks that exploit the outputs or behavior of machine learning models to infer sensitive information about the training data or model parameters.

Real-World Examples

  • Determining if specific data was used in training
  • Inferring demographic information from model outputs

Related Terms

Model InversionMembership InferenceProperty Inference
LLM SecurityHigh Risk
Jailbreaking
Techniques used to bypass safety measures and content filters in AI systems, particularly large language models, to generate prohibited content.

Real-World Examples

  • Using roleplay scenarios to bypass content restrictions
  • Encoding harmful requests to avoid detection

Related Terms

Prompt InjectionSafety BypassContent Filter Evasion
Model SecurityMedium Risk
Knowledge Distillation Attack
An attack that exploits the knowledge distillation process to extract information from teacher models or inject malicious knowledge into student models.

Real-World Examples

  • Extracting proprietary model knowledge through distillation
  • Poisoning student models via malicious teachers

Related Terms

Model ExtractionTeacher-Student AttackKnowledge Transfer
GenAI SecurityMedium Risk
Latent Space Manipulation
Techniques that modify the latent representations in generative models to control or manipulate the generated outputs in specific ways.

Real-World Examples

  • Editing facial expressions in generated images
  • Modifying text style in language generation

Related Terms

Latent Code EditingStyle TransferSemantic Manipulation
Privacy AttacksMedium Risk
Membership Inference Attack
An attack that determines whether a specific data point was included in a model's training dataset by analyzing the model's behavior.

Real-World Examples

  • Determining if a person's medical record was used in training
  • Identifying training images from model responses

Related Terms

Training Data InferencePrivacy LeakageModel Interrogation
IP TheftHigh Risk
Model Extraction
The process of stealing or replicating a machine learning model's functionality by querying it and training a substitute model on the responses.

Real-World Examples

  • Cloning a proprietary image classifier
  • Replicating a commercial recommendation system

Related Terms

Model StealingAPI AbuseIntellectual Property Theft
Model SecurityCritical Risk
Neural Backdoor
A hidden functionality embedded in neural networks that can be activated by specific trigger patterns, causing the model to behave maliciously.

Real-World Examples

  • A face recognition system that fails for specific patterns
  • A text classifier that misclassifies when certain words are present

Related Terms

Backdoor AttackTrojan Neural NetworkHidden Trigger
LLM SecurityHigh Risk
Prompt Injection
An attack technique where malicious instructions are embedded in prompts to manipulate large language models into performing unintended actions.

Real-World Examples

  • Injecting 'ignore previous instructions' in user input
  • Embedding malicious prompts in documents

Related Terms

Indirect Prompt InjectionContext InjectionInstruction Hijacking
Model SecurityMedium Risk
Quantization Attack
Attacks that exploit the quantization process used to compress neural networks, potentially introducing vulnerabilities or degrading performance.

Real-World Examples

  • Exploiting reduced precision to cause misclassifications
  • Attacking quantized models with specific inputs

Related Terms

Model Compression AttackPrecision ReductionQuantization Noise
Security TestingLow Risk
Red Teaming
A systematic approach to testing AI systems by simulating adversarial attacks to identify vulnerabilities and weaknesses before deployment.

Real-World Examples

  • Testing LLMs for harmful content generation
  • Evaluating autonomous systems for safety failures

Related Terms

Adversarial TestingSecurity AssessmentPenetration Testing
Attack InfrastructureMedium Risk
Shadow Model
A model trained to mimic the behavior of a target model, often used as a stepping stone for more sophisticated attacks like membership inference.

Real-World Examples

  • Training a shadow model to attack a private classifier
  • Using shadow models for membership inference

Related Terms

Surrogate ModelModel MimickingAttack Proxy
Privacy AttacksHigh Risk
Training Data Extraction
Attacks that attempt to recover specific training examples from machine learning models, potentially exposing sensitive or private information.

Real-World Examples

  • Extracting personal information from language models
  • Recovering training images from generative models

Related Terms

Data ReconstructionTraining Data LeakageMemorization Attack
Attack TechniquesHigh Risk
Universal Adversarial Perturbation
A single perturbation that can fool a neural network on most inputs from a given distribution, making it particularly dangerous for real-world attacks.

Real-World Examples

  • A single noise pattern that fools most image classifiers
  • Universal patches that cause misclassification

Related Terms

Universal AttackTransferable PerturbationRobust Adversarial
Authentication SecurityHigh Risk
Verification Bypass
Techniques used to circumvent AI-based verification systems, such as biometric authentication or content verification mechanisms.

Real-World Examples

  • Using deepfakes to bypass facial recognition
  • Spoofing voice authentication systems

Related Terms

Authentication BypassBiometric SpoofingIdentity Fraud
Content AuthenticationLow Risk
Watermarking
Techniques for embedding invisible markers in AI-generated content to enable detection and verification of synthetic media.

Real-World Examples

  • Watermarking AI-generated images
  • Embedding signatures in synthetic text

Related Terms

Content ProvenanceSynthetic Media DetectionDigital Fingerprinting
Attack TechniquesHigh Risk
Zero-Shot Attack
Attacks that work against AI models without requiring prior knowledge of the model's architecture, training data, or parameters.

Real-World Examples

  • Attacking unknown models through API queries
  • Using transferable adversarial examples

Related Terms

Black-box AttackQuery-based AttackTransfer Attack
Attack TechniquesHigh Risk
API Poisoning
An attack where malicious data is injected through API endpoints to corrupt AI model training or inference processes, often targeting real-time learning systems.

Real-World Examples

  • Injecting malicious feedback through user rating APIs
  • Corrupting recommendation systems via API calls

Related Terms

Data PoisoningAPI SecurityReal-time Learning Attack
Model SecurityMedium Risk
Bias Amplification
The phenomenon where AI systems amplify existing biases in training data, leading to discriminatory outcomes and unfair treatment of certain groups.

Real-World Examples

  • Hiring algorithms favoring certain demographics
  • Credit scoring systems with racial bias

Related Terms

Algorithmic BiasFairnessDiscrimination
LLM SecurityHigh Risk
Chain-of-Thought Manipulation
An attack technique that exploits the reasoning process of large language models by manipulating their step-by-step thinking to reach malicious conclusions.

Real-World Examples

  • Guiding LLMs to harmful conclusions through flawed reasoning
  • Manipulating multi-step problem solving

Related Terms

Reasoning AttackPrompt EngineeringLogic Manipulation
Attack TechniquesMedium Risk
Distributed Denial of Intelligence
A coordinated attack that overwhelms AI systems with computationally expensive queries, causing service degradation or complete failure.

Real-World Examples

  • Flooding LLM APIs with complex reasoning tasks
  • Overloading image generation services

Related Terms

DDoSResource ExhaustionComputational Attack
Agentic SecurityHigh Risk
Emergent Behavior Exploitation
Attacks that exploit unexpected behaviors that emerge from complex AI systems, particularly in multi-agent environments or large-scale deployments.

Real-World Examples

  • Exploiting unexpected agent interactions
  • Leveraging emergent communication protocols

Related Terms

Emergent PropertiesSystem ComplexityUnintended Behavior
Model SecurityHigh Risk
Fine-tuning Attack
An attack where adversaries fine-tune pre-trained models on malicious data to introduce backdoors or alter model behavior while maintaining performance on benign tasks.

Real-World Examples

  • Fine-tuning language models to generate harmful content
  • Adapting vision models for surveillance evasion

Related Terms

Transfer Learning AttackModel AdaptationBackdoor Injection
Privacy AttacksHigh Risk
Gradient Leakage
A privacy attack where sensitive information about training data is extracted by analyzing gradient updates in federated learning or distributed training scenarios.

Real-World Examples

  • Reconstructing images from gradient updates
  • Extracting text from language model gradients

Related Terms

Gradient InversionFederated Learning AttackPrivacy Leakage
Privacy AttacksMedium Risk
Homomorphic Encryption Bypass
Techniques to circumvent privacy-preserving computation methods that allow processing of encrypted data without decryption.

Real-World Examples

  • Side-channel attacks on encrypted inference
  • Timing attacks on homomorphic operations

Related Terms

Privacy-Preserving MLEncrypted ComputationCryptographic Attack
LLM SecurityHigh Risk
Instruction Following Subversion
An attack that exploits the instruction-following capabilities of AI systems to make them perform unintended actions while appearing to follow legitimate commands.

Real-World Examples

  • Embedding malicious instructions in seemingly benign prompts
  • Chaining instructions to bypass safety measures

Related Terms

Command InjectionInstruction HijackingBehavioral Manipulation
Training SecurityMedium Risk
Knowledge Graph Poisoning
An attack that corrupts knowledge graphs used by AI systems, introducing false relationships or entities to manipulate reasoning and decision-making.

Real-World Examples

  • Injecting false facts into knowledge bases
  • Corrupting entity relationships in graph databases

Related Terms

Graph Neural Network AttackKnowledge Base CorruptionSemantic Attack
GenAI SecurityCritical Risk
Latent Space Backdoor
A sophisticated backdoor attack that embeds triggers in the latent space of generative models, activated by specific patterns in the input representation.

Real-World Examples

  • Backdoors in VAE latent spaces
  • Trigger patterns in diffusion model embeddings

Related Terms

Representation AttackGenerative Model SecurityHidden Trigger
Attack TechniquesHigh Risk
Multi-Modal Attack
Attacks that exploit vulnerabilities across multiple input modalities (text, image, audio) in multi-modal AI systems to achieve malicious objectives.

Real-World Examples

  • Using audio to manipulate vision-language models
  • Cross-modal adversarial examples

Related Terms

Cross-Modal AttackMulti-Modal SecurityModality Fusion
Model SecurityMedium Risk
Neural Architecture Search Poisoning
An attack that corrupts the neural architecture search process to produce models with hidden vulnerabilities or backdoors.

Real-World Examples

  • Biasing NAS to select vulnerable architectures
  • Injecting backdoor-prone components

Related Terms

AutoML AttackArchitecture ManipulationSearch Space Poisoning
Training SecurityHigh Risk
Ontology Manipulation
Attacks that alter the conceptual frameworks and taxonomies used by AI systems, leading to misclassification and reasoning errors.

Real-World Examples

  • Modifying medical ontologies to cause misdiagnosis
  • Corrupting legal taxonomies in AI systems

Related Terms

Semantic AttackConcept DriftTaxonomy Corruption
LLM SecurityHigh Risk
Prompt Chaining Attack
A sophisticated attack technique that uses a sequence of carefully crafted prompts to gradually manipulate AI systems into performing prohibited actions.

Real-World Examples

  • Building up to harmful requests through innocent prompts
  • Chaining context to bypass safety filters

Related Terms

Multi-Step AttackPrompt EngineeringGradual Manipulation
AI FundamentalsLow Risk
Artificial Intelligence (AI)
The simulation of human intelligence in machines that are programmed to think, learn, and make decisions like humans. AI encompasses machine learning, natural language processing, computer vision, and robotics.

Real-World Examples

  • ChatGPT for conversational AI
  • Autonomous vehicles using AI navigation
  • Medical diagnosis AI systems

Related Terms

Machine LearningDeep LearningNeural NetworksCognitive Computing
AI FundamentalsLow Risk
Machine Learning (ML)
A subset of AI that enables systems to learn and improve from experience without being explicitly programmed. ML algorithms build mathematical models based on training data to make predictions or decisions.

Real-World Examples

  • Email spam filters learning from user behavior
  • Recommendation systems on streaming platforms
  • Fraud detection in banking

Related Terms

Supervised LearningUnsupervised LearningReinforcement LearningDeep Learning
AI FundamentalsLow Risk
Deep Learning
A subset of machine learning that uses neural networks with multiple layers (deep neural networks) to learn complex patterns in data. Inspired by the structure and function of the human brain.

Real-World Examples

  • Image recognition systems
  • Natural language processing models
  • Voice assistants

Related Terms

Neural NetworksConvolutional Neural NetworksRecurrent Neural NetworksTransformers
AI FundamentalsLow Risk
Neural Network
A computing system inspired by biological neural networks that make up animal brains. Consists of interconnected nodes (neurons) that process information through weighted connections.

Real-World Examples

  • Image classification networks
  • Language translation models
  • Pattern recognition systems

Related Terms

Deep LearningArtificial NeuronsBackpropagationActivation Function
LLM SecurityMedium Risk
Large Language Model (LLM)
A type of AI model trained on vast amounts of text data to understand and generate human-like text. LLMs can perform tasks like translation, summarization, question-answering, and creative writing.

Real-World Examples

  • ChatGPT
  • Claude
  • Google Bard
  • LLaMA

Related Terms

GPTTransformerNatural Language ProcessingLanguage Model
GenAI SecurityMedium Risk
Generative AI (GenAI)
AI systems capable of generating new content including text, images, audio, video, and code. These models learn patterns from training data and create original outputs based on prompts.

Real-World Examples

  • DALL-E for image generation
  • Midjourney for art creation
  • GitHub Copilot for code generation

Related Terms

LLMDiffusion ModelsGANsSynthetic Media
Agentic SecurityHigh Risk
Agentic AI
Autonomous AI systems that can make decisions, take actions, and interact with their environment independently to achieve specific goals without constant human intervention.

Real-World Examples

  • Autonomous trading bots
  • Customer service chatbots
  • Self-driving vehicles

Related Terms

Autonomous AgentsAI AgentsMulti-Agent SystemsAgentic Infrastructure
AI FundamentalsLow Risk
Transformer Architecture
A deep learning architecture introduced in 2017 that uses attention mechanisms to process sequential data. Transformers revolutionized NLP and are the foundation of modern LLMs.

Real-World Examples

  • BERT for language understanding
  • GPT models for text generation
  • Vision transformers for image processing

Related Terms

Attention MechanismSelf-AttentionBERTGPT
AI FundamentalsMedium Risk
Fine-Tuning
The process of adapting a pre-trained model to a specific task or domain by training it further on a smaller, task-specific dataset. More efficient than training from scratch.

Real-World Examples

  • Fine-tuning GPT for medical diagnosis
  • Adapting image models for specific object detection
  • Customizing LLMs for legal documents

Related Terms

Transfer LearningPre-trained ModelsDomain AdaptationModel Training
AI FundamentalsLow Risk
Transfer Learning
A machine learning technique where knowledge gained from solving one problem is applied to a different but related problem. Reduces training time and data requirements.

Real-World Examples

  • Using ImageNet-trained models for medical imaging
  • Adapting language models for code generation
  • Transferring speech recognition to new languages

Related Terms

Fine-TuningPre-trained ModelsDomain AdaptationKnowledge Transfer
Attack TechniquesHigh Risk
Evasion Attack
Attacks that craft adversarial inputs to cause AI models to misclassify or make incorrect predictions while appearing normal to humans. The most common type of adversarial attack.

Real-World Examples

  • Adding imperceptible noise to images to fool classifiers
  • Modifying text to bypass spam filters
  • Creating adversarial patches

Related Terms

Adversarial AttackAdversarial ExamplesPerturbationFooling Attack
Training SecurityCritical Risk
Poisoning Attack
Attacks that corrupt training data or the training process to introduce vulnerabilities, backdoors, or biases into machine learning models. Can be data poisoning or model poisoning.

Real-World Examples

  • Injecting malicious samples into training datasets
  • Corrupting federated learning updates
  • Poisoning recommendation systems

Related Terms

Data PoisoningModel PoisoningBackdoor AttackTraining Data Manipulation
Privacy AttacksHigh Risk
Model Inversion Attack
An attack that reconstructs sensitive training data or extracts private information by querying a machine learning model and analyzing its outputs or internal representations.

Real-World Examples

  • Reconstructing faces from facial recognition models
  • Extracting text from language models
  • Recovering medical images from diagnostic models

Related Terms

Training Data ExtractionPrivacy AttackInference AttackData Reconstruction
Attack TechniquesHigh Risk
Black-Box Attack
Attacks against AI models where the attacker has no knowledge of the model's architecture, parameters, or training data. Only query access to the model is available.

Real-World Examples

  • Attacking cloud-based ML APIs
  • Attacking proprietary models through public interfaces
  • Transfer attacks on unknown models

Related Terms

White-Box AttackQuery-Based AttackZero-Knowledge AttackAPI Attack
Attack TechniquesHigh Risk
White-Box Attack
Attacks against AI models where the attacker has full knowledge of the model including architecture, parameters, and training data. More powerful but less realistic than black-box attacks.

Real-World Examples

  • Gradient-based adversarial example generation
  • Attacking open-source models
  • Research attacks with full model access

Related Terms

Black-Box AttackGradient-Based AttackModel AccessFull Knowledge Attack
Attack TechniquesHigh Risk
Transfer Attack
An attack where adversarial examples crafted against one model successfully fool a different model, even when the models have different architectures or training data.

Real-World Examples

  • Adversarial images that fool multiple image classifiers
  • Text prompts that work across different LLMs
  • Universal patches for vision models

Related Terms

Adversarial TransferabilityCross-Model AttackUniversal Adversarial PerturbationModel-Agnostic Attack
Training SecurityCritical Risk
Supply Chain Attack
Attacks that compromise AI systems by targeting components in the supply chain, such as pre-trained models, training datasets, ML frameworks, or deployment infrastructure.

Real-World Examples

  • Poisoned pre-trained models from model zoos
  • Malicious ML libraries
  • Compromised training data repositories

Related Terms

Data PoisoningModel BackdoorDependency AttackThird-Party Risk
Model SecurityCritical Risk
Trojan Attack
A type of backdoor attack where malicious functionality is embedded in a model during training, activated by specific trigger patterns. The model performs normally on clean inputs.

Real-World Examples

  • Models that misclassify when specific watermarks are present
  • LLMs generating harmful content with secret triggers
  • Face recognition systems failing for specific patterns

Related Terms

Backdoor AttackNeural TrojanTrigger PatternHidden Functionality
Attack TechniquesHigh Risk
Adversarial Examples
Inputs that are intentionally designed to fool machine learning models. They appear normal to humans but cause models to make incorrect predictions with high confidence.

Real-World Examples

  • Images with imperceptible noise that fool classifiers
  • Text with special characters that bypass filters
  • Audio samples that confuse speech recognition

Related Terms

Adversarial AttackEvasion AttackPerturbationFooling Samples
Attack TechniquesMedium Risk
Perturbation
Small, often imperceptible modifications added to input data to create adversarial examples. The goal is to cause misclassification while keeping the input visually or semantically similar.

Real-World Examples

  • Adding pixel-level noise to images
  • Inserting special characters in text
  • Modifying audio frequencies slightly

Related Terms

Adversarial ExamplesNoise AdditionInput ManipulationAdversarial Crafting
LLM SecurityLow Risk
Token
The basic unit of text processing in language models. Tokens can be words, subwords, or characters. LLMs process and generate text as sequences of tokens.

Real-World Examples

  • GPT-4 processes text in tokens
  • Token limits in API calls
  • Token-based pricing for LLM services

Related Terms

TokenizationSubwordVocabularyText Encoding
LLM SecurityMedium Risk
Context Window
The maximum number of tokens (input + output) that a language model can process in a single interaction. Determines how much information the model can consider at once.

Real-World Examples

  • GPT-4 has a 128K token context window
  • Managing context in long conversations
  • Context window poisoning attacks

Related Terms

Token LimitPrompt LengthMemoryAttention Span
LLM SecurityLow Risk
Temperature
A parameter that controls the randomness of LLM outputs. Higher temperature increases creativity and randomness, while lower temperature makes outputs more deterministic and focused.

Real-World Examples

  • Setting temperature to 0.7 for balanced responses
  • High temperature for creative writing
  • Low temperature for factual answers

Related Terms

SamplingRandomnessCreativityOutput Control
LLM SecurityHigh Risk
System Prompt
Instructions or guidelines provided to an LLM to define its behavior, role, and constraints. System prompts are often targeted in prompt injection attacks.

Real-World Examples

  • Defining ChatGPT's personality
  • Setting safety guidelines for LLMs
  • Configuring assistant behavior

Related Terms

Prompt EngineeringInstruction TuningSystem InstructionsRole Definition
LLM SecurityMedium Risk
Few-Shot Learning
A technique where an LLM learns to perform a task by seeing just a few examples in the prompt. Demonstrates the model's ability to learn from context without fine-tuning.

Real-World Examples

  • Providing examples in prompts to guide LLM behavior
  • Teaching new tasks through demonstrations
  • Context-based task adaptation

Related Terms

In-Context LearningExample-Based LearningPrompt EngineeringZero-Shot Learning
LLM SecurityLow Risk
Zero-Shot Learning
The ability of an LLM to perform a task without any specific training examples or fine-tuning. The model relies on its general knowledge and instruction-following capabilities.

Real-World Examples

  • LLMs translating languages without training
  • Performing new tasks from instructions alone
  • General-purpose AI capabilities

Related Pages

Related Terms

GeneralizationInstruction FollowingTask TransferCapability Emergence
LLM SecurityMedium Risk
Retrieval-Augmented Generation (RAG)
A technique that combines information retrieval with LLM generation. The model retrieves relevant documents and uses them as context to generate more accurate and up-to-date responses.

Real-World Examples

  • Chatbots with access to company documents
  • LLMs answering questions using retrieved articles
  • Enhanced accuracy through external knowledge

Related Terms

Information RetrievalContext EnhancementKnowledge BaseDocument Search
LLM SecurityHigh Risk
Function Calling
The ability of LLMs to call external functions or APIs based on user requests. Enables LLMs to perform actions beyond text generation, such as database queries or API calls.

Real-World Examples

  • LLMs booking flights through APIs
  • Agents executing code based on requests
  • Function calling in autonomous agents

Related Terms

Tool UseAPI IntegrationAction ExecutionAgent Capabilities
LLM SecurityMedium Risk
Reinforcement Learning from Human Feedback (RLHF)
A training technique that uses human feedback to fine-tune LLMs, making them more helpful, harmless, and aligned with human values. Used to improve model safety and behavior.

Real-World Examples

  • Training ChatGPT to be helpful and safe
  • Improving model responses through human ratings
  • Aligning AI with human preferences

Related Terms

AlignmentSafety TrainingHuman FeedbackModel Fine-Tuning
LLM SecurityMedium Risk
Alignment
The process of ensuring AI systems pursue intended goals and behave in ways that are beneficial to humans. Includes safety, helpfulness, and ethical considerations.

Real-World Examples

  • Training models to refuse harmful requests
  • Ensuring AI goals match human values
  • Preventing misaligned behavior

Related Terms

AI SafetyValue AlignmentRLHFEthical AI
GenAI SecurityLow Risk
Diffusion Model
A type of generative model that creates data by gradually removing noise from random noise. Used for high-quality image, audio, and video generation.

Real-World Examples

  • Stable Diffusion for image generation
  • Midjourney's image creation
  • Video generation models

Related Terms

Stable DiffusionDALL-EImage GenerationDenoising
GenAI SecurityMedium Risk
Generative Adversarial Network (GAN)
A framework consisting of two neural networks (generator and discriminator) that compete against each other. The generator creates fake data while the discriminator tries to identify it.

Real-World Examples

  • Creating realistic faces
  • Generating artwork
  • Producing synthetic training data

Related Terms

GeneratorDiscriminatorAdversarial TrainingSynthetic Data
GenAI SecurityHigh Risk
Synthetic Media
Media content (images, video, audio, text) that is artificially generated or manipulated using AI. Includes deepfakes, AI-generated images, and synthetic voices.

Real-World Examples

  • AI-generated profile pictures
  • Deepfake videos
  • Synthetic voice recordings
  • AI-written articles

Related Terms

DeepfakeAI-Generated ContentSynthetic DataFake Media
GenAI SecurityHigh Risk
Voice Cloning
The process of creating a synthetic copy of a person's voice using AI. Can be used for legitimate purposes like accessibility or malicious purposes like fraud.

Real-World Examples

  • AI voice assistants with custom voices
  • Voice phishing attacks
  • Accessibility tools for speech-impaired users

Related Terms

DeepfakeSynthetic AudioVoice SynthesisAudio Deepfake
GenAI SecurityLow Risk
Style Transfer
A technique that applies the artistic style of one image to another while preserving the content. Uses neural networks to separate and recombine style and content.

Real-World Examples

  • Applying Van Gogh style to photos
  • Converting photos to paintings
  • Artistic image transformation

Related Terms

Neural Style TransferImage ManipulationArtistic GenerationContent Preservation
GenAI SecurityLow Risk
Text-to-Image
AI systems that generate images from textual descriptions. Users provide text prompts and the model creates corresponding images using diffusion models or GANs.

Real-World Examples

  • DALL-E generating images from text
  • Midjourney creating artwork
  • Stable Diffusion producing photos

Related Terms

Image GenerationPrompt-to-ImageDALL-EStable Diffusion
GenAI SecurityMedium Risk
Inpainting
The process of filling in missing or masked parts of an image using AI. Can be used for photo editing, restoration, or malicious manipulation.

Real-World Examples

  • Removing objects from photos
  • Restoring damaged images
  • Manipulating evidence in images

Related Terms

Image CompletionImage EditingContent GenerationMask Filling
GenAI SecurityLow Risk
Outpainting
The process of extending an image beyond its original boundaries using AI. Generates new content that seamlessly continues the existing image.

Real-World Examples

  • Extending photos to wider aspect ratios
  • Creating panoramic views
  • Expanding artwork

Related Terms

Image ExtensionContent GenerationBoundary ExpansionImage Completion
Agentic SecurityHigh Risk
Autonomous Agent
An AI system that can operate independently, make decisions, and take actions in an environment to achieve goals without constant human supervision.

Real-World Examples

  • Trading bots making investment decisions
  • Autonomous vehicles navigating roads
  • Customer service agents handling inquiries

Related Terms

AI AgentAgentic AIAutonomous SystemSelf-Directed AI
Agentic SecurityHigh Risk
Multi-Agent System
A system where multiple AI agents interact, collaborate, or compete to achieve individual or collective goals. Can exhibit emergent behaviors and complex interactions.

Real-World Examples

  • Multiple agents coordinating in games
  • Swarm robotics
  • Collaborative problem-solving agents

Related Terms

Agent CollaborationSwarm IntelligenceDistributed AIAgent Communication
Agentic SecurityMedium Risk
ReAct (Reasoning + Acting)
A framework for AI agents that combines reasoning (thinking) and acting (taking actions). Agents alternate between reasoning about the situation and taking actions based on that reasoning.

Real-World Examples

  • Agents planning before executing actions
  • Step-by-step problem solving with tool use
  • Reasoning about consequences before acting

Related Terms

Agent ReasoningAction PlanningChain-of-ThoughtTool Use
Agentic SecurityHigh Risk
Tool Use
The ability of AI agents to use external tools, APIs, or functions to accomplish tasks beyond their core capabilities. Enables agents to interact with the real world.

Real-World Examples

  • Agents using calculators for math
  • Browsing the web for information
  • Executing code to solve problems

Related Terms

Function CallingAPI IntegrationAction ExecutionAgent Capabilities
Agentic SecurityMedium Risk
Agent Orchestration
The coordination and management of multiple AI agents to work together efficiently. Involves task distribution, communication protocols, and conflict resolution.

Real-World Examples

  • Coordinating multiple agents for complex tasks
  • Managing agent workflows
  • Distributing work among agent teams

Related Terms

Multi-Agent CoordinationAgent ManagementTask DistributionWorkflow Orchestration
Agentic SecurityHigh Risk
Model Context Protocol (MCP)
A protocol for managing context and communication between AI models and external systems. Enables secure and standardized interactions in agentic AI systems.

Real-World Examples

  • Secure communication between AI agents
  • Context sharing in multi-agent systems
  • Standardized agent interfaces

Related Terms

Context ManagementProtocol SecurityModel CommunicationAgent Infrastructure
Security TestingLow Risk
Adversarial Training
A defense technique where models are trained on adversarial examples to improve robustness against attacks. The model learns to correctly classify both clean and adversarial inputs.

Real-World Examples

  • Training image classifiers on adversarial images
  • Improving LLM resistance to prompt injection
  • Hardening models against evasion attacks

Related Terms

Robust TrainingDefense MechanismAdversarial ExamplesModel Hardening
Security TestingMedium Risk
Input Sanitization
The process of cleaning and validating user inputs before processing by AI systems. Removes or neutralizes potentially malicious content to prevent attacks.

Real-World Examples

  • Filtering prompt injection attempts
  • Removing malicious code from inputs
  • Validating image formats before processing

Related Terms

Input ValidationData CleaningSanitizationPreprocessing
Security TestingMedium Risk
Output Filtering
The process of screening and filtering AI model outputs to detect and block harmful, biased, or inappropriate content before it reaches users.

Real-World Examples

  • Filtering harmful text from LLMs
  • Detecting deepfakes in generated images
  • Blocking inappropriate AI responses

Related Terms

Content ModerationSafety FiltersOutput ValidationContent Screening
Security TestingMedium Risk
Anomaly Detection
Techniques for identifying unusual patterns or behaviors in AI systems that may indicate attacks, model failures, or security breaches.

Real-World Examples

  • Detecting adversarial inputs
  • Identifying model drift
  • Finding anomalous agent behavior

Related Terms

Outlier DetectionIntrusion DetectionBehavioral AnalysisThreat Detection
Security TestingMedium Risk
Model Monitoring
Continuous observation and analysis of AI models in production to detect performance degradation, security threats, or unexpected behaviors.

Real-World Examples

  • Tracking model accuracy over time
  • Detecting adversarial attacks in real-time
  • Monitoring for data drift

Related Terms

MLOpsModel ObservabilityPerformance MonitoringSecurity Monitoring
Security TestingLow Risk
Explainable AI (XAI)
AI systems designed to provide understandable explanations for their decisions and predictions. Helps users trust and debug AI systems.

Real-World Examples

  • Explaining why a loan was denied
  • Showing which features influenced a prediction
  • Understanding model decision-making

Related Terms

InterpretabilityTransparencyModel ExplainabilityAI Accountability
Security TestingMedium Risk
Robustness
The ability of AI models to maintain performance and make correct predictions even when inputs are perturbed, corrupted, or adversarial.

Real-World Examples

  • Models resistant to adversarial attacks
  • Systems handling noisy inputs
  • Robust image classifiers

Related Terms

Adversarial RobustnessModel ResilienceFault ToleranceStability
Security TestingLow Risk
Certified Defense
Defense mechanisms with mathematical guarantees that models will correctly classify inputs within a certain region, even under adversarial conditions.

Real-World Examples

  • Certified adversarial defenses
  • Provably robust models
  • Formally verified AI systems

Related Terms

Formal VerificationProvable SecurityCertified RobustnessMathematical Guarantees
Privacy ProtectionLow Risk
Differential Privacy
A mathematical framework for quantifying and limiting the privacy loss when statistical information about a dataset is released. Provides formal privacy guarantees.

Real-World Examples

  • Adding calibrated noise to query results
  • Protecting individual records in aggregate statistics
  • Private machine learning

Related Terms

Privacy BudgetNoise AdditionPrivacy PreservationPrivacy Guarantees
Privacy ProtectionLow Risk
Homomorphic Encryption
A form of encryption that allows computation on encrypted data without decrypting it first. Enables privacy-preserving machine learning.

Real-World Examples

  • Training models on encrypted data
  • Performing inference on encrypted inputs
  • Privacy-preserving analytics

Related Terms

Encrypted ComputationPrivacy-Preserving MLCryptographic PrivacySecure Computation
Privacy ProtectionMedium Risk
Federated Learning
A distributed machine learning approach where models are trained across multiple devices or servers without centralizing raw data. Preserves data privacy.

Real-World Examples

  • Training on mobile devices without uploading data
  • Collaborative learning across hospitals
  • Privacy-preserving model training

Related Terms

Distributed LearningPrivacy-Preserving MLEdge ComputingDecentralized Training
Privacy ProtectionMedium Risk
GDPR Compliance
Adherence to the General Data Protection Regulation, which governs data protection and privacy in the EU. AI systems must comply with GDPR requirements for data processing.

Real-World Examples

  • Obtaining consent for AI data processing
  • Providing explanations for automated decisions
  • Implementing data deletion rights

Related Terms

Data ProtectionPrivacy RegulationRight to ExplanationData Minimization
Privacy ProtectionMedium Risk
Right to Explanation
A legal requirement (under GDPR) that individuals have the right to receive meaningful explanations for automated decisions that significantly affect them.

Real-World Examples

  • Explaining loan denial decisions
  • Clarifying hiring algorithm results
  • Providing transparency in automated systems

Related Terms

Explainable AIAlgorithmic TransparencyGDPR ComplianceAI Accountability
Privacy ProtectionLow Risk
Data Minimization
The principle of collecting and processing only the minimum amount of personal data necessary for a specific purpose. A key requirement in privacy regulations.

Real-World Examples

  • Collecting only necessary user information
  • Limiting data retention periods
  • Reducing data collection scope

Related Terms

Privacy by DesignData ProtectionMinimal Data CollectionPurpose Limitation
Compliance & GovernanceLow Risk
AI Governance
The framework of policies, processes, and controls for managing AI systems throughout their lifecycle. Ensures responsible, ethical, and compliant AI deployment.

Real-World Examples

  • Establishing AI ethics boards
  • Creating AI usage policies
  • Implementing AI risk frameworks

Related Terms

AI EthicsRisk ManagementComplianceAI Policy
Compliance & GovernanceMedium Risk
AI Risk Management
The systematic identification, assessment, and mitigation of risks associated with AI systems, including security, privacy, bias, and operational risks.

Real-World Examples

  • Assessing AI security risks
  • Evaluating bias in models
  • Managing AI operational risks

Related Terms

Risk AssessmentThreat ModelingAI GovernanceRisk Mitigation
Compliance & GovernanceLow Risk
Model Card
A document that provides essential information about a machine learning model, including its intended use, performance characteristics, limitations, and ethical considerations.

Real-World Examples

  • Documenting model performance metrics
  • Describing model limitations
  • Providing usage guidelines

Related Terms

Model DocumentationTransparencyModel ReportingAI Documentation
Compliance & GovernanceHigh Risk
Algorithmic Bias
Systematic and unfair discrimination in AI systems that results in different outcomes for different groups. Can arise from biased training data or flawed algorithms.

Real-World Examples

  • Hiring algorithms favoring certain demographics
  • Credit scoring with racial bias
  • Facial recognition performing poorly on certain groups

Related Terms

FairnessDiscriminationBias MitigationEquity
Compliance & GovernanceMedium Risk
Fairness
The principle that AI systems should make decisions without unjust discrimination. Involves ensuring equal treatment and outcomes across different groups.

Real-World Examples

  • Ensuring equal loan approval rates
  • Fair hiring practices
  • Equitable healthcare AI

Related Terms

Algorithmic BiasEquityNon-DiscriminationFair ML
Compliance & GovernanceLow Risk
AI Ethics
The study and application of moral principles to AI development and deployment. Addresses issues like fairness, transparency, accountability, and human welfare.

Real-World Examples

  • Developing ethical AI guidelines
  • Ensuring AI benefits humanity
  • Addressing AI ethical dilemmas

Related Terms

Ethical AIResponsible AIAI GovernanceMoral AI
Security TestingHigh Risk
OWASP Top 10 for LLM
A list of the top 10 most critical security risks for Large Language Model applications, published by OWASP. Helps developers understand and mitigate LLM vulnerabilities.

Real-World Examples

  • Prompt injection vulnerabilities
  • Insecure output handling
  • Training data poisoning

Related Terms

LLM SecurityVulnerability AssessmentSecurity FrameworkOWASP
Compliance & GovernanceLow Risk
NIST AI Risk Management Framework
A voluntary framework developed by NIST to help organizations manage risks associated with AI systems. Provides guidelines for trustworthy and responsible AI.

Real-World Examples

  • Assessing AI risks
  • Implementing AI governance
  • Ensuring trustworthy AI

Related Terms

AI GovernanceRisk ManagementNIST FrameworkAI Standards
Security TestingLow Risk
Penetration Testing
Authorized simulated attacks on AI systems to identify vulnerabilities and security weaknesses. Helps organizations improve their AI security posture.

Real-World Examples

  • Testing LLMs for prompt injection
  • Assessing model robustness
  • Evaluating system security

Related Terms

Red TeamingSecurity AssessmentVulnerability TestingAI Security Testing
Security TestingMedium Risk
Threat Modeling
A structured process for identifying, analyzing, and mitigating security threats to AI systems. Helps prioritize security efforts and allocate resources effectively.

Real-World Examples

  • Identifying attack vectors
  • Assessing threat likelihood
  • Designing secure AI systems

Related Terms

Risk AssessmentThreat AnalysisSecurity DesignAttack Surface
Privacy ProtectionLow Risk
Confidential Computing
A security model that protects data and code while in use by executing computations in a hardware-based trusted execution environment (TEE).

Real-World Examples

  • Protecting AI model weights
  • Secure model inference
  • Privacy-preserving computation

Related Terms

Trusted Execution EnvironmentSecure EnclavesData ProtectionIn-Use Encryption
Privacy ProtectionLow Risk
Secure Multi-Party Computation
A cryptographic protocol that allows multiple parties to jointly compute a function over their inputs while keeping those inputs private.

Real-World Examples

  • Collaborative model training without sharing data
  • Privacy-preserving analytics
  • Secure data aggregation

Related Terms

Privacy-Preserving MLCryptographic ProtocolsDistributed ComputationData Privacy
Security TestingLow Risk
Model Versioning
The practice of tracking and managing different versions of machine learning models. Essential for reproducibility, rollback, and security auditing.

Real-World Examples

  • Tracking model changes
  • Rolling back to previous versions
  • Auditing model updates

Related Terms

MLOpsModel ManagementVersion ControlModel Registry
Security TestingMedium Risk
Data Drift
The phenomenon where the distribution of input data changes over time, causing model performance to degrade. Can indicate attacks or changing conditions.

Real-World Examples

  • Input data distribution changing
  • Model accuracy decreasing over time
  • Detecting anomalous data patterns

Related Terms

Concept DriftModel DegradationPerformance MonitoringDistribution Shift
Security TestingMedium Risk
Concept Drift
The change in the relationship between input features and target variables over time. Requires model retraining or adaptation to maintain performance.

Real-World Examples

  • Customer preferences changing
  • Market conditions evolving
  • Requiring model updates

Related Terms

Data DriftModel AdaptationOnline LearningContinual Learning
Security TestingLow Risk
MLOps
Machine Learning Operations - practices and tools for deploying, monitoring, and maintaining machine learning models in production. Combines ML with DevOps principles.

Real-World Examples

  • Automated model deployment
  • Continuous model monitoring
  • ML pipeline automation

Related Terms

DevOpsModel DeploymentML LifecycleProduction ML
Security TestingLow Risk
Model Registry
A centralized repository for storing, versioning, and managing machine learning models. Enables model discovery, collaboration, and governance.

Real-World Examples

  • Storing trained models
  • Tracking model versions
  • Managing model metadata

Related Terms

Model ManagementModel VersioningMLOpsModel Storage
AI FundamentalsLow Risk
Feature Store
A centralized repository for storing, managing, and serving features (input variables) used in machine learning models. Ensures feature consistency and reusability.

Real-World Examples

  • Centralized feature storage
  • Reusable feature definitions
  • Consistent feature serving

Related Terms

Feature EngineeringData ManagementML InfrastructureFeature Pipeline
Security TestingLow Risk
A/B Testing
A method for comparing two versions of a model or system to determine which performs better. Used for model evaluation and gradual rollouts.

Real-World Examples

  • Comparing model versions
  • Testing new features
  • Evaluating improvements

Related Terms

Model EvaluationExperimentationPerformance ComparisonGradual Rollout
Security TestingLow Risk
Shadow Deployment
A deployment strategy where a new model runs in parallel with the production model, processing the same inputs but not affecting user-facing outputs.

Real-World Examples

  • Testing new models safely
  • Comparing model performance
  • Gradual model rollout

Related Terms

Canary DeploymentModel TestingSafe DeploymentProduction Testing
AI FundamentalsLow Risk
Attention Mechanism
A component in neural networks that allows models to focus on relevant parts of the input when making predictions. Enables models to weigh the importance of different input elements.

Real-World Examples

  • BERT using attention to understand context
  • Vision transformers focusing on relevant image regions
  • LLMs attending to important tokens

Related Terms

Self-AttentionTransformerMulti-Head AttentionAttention Weights
AI FundamentalsLow Risk
Self-Attention
A mechanism where each position in a sequence can attend to all positions in the same sequence. Core component of transformer architectures that enables understanding of relationships between elements.

Real-World Examples

  • GPT models understanding word relationships
  • Vision transformers processing image patches
  • Language models capturing long-range dependencies

Related Terms

Attention MechanismTransformerMulti-Head AttentionScaled Dot-Product Attention
AI FundamentalsLow Risk
Embedding
A dense vector representation of discrete objects (words, images, etc.) in a continuous vector space. Enables neural networks to process categorical data and capture semantic relationships.

Real-World Examples

  • Word2Vec creating word embeddings
  • Image embeddings for similarity search
  • Sentence embeddings for semantic understanding

Related Terms

Word EmbeddingVector SpaceFeature RepresentationLatent Space
AI FundamentalsMedium Risk
Overfitting
A phenomenon where a model learns the training data too well, including noise and irrelevant patterns, resulting in poor performance on new, unseen data.

Real-World Examples

  • Model memorizing training examples
  • Perfect training accuracy but poor test performance
  • Model failing on new data

Related Terms

GeneralizationUnderfittingRegularizationBias-Variance Tradeoff
AI FundamentalsLow Risk
Underfitting
A phenomenon where a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and test data.

Real-World Examples

  • Linear model failing to capture non-linear relationships
  • Model too simple for complex data
  • Poor performance on all datasets

Related Terms

OverfittingModel CapacityBiasGeneralization
AI FundamentalsLow Risk
Regularization
Techniques used to prevent overfitting by adding constraints or penalties to the model training process. Helps improve generalization to unseen data.

Real-World Examples

  • Weight decay in neural networks
  • Dropout layers preventing co-adaptation
  • L1/L2 penalties on model parameters

Related Terms

L1 RegularizationL2 RegularizationDropoutEarly Stopping
AI FundamentalsLow Risk
Hyperparameter
Configuration settings that control the learning process but are not learned from data. Must be set before training begins and significantly impact model performance.

Real-World Examples

  • Learning rate in gradient descent
  • Number of layers in a neural network
  • Batch size for training

Related Terms

ParameterLearning RateBatch SizeHyperparameter Tuning
AI FundamentalsLow Risk
Gradient Descent
An optimization algorithm used to minimize a loss function by iteratively moving in the direction of steepest descent (negative gradient). Fundamental to training neural networks.

Real-World Examples

  • Training neural networks
  • Optimizing model parameters
  • Minimizing loss functions

Related Terms

Stochastic Gradient DescentAdam OptimizerLearning RateBackpropagation
AI FundamentalsLow Risk
Backpropagation
An algorithm for training neural networks that calculates gradients by propagating errors backward through the network. Enables efficient computation of gradients for all parameters.

Real-World Examples

  • Training deep neural networks
  • Computing gradients efficiently
  • Updating network weights

Related Terms

Gradient DescentChain RuleForward PassGradient Computation
AI FundamentalsLow Risk
Loss Function
A function that measures how well a model's predictions match the actual target values. The goal of training is to minimize this function.

Real-World Examples

  • Cross-entropy for classification
  • Mean squared error for regression
  • Custom loss functions for specific tasks

Related Terms

Cost FunctionObjective FunctionCross-Entropy LossMean Squared Error
AI FundamentalsLow Risk
Activation Function
A function applied to the output of a neuron to introduce non-linearity into neural networks. Enables networks to learn complex patterns and relationships.

Real-World Examples

  • ReLU activation in hidden layers
  • Sigmoid for binary classification
  • Softmax for multi-class classification

Related Terms

ReLUSigmoidTanhNon-Linearity
AI FundamentalsLow Risk
Convolutional Neural Network (CNN)
A type of neural network designed for processing grid-like data such as images. Uses convolutional layers to detect local patterns and features.

Real-World Examples

  • Image recognition systems
  • Object detection models
  • Medical image analysis

Related Terms

ConvolutionPoolingFeature MapImage Classification
AI FundamentalsLow Risk
Recurrent Neural Network (RNN)
A type of neural network designed for processing sequential data. Maintains hidden state that captures information from previous time steps.

Real-World Examples

  • Language modeling
  • Time series prediction
  • Speech recognition

Related Terms

LSTMGRUSequence ModelingTemporal Dependencies
AI FundamentalsLow Risk
Long Short-Term Memory (LSTM)
A type of RNN architecture designed to overcome the vanishing gradient problem. Uses gates to control information flow and maintain long-term dependencies.

Real-World Examples

  • Language translation
  • Time series forecasting
  • Sequence-to-sequence models

Related Terms

RNNGRUMemory CellGating Mechanism
AI FundamentalsLow Risk
Gated Recurrent Unit (GRU)
A simplified variant of LSTM that uses fewer gates while maintaining similar performance. More computationally efficient than LSTM.

Real-World Examples

  • Text generation
  • Sequence prediction
  • Language modeling

Related Terms

LSTMRNNGating MechanismSequence Modeling
AI FundamentalsLow Risk
Batch Normalization
A technique that normalizes the inputs to each layer by adjusting and scaling activations. Helps stabilize training and enables faster convergence.

Real-World Examples

  • Stabilizing deep network training
  • Enabling higher learning rates
  • Reducing training time

Related Terms

NormalizationLayer NormalizationTraining StabilityInternal Covariate Shift
AI FundamentalsLow Risk
Dropout
A regularization technique that randomly sets a fraction of neurons to zero during training. Prevents overfitting by reducing co-adaptation of neurons.

Real-World Examples

  • Preventing overfitting in deep networks
  • Improving model generalization
  • Reducing model complexity

Related Terms

RegularizationOverfitting PreventionNeural Network TrainingModel Generalization
AI FundamentalsLow Risk
Ensemble Learning
A technique that combines predictions from multiple models to improve overall performance. Often achieves better results than individual models.

Real-World Examples

  • Random forests combining decision trees
  • Voting classifiers
  • Stacking multiple models

Related Terms

BaggingBoostingRandom ForestModel Combination
AI FundamentalsLow Risk
Cross-Validation
A technique for assessing model performance by splitting data into multiple folds, training on some folds and testing on others. Provides more reliable performance estimates.

Real-World Examples

  • K-fold cross-validation
  • Stratified cross-validation
  • Time series cross-validation

Related Terms

K-Fold Cross-ValidationModel EvaluationHoldout SetPerformance Estimation
AI FundamentalsLow Risk
Precision
A metric that measures the proportion of positive predictions that are actually correct. Calculated as true positives divided by (true positives + false positives).

Real-World Examples

  • Measuring spam detection accuracy
  • Evaluating fraud detection systems
  • Assessing model quality

Related Terms

RecallF1 ScoreAccuracyClassification Metrics
AI FundamentalsLow Risk
Recall
A metric that measures the proportion of actual positives that are correctly identified. Calculated as true positives divided by (true positives + false negatives).

Real-World Examples

  • Measuring disease detection completeness
  • Evaluating security threat detection
  • Assessing model coverage

Related Terms

PrecisionF1 ScoreSensitivityTrue Positive Rate
AI FundamentalsLow Risk
F1 Score
A metric that combines precision and recall into a single score. Calculated as the harmonic mean of precision and recall, providing a balanced performance measure.

Real-World Examples

  • Balanced model evaluation
  • Comparing classification models
  • Performance benchmarking

Related Terms

PrecisionRecallHarmonic MeanClassification Metrics
AI FundamentalsLow Risk
Confusion Matrix
A table used to visualize the performance of a classification model. Shows true positives, true negatives, false positives, and false negatives.

Real-World Examples

  • Evaluating classifier performance
  • Understanding model errors
  • Performance visualization

Related Terms

Classification MetricsPrecisionRecallAccuracy
AI FundamentalsLow Risk
ROC Curve
Receiver Operating Characteristic curve - a graph showing the performance of a classification model at different classification thresholds. Plots true positive rate against false positive rate.

Real-World Examples

  • Evaluating binary classifiers
  • Comparing model performance
  • Threshold selection

Related Terms

AUCClassification ThresholdTrue Positive RateFalse Positive Rate
AI FundamentalsLow Risk
AUC (Area Under Curve)
Area Under the ROC Curve - a metric that measures the overall quality of a classification model. Values range from 0 to 1, with 1 being perfect classification.

Real-World Examples

  • Measuring classifier quality
  • Comparing model performance
  • Model selection

Related Terms

ROC CurveClassification PerformanceModel EvaluationBinary Classification
AI FundamentalsLow Risk
Feature Engineering
The process of selecting, modifying, or creating features from raw data to improve model performance. Critical for machine learning success.

Real-World Examples

  • Creating interaction features
  • Encoding categorical variables
  • Feature scaling

Related Terms

Feature SelectionFeature ExtractionData PreprocessingDimensionality Reduction
AI FundamentalsLow Risk
Dimensionality Reduction
Techniques for reducing the number of features in a dataset while preserving important information. Helps with visualization, computation, and overfitting prevention.

Real-World Examples

  • PCA for visualization
  • Reducing feature space
  • Removing redundant features

Related Terms

PCAt-SNEFeature SelectionData Compression
AI FundamentalsLow Risk
Principal Component Analysis (PCA)
A dimensionality reduction technique that transforms data into a lower-dimensional space by finding principal components that capture maximum variance.

Real-World Examples

  • Reducing image dimensions
  • Visualizing high-dimensional data
  • Feature compression

Related Terms

Dimensionality ReductionEigenvaluesVarianceLinear Transformation
AI FundamentalsLow Risk
Supervised Learning
A machine learning paradigm where models learn from labeled training data. The model learns to map inputs to known outputs.

Real-World Examples

  • Image classification
  • Spam detection
  • Price prediction

Related Terms

Unsupervised LearningClassificationRegressionLabeled Data
AI FundamentalsLow Risk
Unsupervised Learning
A machine learning paradigm where models learn patterns from unlabeled data. No target labels are provided during training.

Real-World Examples

  • Customer segmentation
  • Anomaly detection
  • Topic modeling

Related Terms

Supervised LearningClusteringDimensionality ReductionAnomaly Detection
AI FundamentalsLow Risk
Reinforcement Learning
A machine learning paradigm where an agent learns to make decisions by interacting with an environment and receiving rewards or penalties.

Real-World Examples

  • Game playing AI
  • Robotics control
  • Autonomous systems

Related Terms

AgentRewardPolicyQ-Learning
AI FundamentalsLow Risk
Clustering
An unsupervised learning technique that groups similar data points together. Identifies patterns and structures in data without labeled examples.

Real-World Examples

  • Customer segmentation
  • Image segmentation
  • Document clustering

Related Terms

K-MeansHierarchical ClusteringUnsupervised LearningData Segmentation
AI FundamentalsLow Risk
K-Means Clustering
A clustering algorithm that partitions data into k clusters by minimizing the sum of squared distances between data points and cluster centroids.

Real-World Examples

  • Customer segmentation
  • Image compression
  • Market research

Related Terms

ClusteringCentroidUnsupervised LearningPartitioning
AI FundamentalsLow Risk
Decision Tree
A tree-like model that makes decisions by following a series of rules based on feature values. Easy to interpret and visualize.

Real-World Examples

  • Medical diagnosis
  • Credit scoring
  • Feature importance analysis

Related Terms

Random ForestGradient BoostingClassificationRegression
AI FundamentalsLow Risk
Random Forest
An ensemble learning method that combines multiple decision trees. Each tree is trained on a random subset of data and features, with final predictions made by voting.

Real-World Examples

  • Feature selection
  • Classification tasks
  • Regression problems

Related Terms

Decision TreeEnsemble LearningBaggingFeature Importance
AI FundamentalsLow Risk
Gradient Boosting
An ensemble learning technique that builds models sequentially, with each new model correcting errors made by previous models. Often achieves high performance.

Real-World Examples

  • Winning Kaggle competitions
  • High-performance classification
  • Regression tasks

Related Terms

BoostingXGBoostLightGBMEnsemble Learning
AI FundamentalsLow Risk
Support Vector Machine (SVM)
A classification algorithm that finds the optimal hyperplane to separate classes. Effective for high-dimensional data and non-linear problems with kernel tricks.

Real-World Examples

  • Text classification
  • Image recognition
  • Bioinformatics

Related Terms

Kernel TrickHyperplaneClassificationMargin
AI FundamentalsLow Risk
Naive Bayes
A probabilistic classification algorithm based on Bayes' theorem with an assumption of feature independence. Fast and effective for text classification.

Real-World Examples

  • Spam email detection
  • Document classification
  • Sentiment analysis

Related Terms

Bayes' TheoremProbabilistic ModelText ClassificationSpam Detection
AI FundamentalsLow Risk
K-Nearest Neighbors (KNN)
A simple, instance-based learning algorithm that classifies data points based on the majority class of their k nearest neighbors in the feature space.

Real-World Examples

  • Recommendation systems
  • Pattern recognition
  • Image classification

Related Terms

Instance-Based LearningLazy LearningDistance MetricClassification
AI FundamentalsLow Risk
Linear Regression
A regression algorithm that models the relationship between a dependent variable and one or more independent variables using a linear equation.

Real-World Examples

  • Price prediction
  • Sales forecasting
  • Trend analysis

Related Terms

RegressionLeast SquaresLinear ModelPrediction
AI FundamentalsLow Risk
Logistic Regression
A classification algorithm that uses a logistic function to model the probability of a binary outcome. Despite its name, it's used for classification, not regression.

Real-World Examples

  • Disease diagnosis
  • Credit approval
  • Spam detection

Related Terms

ClassificationSigmoid FunctionBinary ClassificationProbability
Quick Reference Guide
Essential AI security concepts organized by risk level

Critical Risk Terms

High Risk Terms

Defense Terms