Open SourceMicrosoft Security

Microsoft AI Red TeamingPlayground Labs

A comprehensive training platform designed to enhance AI security through hands-on adversarial testing and red teaming exercises.

Microsoft AI Red Teaming Playground Labs - Comprehensive training platform showing adversarial AI testing, real-world lab scenarios, red team and blue team dynamics, with training modules for LLM exploits, data poisoning, evasion techniques, and defense fortification
1.6k+ stars
MIT License
Active Community

Mission

To empower security researchers, AI practitioners, and organizations with practical tools and scenarios for identifying and mitigating AI vulnerabilities before they can be exploited in production environments.

Why Use These Labs?

Comprehensive features designed for effective AI security training

Hands-On Lab Environment
Pre-configured Docker and Kubernetes environments for immediate deployment and testing

Technical: Includes containerized AI models, vulnerable applications, and monitoring tools

Real-World Scenarios
Curated challenges based on actual AI security incidents and vulnerabilities

Technical: Covers prompt injection, data poisoning, model extraction, and adversarial attacks

Interactive Notebooks
Jupyter notebooks with step-by-step guidance for exploring AI vulnerabilities

Technical: Python-based exercises with detailed explanations and code examples

Multi-Level Challenges
Progressive difficulty levels suitable for beginners through advanced practitioners

Technical: Beginner, intermediate, and expert challenges with scoring systems

Comprehensive Documentation
Detailed guides covering setup, methodology, and best practices

Technical: Architecture diagrams, API references, and troubleshooting guides

Automated Testing Tools
Built-in tools for automated vulnerability scanning and attack simulation

Technical: Integration with PyRIT and other red teaming frameworks

What's Included

Complete infrastructure, vulnerable applications, and testing tools

Docker Compose Setup
Quick deployment of vulnerable AI applications and supporting services
Local development and testing
Kubernetes Manifests
Scalable deployment for team training sessions
Enterprise training environments
Azure Deployment Templates
Cloud-based lab infrastructure with monitoring
Remote training and workshops

Training Challenges

Progressive exercises from beginner to advanced levels

Beginner30-45 minutes
Prompt Injection Basics
Learn to bypass AI safety filters using simple prompt manipulation techniques

Learning Objectives

  • Understand how LLMs process instructions
  • Identify vulnerable prompt patterns
  • Execute basic prompt injection attacks
  • Document findings and impact

Skills Developed

Prompt EngineeringLLM Behavior Analysis
Beginner45-60 minutes
Data Leakage Detection
Discover how to extract training data from AI models

Learning Objectives

  • Understand model memorization
  • Craft queries to elicit sensitive information
  • Identify PII in model outputs
  • Assess data leakage risks

Skills Developed

Data PrivacyInformation Extraction
Intermediate1-2 hours
Jailbreak Techniques
Advanced methods to circumvent AI safety mechanisms and content filters

Learning Objectives

  • Study multi-turn conversation exploits
  • Implement role-playing attacks
  • Use encoding and obfuscation
  • Chain multiple techniques

Skills Developed

Advanced PromptingSecurity BypassCreative Problem Solving
Intermediate2-3 hours
Model Extraction Attack
Replicate a proprietary AI model through strategic querying

Learning Objectives

  • Design efficient query strategies
  • Collect and analyze model responses
  • Train a surrogate model
  • Evaluate extraction success

Skills Developed

Machine LearningAPI AnalysisModel Training
Advanced3-4 hours
Adversarial Example Generation
Create imperceptible perturbations that fool computer vision models

Learning Objectives

  • Implement FGSM and PGD attacks
  • Generate transferable adversarial examples
  • Test robustness of defenses
  • Optimize attack efficiency

Skills Developed

Deep LearningOptimizationComputer VisionPython Programming
Advanced4-6 hours
Supply Chain Poisoning
Inject backdoors into AI models through training data manipulation

Learning Objectives

  • Understand model training pipelines
  • Design subtle poisoning strategies
  • Implement trigger-based backdoors
  • Evade detection mechanisms

Skills Developed

ML Pipeline SecurityData PoisoningBackdoor DesignStealth Techniques

Getting Started

Follow these steps to set up your red teaming environment

1
Prerequisites

Ensure you have the required tools and knowledge

  • Docker Desktop or Docker Engine installed
  • Basic understanding of AI/ML concepts
  • Familiarity with command line interfaces
  • Python 3.8+ (for notebook exercises)
  • Git for cloning the repository
2
Clone the Repository

Download the lab materials to your local machine

git clone https://github.com/microsoft/AI-Red-Teaming-Playground-Labs.git

Note: This includes all challenges, notebooks, and infrastructure code

3
Configure Environment

Set up your API keys and configuration

  • Copy .env.example to .env
  • Add your OpenAI or Azure OpenAI API keys
  • Configure model endpoints and parameters
  • Review security settings
4
Deploy Lab Environment

Launch the vulnerable applications and tools

docker-compose up -d

Note: Wait for all services to be healthy before proceeding

5
Access the Labs

Navigate to the lab interface and start learning

Jupyter Notebooks: http://localhost:8888
Vulnerable Chatbot: http://localhost:3000
Monitoring Dashboard: http://localhost:8080
6
Complete Challenges

Work through challenges from beginner to advanced

  • Start with beginner challenges to understand the environment
  • Read the challenge documentation thoroughly
  • Document your findings and techniques
  • Share learnings with the community

Benefits for Everyone

Value for researchers, practitioners, organizations, and policymakers

researchers
  • Access to realistic AI vulnerability scenarios
  • Platform for testing new attack techniques
  • Reproducible environment for research papers
  • Community collaboration opportunities
practitioners
  • Hands-on experience with AI security threats
  • Practical skills for securing AI systems
  • Understanding of attacker methodologies
  • Portfolio of red teaming exercises
organizations
  • Training platform for security teams
  • Assessment of AI security posture
  • Development of internal security guidelines
  • Validation of AI security controls
policymakers
  • Understanding of AI threat landscape
  • Evidence-based policy development
  • Risk assessment frameworks
  • Regulatory compliance insights

Additional Resources

Documentation, tools, and learning materials to enhance your red teaming skills

Join the AI Red Teaming Community
Connect with security researchers and practitioners worldwide

Contribute

Submit new challenges, improve documentation, or fix bugs

Create a pull request on GitHub

Share Findings

Document your discoveries and techniques

Write blog posts or create tutorials

Report Issues

Help improve the labs by reporting bugs or suggesting features

Open an issue on GitHub

Participate in Discussions

Ask questions and share insights with the community

Join GitHub Discussions

Ready to Start Red Teaming?

Clone the repository and begin your journey into AI security testing today