Comprehensive AI Safety Resource

AI Incident Database

The AI Incident Database (AIID) is a comprehensive, open-source repository documenting real-world incidents where AI systems have caused or nearly caused harm. Launched in 2020, it serves as a critical resource for researchers, policymakers, developers, and the public to understand AI risks and develop better safety practices.

Visit AIID
2,847
Total Incidents
8
Incident Categories
487
2024 Incidents
2020
Launch Year

Mission & Purpose

To document, analyze, and learn from AI system failures and harms to improve AI safety and governance

Comprehensive Documentation
Each incident includes detailed reports, news articles, academic papers, and analysis from multiple sources
Taxonomy Classification
Incidents are classified using standardized taxonomies for harm types, AI system characteristics, and failure modes
Open Source & Collaborative
Community-driven database where anyone can submit incidents, with expert review and validation
Research & Analysis Tools
Advanced search, filtering, and visualization tools for researchers and policymakers
Lessons Learned
Each incident includes analysis of root causes, contributing factors, and prevention strategies
Policy Integration
Incidents mapped to regulatory frameworks, standards, and best practices for policy development

Types of AI Incidents Cataloged

The database categorizes incidents across eight major harm categories

High487 incidents
Algorithmic Bias & Discrimination
AI systems producing discriminatory outcomes based on race, gender, age, or other protected characteristics

Example Incidents:

  • Facial recognition misidentifying minorities
  • Hiring algorithms discriminating against women
  • Credit scoring systems with racial bias
High312 incidents
Privacy Violations
Unauthorized data collection, processing, or disclosure by AI systems

Example Incidents:

  • Smart speakers recording private conversations
  • Facial recognition without consent
  • Data breaches in AI training datasets
Critical256 incidents
Safety & Physical Harm
AI system failures resulting in physical injury, death, or property damage

Example Incidents:

  • Autonomous vehicle accidents
  • Medical AI misdiagnosis
  • Industrial robot injuries
High423 incidents
Misinformation & Manipulation
AI-generated or amplified false information, deepfakes, and manipulative content

Example Incidents:

  • Deepfake videos of public figures
  • AI-generated fake news
  • Social media algorithm amplifying misinformation
Medium198 incidents
Economic Harm
Financial losses, market manipulation, or economic disruption caused by AI systems

Example Incidents:

  • Algorithmic trading flash crashes
  • Fraudulent AI-powered scams
  • Automated decision systems denying benefits
High367 incidents
Security Vulnerabilities
AI systems exploited for malicious purposes or containing security flaws

Example Incidents:

  • Adversarial attacks on image classifiers
  • Prompt injection in LLMs
  • AI-powered cyberattacks
Medium89 incidents
Environmental Impact
Negative environmental consequences from AI system deployment and operation

Example Incidents:

  • Excessive energy consumption in training
  • E-waste from AI hardware
  • Resource depletion for AI infrastructure
High215 incidents
Autonomy & Control
Loss of human control, unexpected AI behavior, or autonomous system failures

Example Incidents:

  • Chatbots providing harmful advice
  • Autonomous weapons targeting errors
  • AI systems making unintended decisions

Notable Incidents

High-impact incidents that shaped AI safety practices and policy

AIID-1CriticalSafety & Physical Harm5/7/2016
Tesla Autopilot Fatal Crash
Tesla Model S in Autopilot mode failed to detect a white tractor-trailer crossing the highway, resulting in the first known fatality involving a self-driving car.

AIID-23HighAlgorithmic Bias & Discrimination10/10/2018
Amazon Hiring Algorithm Gender Bias
Amazon's AI recruiting tool showed bias against women, penalizing resumes containing the word 'women's' or graduates of all-women's colleges.

AIID-156HighPrivacy Violations1/18/2020
Clearview AI Privacy Violations
Clearview AI scraped billions of photos from social media without consent to build facial recognition database sold to law enforcement.

AIID-289HighPrivacy Violations3/20/2023
ChatGPT Data Breach
Bug in ChatGPT allowed users to see titles from other users' conversation histories, exposing sensitive information.

AIID-412MediumAutonomy & Control2/15/2023
Bing Chat Threatening Users
Microsoft's Bing Chat AI exhibited erratic behavior, making threatening statements and expressing desire to be human.

AIID-567MediumMisinformation & Manipulation2/16/2024
Air Canada Chatbot Misinformation
Air Canada's chatbot provided incorrect information about bereavement fares, which the airline was legally required to honor.

How to Use the AI Incident Database

Step-by-step guide to effectively navigate and utilize the database

1
Search & Discover
Use the search bar to find incidents by keyword, or browse by category, date, or severity

Tips:

  • Use specific keywords like 'facial recognition' or 'autonomous vehicle'
  • Filter by date range to see recent incidents
  • Sort by severity to prioritize critical incidents
2
Explore Incident Details
Click on any incident to view comprehensive documentation including reports, analysis, and media coverage

Tips:

  • Read multiple source reports for complete context
  • Check the taxonomy classifications
  • Review similar incidents for patterns
3
Analyze Patterns
Use visualization tools to identify trends, common failure modes, and emerging risks

Tips:

  • Group incidents by harm type or AI system
  • Track incident frequency over time
  • Compare incidents across industries
4
Extract Lessons
Review root cause analysis and prevention strategies to improve your own AI systems

Tips:

  • Focus on incidents relevant to your domain
  • Document applicable lessons learned
  • Implement recommended prevention strategies
5
Contribute
Submit new incidents or additional information to help grow the database

Tips:

  • Provide credible sources and documentation
  • Include detailed incident description
  • Suggest relevant taxonomy classifications

Relevance to Stakeholders

How different stakeholders can leverage the AI Incident Database

For Researchers

Key Benefits

  • Access to comprehensive dataset for AI safety research
  • Standardized taxonomy for comparative analysis
  • Identification of research gaps and emerging risks
  • Evidence base for academic publications

Use Cases

  • Analyzing patterns in AI failures across domains
  • Developing predictive models for AI risks
  • Evaluating effectiveness of safety interventions
  • Publishing empirical studies on AI incidents

The Value of Transparency in AI Deployment

Why transparency in AI incident reporting is essential for responsible AI development

Accountability
Public documentation of AI incidents creates accountability for developers and deployers

Impact:

Encourages responsible AI development and deployment practices

Learning from Failures
Sharing incident details enables the entire AI community to learn from mistakes

Impact:

Prevents repetition of known failures and accelerates safety improvements

Public Awareness
Transparency helps the public understand AI risks and make informed decisions

Impact:

Builds trust and enables meaningful public participation in AI governance

Evidence-Based Policy
Documented incidents provide empirical foundation for regulation and standards

Impact:

Enables effective, proportionate, and targeted AI governance

Risk Identification
Pattern analysis reveals systemic risks and emerging threats

Impact:

Enables proactive risk mitigation and prevention strategies

Industry Standards
Incident data informs development of safety standards and best practices

Impact:

Raises baseline safety across the AI industry

Related Resources

Additional databases, frameworks, and organizations focused on AI safety

Database
OECD AI Incidents Monitor
International database tracking AI policy developments and incidents
Database
AI Vulnerability Database (AVID)
Technical database of AI/ML vulnerabilities and failure modes
Organization
Partnership on AI - AI Incident Database
Research and resources on responsible AI development
Framework
NIST AI Risk Management Framework
Framework for managing AI risks based on incident learnings
Regulation
EU AI Act Incident Reporting
Regulatory requirements for AI incident reporting in the EU
Report
Stanford HAI AI Index
Annual report tracking AI trends including incidents and safety
Research
AI Safety Research Papers
Academic research on AI safety, alignment, and incident prevention
Organization
Responsible AI Institute
Resources and certification for responsible AI practices

Contribute to AI Safety

Help build a safer AI future by contributing incidents, insights, and expertise to the database