AI Incident Database
The AI Incident Database (AIID) is a comprehensive, open-source repository documenting real-world incidents where AI systems have caused or nearly caused harm. Launched in 2020, it serves as a critical resource for researchers, policymakers, developers, and the public to understand AI risks and develop better safety practices.
Mission & Purpose
To document, analyze, and learn from AI system failures and harms to improve AI safety and governance
Types of AI Incidents Cataloged
The database categorizes incidents across eight major harm categories:

Bias and Discrimination
Example Incidents:
- Facial recognition systems misidentifying people from minority groups
- Hiring algorithms discriminating against women
- Credit scoring systems with racial bias

Privacy Violations
Example Incidents:
- Smart speakers recording private conversations
- Facial recognition deployed without consent
- Data breaches in AI training datasets

Physical Safety
Example Incidents:
- Autonomous vehicle accidents
- Medical AI misdiagnosis
- Industrial robot injuries

Misinformation and Manipulation
Example Incidents:
- Deepfake videos of public figures
- AI-generated fake news
- Social media algorithms amplifying misinformation

Financial and Economic Harm
Example Incidents:
- Algorithmic trading flash crashes
- Fraudulent AI-powered scams
- Automated decision systems denying benefits

Security Vulnerabilities
Example Incidents:
- Adversarial attacks on image classifiers
- Prompt injection in LLMs
- AI-powered cyberattacks

Environmental Impact
Example Incidents:
- Excessive energy consumption in model training
- E-waste from AI hardware
- Resource depletion for AI infrastructure

Unintended Behavior
Example Incidents:
- Chatbots providing harmful advice
- Autonomous weapons targeting errors
- AI systems making unintended decisions
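To make the taxonomy concrete, the sketch below models an incident record tagged with one or more harm categories and groups a small synthetic collection by category. The field names, category tags (which loosely match the example groups above), and records are illustrative assumptions, not the database's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Illustrative harm-category tags, loosely matching the groups above.
HARM_CATEGORIES = {
    "bias", "privacy", "physical_safety", "misinformation",
    "financial", "security", "environmental", "unintended_behavior",
}

@dataclass
class Incident:
    """A minimal incident record; fields are hypothetical, not AIID's schema."""
    incident_id: int
    title: str
    year: int
    harm_categories: list[str] = field(default_factory=list)

def group_by_category(incidents):
    """Index incident IDs under each harm category they are tagged with."""
    index = defaultdict(list)
    for inc in incidents:
        for cat in inc.harm_categories:
            if cat not in HARM_CATEGORIES:
                raise ValueError(f"unknown category: {cat}")
            index[cat].append(inc.incident_id)
    return dict(index)

# Synthetic examples loosely based on the incident types listed above.
incidents = [
    Incident(1, "Hiring algorithm screens out women", 2018, ["bias"]),
    Incident(2, "Smart speaker records private conversation", 2019, ["privacy"]),
    Incident(3, "Deepfake video of a public figure", 2021, ["misinformation", "security"]),
]

index = group_by_category(incidents)
```

Allowing multiple tags per incident matters here: real incidents (such as a deepfake used in a scam) often fall into several harm categories at once.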
Notable Incidents
High-impact incidents that shaped AI safety practices and policy
How to Use the AI Incident Database
Step-by-step guide to effectively navigate and utilize the database
Step 1: Search the Database
Tips:
- Use specific keywords like 'facial recognition' or 'autonomous vehicle'
- Filter by date range to see recent incidents
- Sort by severity to prioritize critical incidents

Step 2: Review Incident Details
Tips:
- Read multiple source reports for complete context
- Check the taxonomy classifications
- Review similar incidents for patterns

Step 3: Analyze Trends
Tips:
- Group incidents by harm type or AI system
- Track incident frequency over time
- Compare incidents across industries

Step 4: Apply Lessons Learned
Tips:
- Focus on incidents relevant to your domain
- Document applicable lessons learned
- Implement recommended prevention strategies

Step 5: Submit New Incidents
Tips:
- Provide credible sources and documentation
- Include a detailed incident description
- Suggest relevant taxonomy classifications
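The search-and-filter tips above can also be applied programmatically over an export of incident records. The sketch below assumes records loaded as a list of dicts with hypothetical `title`, `date`, and `severity` fields (not AIID's actual export format) and filters them by keyword and date range, sorting the hits by severity.

```python
from datetime import date

def search_incidents(records, keyword, start=None, end=None):
    """Case-insensitive keyword filter with an optional date-range filter."""
    keyword = keyword.lower()
    hits = []
    for rec in records:
        if keyword not in rec["title"].lower():
            continue
        if start and rec["date"] < start:
            continue
        if end and rec["date"] > end:
            continue
        hits.append(rec)
    # Highest-severity incidents first, as in the "sort by severity" tip.
    return sorted(hits, key=lambda r: r["severity"], reverse=True)

# Synthetic records; field names and values are assumptions for illustration.
records = [
    {"title": "Facial recognition misidentifies suspect", "date": date(2020, 6, 1), "severity": 3},
    {"title": "Autonomous vehicle collision", "date": date(2018, 3, 19), "severity": 5},
    {"title": "Facial recognition used without consent", "date": date(2022, 1, 10), "severity": 2},
]

recent = search_incidents(records, "facial recognition", start=date(2021, 1, 1))
```

Here `search_incidents(records, "facial recognition", start=date(2021, 1, 1))` returns only the 2022 incident, while omitting the date bound returns both facial-recognition records ordered by severity.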
Relevance to Stakeholders
How different stakeholders can leverage the AI Incident Database
Researchers
Key Benefits
- Access to comprehensive dataset for AI safety research
- Standardized taxonomy for comparative analysis
- Identification of research gaps and emerging risks
- Evidence base for academic publications
Use Cases
- Analyzing patterns in AI failures across domains
- Developing predictive models for AI risks
- Evaluating effectiveness of safety interventions
- Publishing empirical studies on AI incidents
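As one illustration of the pattern-analysis use case, the sketch below counts incidents per year from a synthetic sample and computes year-over-year changes to surface frequency trends; a real analysis would run over the full database export rather than this made-up data.

```python
from collections import Counter

def incidents_per_year(years):
    """Count how many incidents were reported in each year."""
    return Counter(years)

def year_over_year_change(counts):
    """Change in incident count between consecutive observed years."""
    ordered = sorted(counts)
    return {
        ordered[i]: counts[ordered[i]] - counts[ordered[i - 1]]
        for i in range(1, len(ordered))
    }

# Synthetic report years standing in for a real export.
years = [2018, 2019, 2019, 2020, 2020, 2020, 2021, 2021, 2021, 2021]
counts = incidents_per_year(years)
trend = year_over_year_change(counts)
```

The same per-year counts could equally be broken down by harm type or industry to support the cross-domain comparisons mentioned above.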
The Value of Transparency in AI Deployment
Why transparency in AI incident reporting is essential for responsible AI development
Transparent incident reporting has several impacts:
- Encourages responsible AI development and deployment practices
- Prevents repetition of known failures and accelerates safety improvements
- Builds trust and enables meaningful public participation in AI governance
- Enables effective, proportionate, and targeted AI governance
- Supports proactive risk mitigation and prevention strategies
- Raises baseline safety across the AI industry
Related Resources
Additional databases, frameworks, and organizations focused on AI safety
Contribute to AI Safety
Help build a safer AI future by contributing incidents, insights, and expertise to the database