Day 1 Framework

AI Security Shared Responsibility Model

A comprehensive framework for understanding who's responsible for what when deploying AI systems. Maps security responsibilities across 8 deployment models and 16 security domains.

• 8 deployment models
• 16 security domains
• 128 responsibility mappings (8 models × 16 domains)
• Latest version: 2025

Why This Framework?

Your Day 1 framework for AI security - getting everyone on the same page before diving into technical specifications

Vendor-Agnostic

Works across all cloud providers and deployment scenarios. Not tied to any specific vendor or platform.

Business Language First

Speaks business language before technical jargon. Perfect for getting organizational alignment.

Deployment-Model Focused

Covers the full AI landscape from SaaS to on-premises, agentic systems to coding assistants.

Framework Comparison

This Framework

  • Day 1 organizational alignment
  • Vendor-agnostic approach
  • Business language first
  • Deployment-model focused
  • Covers emerging AI patterns

Other Frameworks

  • NIST AI RMF: Excellent for mature AI programs
  • CSA Models: Great for cloud-specific implementations
  • Microsoft's Approach: Comprehensive for Azure users

8 Deployment Models

From SaaS AI to on-premises, agentic systems to coding assistants - comprehensive coverage of all AI deployment scenarios

SaaS AI
Fully managed AI services like ChatGPT, Claude, or Gemini

Examples:

ChatGPT, Claude, Google Gemini, Microsoft Copilot

Customer focuses on usage policies and data governance

PaaS AI
Platform services for building AI applications

Examples:

Azure OpenAI, AWS Bedrock, Google Vertex AI

Shared responsibility for application and platform security

IaaS AI
Infrastructure for hosting custom AI models

Examples:

AWS EC2 with ML, Azure VMs, GCP Compute Engine

Customer manages most security aspects except infrastructure

On-Prem AI
Self-hosted AI systems in your data center

Examples:

Llama on-premises, custom models, private deployments

Customer responsible for all security layers

Embedded AI
AI running on edge devices and IoT

Examples:

Mobile AI, IoT devices, edge computing

Device and application security critical

Agentic AI
Autonomous AI agents with decision-making capabilities

Examples:

AutoGPT, BabyAGI, custom agents

Agent governance and oversight essential

AI Coding
AI-powered development tools and assistants

Examples:

GitHub Copilot, Cursor, Codeium

Code review and security validation required

MCP
Model Context Protocol for AI integrations

Examples:

Claude Desktop, MCP servers, custom integrations

Context security and protocol implementation

Responsibility Matrix

Complete mapping of security responsibilities across all deployment models and security domains

P = Provider responsibility
C = Customer responsibility
S = Shared responsibility

Comprehensive responsibility matrix mapping Provider (P), Customer (C), and Shared (S) responsibilities across all AI deployment models
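In code, the matrix is just a lookup from a (deployment model, security domain) pair to P, C, or S. A minimal Python sketch follows; the entries shown are an illustrative subset with example assignments, not the authoritative 128-cell matrix:

```python
# Illustrative subset of the responsibility matrix:
# (deployment model, security domain) -> "P", "C", or "S".
# The assignments below are examples, not the official mappings.
MATRIX = {
    ("SaaS AI", "Infrastructure Security"): "P",
    ("SaaS AI", "Data Privacy"): "C",
    ("PaaS AI", "Data Security"): "S",
    ("On-Prem AI", "Infrastructure Security"): "C",
}

def who_is_responsible(model: str, domain: str) -> str:
    """Return "P" (provider), "C" (customer), or "S" (shared)."""
    return MATRIX[(model, domain)]

print(who_is_responsible("PaaS AI", "Data Security"))  # S
```

Encoding the matrix as data rather than prose lets teams diff it, review it, and wire it into compliance tooling.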

16 Security Domains

Traditional security areas plus emerging AI-specific domains for comprehensive coverage

• Application Security: security of AI applications and interfaces
• AI Ethics and Safety: ethical AI use and safety measures
• Model Security: protection of AI models and algorithms
• User Access Control: identity and access management
• Data Privacy: protection of sensitive data
• Data Security: data encryption and protection
• Monitoring & Logging: system monitoring and audit trails
• Compliance & Governance: regulatory compliance and governance
• Supply Chain Security: third-party and dependency security
• Network Security: network protection and segmentation
• Infrastructure Security: physical and virtual infrastructure
• Incident Response: security incident handling

Key Principles

Fundamental principles for understanding and applying the shared responsibility model

No Deployment Model is Responsibility-Free

Even fully managed SaaS AI services require customer security efforts. You can't outsource all responsibility.

Example: Using ChatGPT? You're still responsible for data classification, usage policies, and user training.

Responsibilities Increase with Control

More control over your AI deployment means more security obligations: greater flexibility comes with greater responsibility.

Example: On-premises AI gives you full control but requires managing all security layers.

Shared Means Coordination

When responsibility is shared, both parties must fulfill their parts. Communication and coordination are critical.

Example: Data security in PaaS requires provider encryption AND customer key management.

New Domains Matter Now

Agent governance and context pollution aren't future problems. They're current challenges requiring immediate attention.

Example: Agentic AI systems need governance frameworks today, not tomorrow.

Getting Started

Four simple steps to implement the AI Security Shared Responsibility Model in your organization

1. Identify Deployment Models

Use the deployment models guide to categorize your current and planned AI systems

2. Check Responsibility Matrix

Review the matrix to understand your obligations for each deployment model

3. Review Security Domains

Understand coverage areas and ensure all domains are addressed in your security program

4. Plan Improvements

Identify gaps and create action plans to address security responsibilities
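The gap-analysis step can be sketched in a few lines of Python: given a matrix of assignments, list every domain where the customer must act, either alone ("C") or jointly with the provider ("S"). The matrix entries here are illustrative examples, not the official mappings:

```python
# Gap-analysis sketch (step 4): for a given deployment model, list the
# security domains where the customer holds sole ("C") or shared ("S")
# responsibility. Entries are illustrative, not the authoritative matrix.
MATRIX = {
    ("SaaS AI", "Data Privacy"): "C",
    ("SaaS AI", "Infrastructure Security"): "P",
    ("PaaS AI", "Data Security"): "S",
    ("PaaS AI", "User Access Control"): "C",
}

def customer_obligations(model: str) -> list[str]:
    """Domains this customer must cover for the given deployment model."""
    return sorted(domain for (m, domain), owner in MATRIX.items()
                  if m == model and owner in ("C", "S"))

print(customer_obligations("PaaS AI"))  # ['Data Security', 'User Access Control']
```

Running this per system gives a checklist of domains to compare against your existing security program; anything on the list without a control mapped to it is a gap.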

Related Resources

Explore complementary frameworks and resources for comprehensive AI security

• NIST AI RMF: comprehensive AI risk management framework
• AI Governance: frameworks for AI governance and ethics
• Cloud Shared Responsibility: cloud provider-specific responsibility models

Start Implementing Today

Get the framework, contribute improvements, and join the community building better AI security practices