Shadow AI Policy Template: How to Build and Enforce It
Free Shadow AI policy template with 12 essential sections. Mapped to NIST AI RMF, ISO 42001, and EU AI Act compliance frameworks. Learn why policies without enforcement fail.
Why You Need a Shadow AI Policy
The adoption of AI tools across organizations has outpaced the development of governance frameworks to manage them. According to IBM's Cost of a Data Breach Report (2025), only 37% of organizations have policies specifically designed to detect and manage Shadow AI. The remaining 63% are operating without a formal plan to address the fastest-growing category of data risk in the enterprise.
The financial consequences are severe. Shadow AI breaches cost organizations $4.63 million on average (IBM, 2025), $670,000 more than breaches at organizations with low Shadow AI exposure. And even when policies exist, they often fail: 78% of employees use unapproved AI tools even when their organization has an AI policy in place (WalkMe/SAP, 2025).
If you are not familiar with the concept of Shadow AI, start with our comprehensive guide: What is Shadow AI?
A Shadow AI policy is not the same as a generic "AI Acceptable Use Policy." While an acceptable use policy covers how employees should interact with approved AI tools, a Shadow AI policy specifically addresses the detection, prevention, and response to unauthorized and unmanaged AI usage. The distinction matters because the risk profile is entirely different. Approved tools have data processing agreements, security reviews, and audit trails. Shadow AI has none of these protections.
A policy without enforcement tools is paper. You need both a well-structured policy and the technical controls to make it real. This guide gives you the policy framework. The final section shows you how to enforce it.
Shadow AI Policy vs AI Acceptable Use Policy
Organizations often confuse these two documents or try to combine them into one. This is a mistake. Each serves a distinct purpose, and you need both to cover the full spectrum of AI governance.
An AI Acceptable Use Policy (AUP) is broad. It covers all AI usage in the organization, including approved and sanctioned tools. It defines how employees should use AI responsibly: what data they can share, how they should review AI outputs, attribution requirements, and quality standards.
A Shadow AI Policy is specific. It targets unauthorized AI usage: how the organization detects it, prevents it, and responds when it occurs. It defines what constitutes an unauthorized tool, what monitoring the organization conducts, and what happens when violations are discovered.
| Dimension | AI Acceptable Use Policy | Shadow AI Policy |
|---|---|---|
| Focus | How to use approved AI tools responsibly | How to detect, prevent, and respond to unauthorized AI usage |
| Scope | All sanctioned AI tools and features | All unsanctioned, unapproved, or unmanaged AI tools |
| Tone | Enabling and prescriptive | Protective and responsive |
| Key sections | Data handling rules, output review, attribution, quality standards | Monitoring, incident response, enforcement, tool classification |
| Audience | All employees using approved AI | Security, IT, compliance teams; all employees as subjects |
Why you need both: the AUP governs the sanctioned use of AI, but according to UpGuard (2025), 81% of enterprise AI usage is unauthorized. Your AUP covers the 19% that is visible and approved. Your Shadow AI policy covers the other 81%.
The 12 Sections Every Shadow AI Policy Needs
A comprehensive Shadow AI policy should cover the following twelve sections. Each section addresses a specific dimension of the Shadow AI problem, from defining what counts as unauthorized AI to establishing how employees can request new tools.
1. Purpose and Scope
Begin by stating why the policy exists and who it applies to. The purpose should reference the organization's commitment to responsible AI adoption while protecting sensitive data, intellectual property, and regulatory compliance.
The scope should be broad: all employees, contractors, consultants, temporary workers, interns, and third-party vendors who access company systems or data. Shadow AI risk does not respect organizational boundaries. A contractor pasting confidential data into ChatGPT creates the same exposure as a full-time employee doing so.
2. Definitions
Define all key terms clearly to eliminate ambiguity. At a minimum, your definitions section should cover:
- Shadow AI: Any AI tool, service, model, extension, or AI-powered feature used for work purposes without explicit approval from IT, Security, or the designated governance body.
- Approved AI tools: AI tools that have passed the organization's security review and are authorized for specified use cases with defined data restrictions.
- Prohibited AI tools: AI tools that have been evaluated and explicitly banned due to security, privacy, or compliance concerns.
- Sensitive data categories: Define what constitutes sensitive data in the context of AI usage, including PII, PHI, financial data, intellectual property, source code, credentials, trade secrets, and regulated data.
3. AI Tool Classification (3 Tiers)
Establish a clear tiered system for categorizing AI tools. A three-tier model provides the right balance between simplicity and granularity; a machine-readable sketch follows the list:
- Tier 1: Fully Approved. Tools that have completed security review, have enterprise data processing agreements in place, and are managed by IT. No usage restrictions beyond the AUP. Examples: enterprise ChatGPT with data protection, company-managed Copilot with SSO.
- Tier 2: Limited Use. Tools that are permitted for non-sensitive data only. Employees may use these tools for general research, brainstorming, and public information processing, but must not submit confidential, regulated, or proprietary data. Examples: free-tier AI tools for non-sensitive tasks.
- Tier 3: Prohibited. Tools that are blocked entirely due to unacceptable risk. This includes AI tools with known data retention issues, tools based in jurisdictions that conflict with data residency requirements, and tools that have failed security review. Examples: AI tools without clear privacy policies, tools that train on user data by default.
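To make the tier model operational, the classification has to live somewhere machines can read it, not only in a document. Below is a minimal Python sketch; the tool names, the `Tier` enum, and the default-deny fallback are illustrative assumptions, not references to any real product list.

```python
from enum import Enum

class Tier(Enum):
    APPROVED = 1    # full security review, enterprise DPA, IT-managed
    LIMITED = 2     # permitted for non-sensitive data only
    PROHIBITED = 3  # blocked entirely

# Illustrative registry; the authoritative list is the approved tools
# document described in Section 5.
TOOL_TIERS = {
    "chatgpt-enterprise": Tier.APPROVED,
    "copilot-managed": Tier.APPROVED,
    "free-tier-chatbot": Tier.LIMITED,
    "unvetted-ai-extension": Tier.PROHIBITED,
}

def tier_for(tool: str) -> Tier:
    # Unknown tools default to PROHIBITED: a tool nobody has reviewed
    # is Shadow AI by definition.
    return TOOL_TIERS.get(tool, Tier.PROHIBITED)
```

The default-deny fallback is the important design choice: an unclassified tool is treated as Tier 3 until someone reviews it.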
4. Data Classification for AI Use
Map your existing data classification framework to AI usage permissions. This creates a clear matrix that employees can reference when deciding whether to use AI for a specific task (the sketch after the list shows the same matrix in code):
- Public data: Tier 1 or Tier 2 tools may be used. Tier 3 tools remain blocked regardless of data class.
- Internal data: Tier 1 (Approved) tools only. Examples: internal meeting notes, non-sensitive project plans, general business communications.
- Confidential data: Tier 1 tools only, with DLP controls active. Examples: financial reports, customer data, employee records, business strategies.
- Regulated data (PII, PHI, PCI): No AI usage without explicit approval from the Data Protection Officer and active DLP controls that sanitize sensitive fields before submission.
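The matrix above translates directly into a lookup that DLP or proxy rules can evaluate. A minimal sketch, reusing the tier numbers from Section 3 (the enum and function names are illustrative):

```python
from enum import Enum

class DataClassification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    REGULATED = "regulated"  # PII, PHI, PCI

# Tool tiers permitted per data class. CONFIDENTIAL additionally requires
# active DLP controls; REGULATED requires explicit DPO approval, so no
# tier is allowed by default.
ALLOWED_TIERS = {
    DataClassification.PUBLIC: {1, 2},
    DataClassification.INTERNAL: {1},
    DataClassification.CONFIDENTIAL: {1},
    DataClassification.REGULATED: set(),
}

def ai_use_permitted(data: DataClassification, tool_tier: int) -> bool:
    return tool_tier in ALLOWED_TIERS[data]
```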
5. Approved Tools List
Maintain a living document that lists every sanctioned AI tool along with its approved use cases, data restrictions, and the team responsible for its governance. This list should be easily accessible to all employees and updated regularly.
For each approved tool, document: the tool name and version, approved use cases (e.g., "code completion for non-sensitive repositories"), data restrictions (e.g., "no PII, no credentials, no proprietary algorithms"), the responsible team or tool owner, the date of the last security review, and any configuration requirements (e.g., "must use SSO, must disable training on company data").
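Storing each entry as structured data rather than free prose keeps the list reviewable and diff-able. A sketch of one record shape, using the fields listed above (the field names and example values are illustrative):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ApprovedTool:
    name: str
    version: str
    approved_use_cases: list[str]
    data_restrictions: list[str]
    owner_team: str
    last_security_review: date
    config_requirements: list[str] = field(default_factory=list)

example = ApprovedTool(
    name="Company-managed Copilot",
    version="enterprise",
    approved_use_cases=["code completion for non-sensitive repositories"],
    data_restrictions=["no PII", "no credentials", "no proprietary algorithms"],
    owner_team="Platform Engineering",       # illustrative owner
    last_security_review=date(2025, 1, 15),  # illustrative date
    config_requirements=["must use SSO", "must disable training on company data"],
)
```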
6. Tool Request and Approval Process
One of the primary drivers of Shadow AI is that employees cannot get the tools they need through official channels quickly enough. Your policy should define a streamlined request process (an SLA helper is sketched after the list):
- Submission: Employees submit a request form specifying the tool, intended use case, data types involved, and business justification.
- Security review: The security team evaluates the tool against defined criteria: data processing practices, privacy policy, data retention, encryption, SOC 2/ISO 27001 certification, and jurisdiction.
- Timeline: Commit to a maximum review period (e.g., 10 business days for standard requests, 5 for expedited). Long review cycles drive Shadow AI adoption.
- Escalation path: Define how employees can escalate if the review process stalls or if they disagree with a decision.
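A review deadline is only meaningful if it can be computed and tracked. A small sketch of the SLA from the timeline commitment above (counts weekdays only and ignores public holidays; adjust for your calendar):

```python
from datetime import date, timedelta

def review_deadline(submitted: date, expedited: bool = False) -> date:
    """Deadline per the policy SLA: 10 business days standard, 5 expedited."""
    remaining = 5 if expedited else 10
    day = submitted
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday through Friday count as business days
            remaining -= 1
    return day
```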
7. Monitoring and Transparency
This section must balance organizational security needs with employee privacy expectations. Be transparent about what the organization monitors and why; a sample log record follows the list:
- What is monitored: Access to known AI tool domains, content submitted to AI platforms (for sensitive data detection), AI browser extensions installed, and AI features activated in SaaS tools.
- What is not monitored: Clearly define boundaries, such as personal device usage on personal networks.
- Employee notification: State how employees are informed about monitoring (e.g., at onboarding, through annual acknowledgment, via browser extension notifications).
- Data retention: Define how long monitoring logs are retained and who has access to them.
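Transparency commitments are easier to honor when every monitoring event has a consistent, documented shape. A sketch of one log record (all field names and values are illustrative, not any product's schema):

```python
import json
from datetime import datetime, timezone

event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "j.doe",                       # pseudonymize if your privacy review requires it
    "tool_domain": "chat.example-ai.com",  # hypothetical AI tool domain
    "channel": "browser",                  # browser, extension, or SaaS-embedded feature
    "sensitive_data_detected": ["credential"],
    "action_taken": "warned",              # logged, warned, sanitized, or blocked
    "retention_days": 365,                 # per the policy's stated retention period
}
print(json.dumps(event, indent=2))
```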
8. Incident Definition and Response
Define what constitutes a Shadow AI incident and establish clear response procedures (a categorization sketch follows the list):
- Incident categories: (a) Use of a prohibited AI tool without sensitive data exposure. (b) Use of any AI tool with sensitive data exposure. (c) AI tool usage that violates regulatory requirements (GDPR, HIPAA, EU AI Act).
- Response procedures: Containment (immediately assess what data was exposed), Investigation (determine the scope, the tool involved, and data types affected), Remediation (revoke access, rotate credentials if needed, notify affected parties), and Documentation (record the incident for compliance and audit purposes).
- Notification requirements: Define when incidents must be reported to internal stakeholders (CISO, DPO, Legal) and when external notification is required (regulatory bodies, affected individuals).
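The three categories above can be encoded so that triage is deterministic rather than left to judgment in an inbox. A minimal sketch (the function name and flags are illustrative):

```python
def incident_category(prohibited_tool: bool, sensitive_data: bool,
                      regulated_data: bool) -> str:
    """Map an event to the incident categories defined in this section."""
    if regulated_data:
        return "c: regulatory violation"     # GDPR, HIPAA, EU AI Act exposure
    if sensitive_data:
        return "b: sensitive data exposure"  # any AI tool, sensitive data submitted
    if prohibited_tool:
        return "a: prohibited tool use"      # no sensitive data exposed
    return "no incident"
```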
9. Training Requirements
According to WalkMe/SAP (2025), 48.8% of employees actively hide their AI usage from their employer. Training should address not just the "what" but the "why," helping employees understand the genuine risks rather than simply imposing restrictions.
- Mandatory AI literacy: All employees must complete baseline AI risk awareness training that covers data exposure risks, compliance implications, and proper use of approved tools.
- Role-specific training: Developers need training on AI coding assistant risks (secret leakage, code licensing). HR teams need training on AI and employee data. Finance teams need training on AI and financial data protection. Legal teams need training on AI and privilege/confidentiality.
- Frequency: At onboarding, with annual refreshers and ad-hoc updates when significant policy changes occur or after notable incidents.
10. Enforcement and Consequences
Define a progressive enforcement model that balances accountability with learning:
- First violation (low severity): Documented warning and mandatory refresher training. Focus on education, not punishment.
- Repeat violation or moderate severity: Access restrictions, manager notification, and formal documentation in the employee's record.
- Severe violation (sensitive data exposure, regulatory breach): Escalation to HR and Legal, potential disciplinary action up to and including termination.
- Technical enforcement: In parallel with human enforcement, implement technical controls that block prohibited tools, display warnings for risky actions, and log all AI interactions for audit purposes.
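Those technical controls reduce to a small decision rule that combines the tool tier (Section 3) with what DLP detects (Section 4). A hedged sketch, not any vendor's implementation:

```python
def technical_action(tool_tier: int, sensitive_data: bool) -> str:
    """Illustrative mapping from tier and data sensitivity to an action."""
    if tool_tier == 3:
        return "block"               # prohibited tools are blocked outright
    if tool_tier == 2 and sensitive_data:
        return "block"               # limited-use tools never receive sensitive data
    if sensitive_data:
        return "warn-and-sanitize"   # approved tools still pass through DLP
    return "log"                     # every interaction is logged for audit
```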
11. Review Cadence
The AI landscape evolves faster than any other technology category. Your Shadow AI policy must evolve with it:
- Scheduled reviews: Quarterly at minimum. Review the approved tools list, data classification matrix, and incident logs.
- Trigger-based reviews: Conduct an immediate review when new regulations are enacted (e.g., EU AI Act enforcement milestones), after a significant Shadow AI incident, when major new AI tools are released, or when the organization's data classification framework changes.
- Review team: The review should involve Security, Legal, Compliance, HR, and business unit representatives.
12. Employee Feedback Mechanism
This section is often overlooked, but it is one of the most important. Shadow AI exists because employees have unmet needs. If your policy does not include a mechanism for employees to voice those needs, you are treating symptoms instead of root causes.
- Tool request channel: A clear, accessible process for employees to request new AI tools or features.
- Policy feedback: A mechanism for employees to suggest changes to the policy, report friction, or flag overly restrictive rules that drive workarounds.
- Response commitment: Commit to responding to feedback within a defined timeframe. Silence breeds Shadow AI.
Aligning with Compliance Frameworks
Your Shadow AI policy does not exist in isolation. It should align with established AI governance frameworks and regulatory requirements. Here is how the 12 sections map to the three most relevant frameworks.
NIST AI Risk Management Framework (AI RMF)
The NIST AI RMF organizes AI risk management into four functions: Govern, Map, Measure, and Manage. Your Shadow AI policy touches all four:
- Govern: Sections 1 (Purpose and Scope), 10 (Enforcement), 11 (Review Cadence), and 12 (Feedback) establish the governance structure for AI risk management.
- Map: Sections 2 (Definitions), 3 (Tool Classification), and 4 (Data Classification) map the AI risk landscape by identifying tools, data flows, and risk categories.
- Measure: Sections 7 (Monitoring) and 8 (Incident Response) provide the measurement framework for assessing AI risk exposure.
- Manage: Sections 5 (Approved Tools), 6 (Request Process), and 9 (Training) actively manage risk by providing approved alternatives and educating employees. In this context, the Manage function's emphasis on treating identified risks translates to converting Shadow AI users to governed alternatives.
ISO 42001
ISO 42001 is the first international standard for AI management systems, and it is certifiable. Organizations pursuing ISO 42001 certification must demonstrate:
- AI inventory: A complete catalogue of all AI systems in use (Sections 3 and 5 of your Shadow AI policy).
- Risk assessment: Identification and evaluation of AI-related risks (Sections 4 and 8).
- Controls: Implemented controls to mitigate identified risks (Sections 7 and 10).
- Continuous improvement: Evidence of ongoing monitoring, review, and policy refinement (Sections 11 and 12).
In practice, a Shadow AI policy is a prerequisite for ISO 42001 certification: without one, the organization cannot demonstrate control over its AI environment.
EU AI Act
The EU AI Act entered into force in August 2024 and applies in phases, with most obligations enforceable from August 2026. It introduces the most stringent AI governance requirements in the world. Key implications for Shadow AI:
- AI system catalogue: Organizations must catalogue ALL AI systems that influence decisions about people. Shadow AI makes this requirement impossible to meet because unauthorized tools are, by definition, uncatalogued.
- Risk classification: The EU AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal). Shadow AI tools cannot be classified if they are unknown to the organization.
- Penalties: Fines reach up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices; violations of other obligations, such as failing to maintain required documentation or deploying high-risk AI without proper controls, carry fines of up to EUR 15 million or 3% of turnover.
Shadow AI makes EU AI Act compliance impossible. You cannot classify, document, or govern AI systems you do not know about.
Why a Policy Alone Is Not Enough: From Policy to Enforcement with Onefend
The data tells a clear story: 78% of employees use unapproved AI tools despite existing policies (WalkMe/SAP, 2025). Nearly half of them actively hide their usage. Policies without automated enforcement are aspirational documents. They describe what should happen, not what actually happens.
The gap between policy and practice is where Shadow AI risk lives. Closing that gap requires technology that turns each policy section into an enforceable control.
Here is how each section of your Shadow AI policy maps to an Onefend capability:
| Policy Section | Onefend Capability |
|---|---|
| Tool Classification (Section 3) | Shadow AI Discovery: automatically detects all AI tools and services accessed across your organization, including embedded AI features in approved SaaS. |
| Data Classification (Section 4) | DLP Engine: detects 50+ sensitive data types in real time, including PII, credentials, financial data, health records, and source code, before they reach external AI models. |
| Monitoring (Section 7) | Real-time Audit Trails: immutable logs of every AI interaction, searchable by user, tool, data type, and action taken. Built for compliance reporting. |
| Incident Response (Section 8) | Configurable Actions: block, warn, sanitize, or log per policy. Each action is configurable per AI tool, data type, and user group. |
| Training (Section 9) | Educational Modals: in-browser coaching at the moment of risk. When an employee attempts a risky action, Onefend displays contextual guidance explaining the risk and the approved alternative. |
| Review Cadence (Section 11) | Analytics Dashboard: AI usage trends, risk metrics, policy violation patterns, and compliance reports. Provides the data foundation for quarterly policy reviews. |
The difference between a policy and a program is enforcement. Onefend turns your Shadow AI policy from a document into a living, automated governance layer. Instead of hoping employees read and follow the policy, Onefend enforces it at every AI interaction, in real time, across every browser in your organization.
Organizations using Onefend can deploy a Shadow AI policy knowing that every section has a corresponding technical control. The policy defines the rules. Onefend enforces them.
Request a demo to see how Onefend enforces Shadow AI policies in real time.