Your employees are already using AI. They’re drafting emails in ChatGPT, summarizing contracts in Claude, building reports with Copilot, and asking Gemini to review vendor agreements. None of it went through IT. None of it was approved. And in most cases, nobody knows it’s happening.
This is shadow AI, and it’s now one of the fastest-growing security and compliance risks for mid-market Canadian businesses. This post breaks down what shadow AI is, what data is actually leaving your organization, and what a practical response looks like.
Shadow AI refers to employees using generative AI tools (ChatGPT, Gemini, Claude, Copilot, and others) without IT knowledge, approval, or data governance controls in place. Unlike shadow IT (unauthorized software), shadow AI is harder to detect because it often runs entirely in the browser and leaves no footprint on your network. The risk: confidential business data is leaving your environment, there is no audit trail, and your compliance obligations may already be violated.
What Is Shadow AI?
Shadow AI is the use of artificial intelligence tools, particularly generative AI applications like ChatGPT, Google Gemini, Anthropic Claude, and Microsoft Copilot, by employees without the knowledge, authorization, or oversight of the IT or security team. It is an evolution of the shadow IT problem, but with a critical difference: shadow AI doesn’t just introduce unapproved software into your environment; it actively transmits data out of it.
The term distinguishes between AI tools that have been formally evaluated, licensed, and governed (such as Microsoft Copilot deployed through an M365 enterprise tenant) and those being used informally through personal accounts or free-tier access. The distinction matters because enterprise and consumer versions of the same product carry fundamentally different data handling terms.
How Common Is Shadow AI in the Workplace?
More common than most IT leaders assume. According to Microsoft’s 2024 Work Trend Index, 75% of knowledge workers now use AI tools in their daily work, and 78% are bringing their own AI tools to the job rather than waiting for employer-provided options. The report refers to this as “BYOAI” (Bring Your Own AI), and it is accelerating.
The gap between employee adoption and IT governance is significant. Most organizations have policies covering software installation, data classification, and acceptable use, but very few have updated those policies to explicitly address generative AI. In our conversations with GTA mid-market IT teams, the question “do you have an AI use policy?” is still answered with “not yet” more often than not.
The absence of an AI policy does not mean employees aren’t using AI. It means they are using it without guardrails, and you have no visibility into what data is involved.
What Data Are Employees Actually Putting Into AI Tools?
This is where the theoretical risk becomes concrete. Research from data security firm Cyberhaven found that 11% of the data employees paste into ChatGPT is confidential business data, including source code, financial records, client information, and internal strategy documents. That figure was measured across enterprise environments where employees knew their activity could be monitored.
The most documented example remains Samsung’s 2023 incident, in which engineers at the company’s semiconductor division pasted proprietary source code and internal meeting notes into ChatGPT while troubleshooting software bugs. The data was submitted before Samsung had an AI use policy in place. Samsung responded by banning generative AI tools across the company: a reaction that is effective but not sustainable for most organizations.
In practice, the most common categories of data entering AI tools without authorization include:
- Client contracts and proposals being summarized or edited
- Financial reports and budget documents being analyzed
- HR data including performance reviews and compensation information
- Internal technical documentation and system architecture details
- Personal information about clients, employees, or patients
Employees are not being reckless; they are being efficient. The problem is that efficiency and data governance are not the same objective, and without clear guidance, employees default to the tools that get the job done fastest.
Three Security Risks Shadow AI Creates
1. Data Transmitted to Third-Party Servers Without Your Control
When an employee submits a prompt to a consumer AI tool, that data travels to the vendor’s servers. For free and personal-tier accounts, OpenAI’s privacy policy historically allowed user inputs to be used to improve its models unless users opted out. Even with opt-outs enabled, the data has left your environment and sits on infrastructure you do not control, governed by terms of service your legal team has never reviewed.
2. No Audit Trail
Email is logged. File access is logged. Cloud storage has version history. Generative AI prompts submitted through a browser on a personal account leave no trace in your systems. If a compliance audit asks what client data was shared externally in the last 12 months, you cannot answer that question. If a confidentiality dispute arises, you have no record of what was disclosed and when. This is not a hypothetical gap; it is a gap in your controls right now.
3. Confidentiality Agreement and NDA Exposure
Most client-facing confidentiality agreements were drafted before generative AI existed. They prohibit disclosure of covered information to third parties, and submitting that information to a public AI model almost certainly constitutes disclosure, even if the employee’s intent was purely to get help with a task. The exposure is real whether or not a breach occurs. If a client or partner discovers their information was processed through an unapproved AI tool, the conversation with legal is not a comfortable one.
Shadow AI and Canadian Compliance: PIPEDA and PHIPA
Canadian businesses face specific compliance obligations that make shadow AI a regulatory issue, not just a security one.
Under PIPEDA (Personal Information Protection and Electronic Documents Act), organizations are responsible for personal information under their control, including information held by third parties on their behalf. When an employee submits personal information to a consumer AI tool without a data processing agreement in place, the organization is likely in breach of its accountability obligations under PIPEDA Principle 1, regardless of whether the employee acted intentionally.
For Ontario healthcare organizations and their vendors, PHIPA (Personal Health Information Protection Act) applies an even stricter standard. Submitting personal health information to any unauthorized third-party system, including an AI tool, without explicit consent and a data custodian agreement is a reportable breach. The Office of the Information and Privacy Commissioner of Ontario has been explicit that AI tools are not categorically exempt from these requirements.
The Office of the Privacy Commissioner of Canada has issued guidance indicating that PIPEDA applies to AI systems that process personal information, including third-party AI tools used by employees. Organizations cannot transfer accountability for personal data to a vendor simply by virtue of employees choosing to use that vendor’s tool.
Approved vs. Unapproved AI Tools: What’s the Difference?
Not all AI tools carry the same risk. The critical variable is not which tool is used; it is what data agreement governs how that tool handles your information.
| Tool / Access Type | Data Stays in Your Tenant? | Audit Trail? | Enterprise Data Agreement? | Risk Level |
|---|---|---|---|---|
| Microsoft Copilot (M365 enterprise tenant) | Yes (data stays within your M365 environment) | Yes, via Microsoft Purview | Yes, covered by Microsoft’s DPA | Low (with proper M365 configuration) |
| ChatGPT Enterprise | No (processed on OpenAI infrastructure; inputs not used for training) | Limited | Yes (enterprise data processing agreement) | Low-medium |
| ChatGPT Free / Plus (personal account) | No (data is sent to OpenAI’s servers) | No | No | High |
| Google Gemini (Google Workspace Business/Enterprise) | Yes (covered by Google’s DPA) | Yes, via Google Vault | Yes | Low |
| Google Gemini (personal Google account) | No | No | No | High |
| Any AI tool via personal browser, personal account | No | No | No | High |
The pattern is consistent: enterprise-tier access with a signed data processing agreement is manageable. Consumer-tier access with a personal account is not, regardless of which AI provider is involved.
How to Build an AI Use Policy That Actually Works
Banning AI tools is not a realistic response. Employees will continue using them; the ban simply pushes activity further underground and adds compliance exposure without adding security. The goal is governance, not prohibition.
Classify your data first: Before you can govern AI use, you need to know which categories of data carry the highest risk: client information, personal data, financial records, source code, health information. Most organizations already have a data classification framework; the AI policy maps onto it directly.
Define approved tools and tiers: Publish a short, clear list of AI tools that are approved for business use, specifying which tier or account type is required. “Microsoft Copilot through your M365 account: approved. ChatGPT with a personal account: not approved for any business data.”
Define what data categories can never enter AI tools: Regardless of which approved tool is being used, certain data categories should be off-limits for AI prompts: client personal information, confidential contracts, employee records, health data. Write this out explicitly. Employees need clear rules, not general warnings.
Set up monitoring and logging where possible: Enterprise AI tools with admin consoles (Copilot, Gemini Workspace, ChatGPT Enterprise) provide usage data and, in some cases, prompt-level logging. Enable this. If you run DLP (Data Loss Prevention) tooling, configure rules that flag when sensitive data categories are transmitted to AI endpoints; a minimal log-scanning sketch appears after these steps.
Train your team once, clearly: A one-page policy memo is not enough. A 30-minute lunch-and-learn that explains what the policy covers, why it exists, and what employees should do when they are unsure is far more effective. People follow rules they understand the reason for.
Review the policy every six months: The AI tool landscape changes faster than annual policy review cycles. Build in a semi-annual review to add newly approved tools, remove deprecated ones, and update data classification rules as your business changes.
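For the monitoring step above, you do not need a full DLP deployment to get a first read on consumer AI usage. The sketch below shows one way to surface it from a web proxy or firewall log export. It is a minimal illustration under stated assumptions, not a DLP product: the CSV column names (user, dest_host), the file name proxy_export.csv, and the domain list are placeholders you would replace with your own log schema and an up-to-date endpoint list.

```python
# Minimal sketch: flag outbound requests to consumer AI endpoints in a proxy/firewall log export.
# Assumes a CSV export with "user" and "dest_host" columns -- adjust column names and the
# domain list to match your own proxy or firewall product.
import csv
from collections import Counter

# Illustrative list of consumer endpoints; enterprise tenants (e.g., Copilot inside M365)
# are intentionally not listed here.
CONSUMER_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_ai_traffic(log_path: str) -> Counter:
    """Count requests per (user, domain) pair for known consumer AI endpoints."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in CONSUMER_AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in flag_ai_traffic("proxy_export.csv").most_common():
        print(f"{user}\t{host}\t{count} requests")
```

Even this level of visibility tells you which teams already rely on consumer AI tools, which is exactly the input you need when drafting the approved-tools list.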
The fastest path to a working AI use policy is not starting from scratch. Map your existing acceptable use policy and data classification framework onto AI-specific scenarios. Most of the governance structure is already there; you are adding a layer specific to generative AI, not rebuilding from zero. A focused half-day workshop between IT, legal, and HR is usually enough to produce a first draft.
What to Do Right Now if You Have No AI Policy
If your organization does not yet have an AI use policy, you are far from alone, but you are carrying avoidable risk. The practical starting point is not a comprehensive policy document; it is a two-step interim position you can communicate this week:
- Communicate a temporary rule: “Until we have a formal policy, do not submit client information, personal information, or anything covered by a confidentiality agreement to any AI tool you access through a personal account.” This is not a ban; it is a data minimization instruction your team can actually follow.
- Audit your enterprise AI tool access: Determine which AI tools your organization already has access to through existing software agreements: M365 Copilot, Google Workspace Gemini, GitHub Copilot for developers. These are your sanctioned options. Communicate them clearly so employees have an approved path rather than defaulting to personal accounts.
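On the Microsoft side of that audit, one place to start is the Graph subscribedSkus endpoint, which lists the license SKUs your tenant already owns. The sketch below is an illustration under stated assumptions, not a complete inventory: it assumes you can obtain a Graph access token with Organization.Read.All (the auth flow is omitted), and the "COPILOT" substring match on SKU names is a heuristic you should verify against your own tenant. Google Workspace and GitHub have their own admin consoles for the same question.

```python
# Minimal sketch: list Microsoft 365 license SKUs and flag any that look Copilot-related.
# Assumes a Microsoft Graph access token with Organization.Read.All is already available;
# acquiring it (app registration, MSAL, etc.) is out of scope here.
import requests

GRAPH_SKUS_URL = "https://graph.microsoft.com/v1.0/subscribedSkus"

def list_copilot_skus(access_token: str) -> list[dict]:
    """Return subscribed SKUs whose part number suggests a Copilot license."""
    resp = requests.get(
        GRAPH_SKUS_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    skus = resp.json().get("value", [])
    # Heuristic: match "COPILOT" in the SKU part number -- verify naming in your tenant.
    return [s for s in skus if "COPILOT" in s.get("skuPartNumber", "").upper()]

if __name__ == "__main__":
    token = "<paste a Graph token here>"  # placeholder; use a proper auth flow in practice
    for sku in list_copilot_skus(token):
        enabled = sku.get("prepaidUnits", {}).get("enabled", 0)
        print(f"{sku['skuPartNumber']}: {sku.get('consumedUnits', 0)} of {enabled} licenses assigned")
```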
Shadow AI is not a future risk; it is happening in your organization right now. The question is not whether your employees are using AI tools; they are. The question is whether they are using tools that keep your data in your control, under governance terms you have reviewed, with an audit trail you can produce when asked. An AI use policy does not require banning AI; it requires channeling AI use through approved tools and clear data rules. Most organizations can produce a working first draft in a single focused session.
At Balanced+, we help GTA mid-market businesses build practical AI governance frameworks as part of broader IT and cybersecurity programs, including data classification, DLP configuration, and employee training. If you want to understand where your current AI exposure sits, a cybersecurity assessment is a good starting point. It is not a sales process; it is a structured look at where the gaps are.
Frequently Asked Questions
What is shadow AI, and how is it different from shadow IT?
Shadow IT refers to any software or system used by employees without IT approval: unauthorized apps, personal cloud storage, unapproved SaaS tools. Shadow AI is a specific subset of shadow IT focused on generative AI tools like ChatGPT, Gemini, and Claude. The key difference is the data risk: shadow IT introduces unapproved software into your environment, while shadow AI actively transmits data out of your environment to third-party servers, often with no visibility or audit trail.
Is it a compliance violation for employees to use ChatGPT at work?
It depends on what data is involved and what account type is used. Using ChatGPT through a personal or free account to submit client personal information, health data, or information covered by a confidentiality agreement likely creates a PIPEDA or PHIPA compliance issue, and may breach contractual obligations with clients. Using ChatGPT Enterprise under a signed data processing agreement with proper controls is a different matter. The tool itself is not the determining factor; the governance terms and data involved are.
Can I just ban AI tools to avoid the risk?
You can issue a ban, but it will not eliminate the risk; it will push activity underground and remove whatever visibility you currently have. Employees who rely on AI tools to do their jobs will continue using them; they will simply avoid mentioning it. A more effective approach is to define approved tools with enterprise data agreements, publish clear rules about which data categories are off-limits regardless of the tool used, and monitor usage through available admin consoles. Governance is more durable than prohibition.
What should an AI use policy include for a mid-market Canadian business?
At minimum: a list of approved AI tools and the specific account type required (enterprise, not personal); a list of data categories that cannot be entered into any AI tool regardless of approval status (personal information, health data, client confidential data, source code); a process for employees to request approval for new AI tools; and a reference to existing data classification and acceptable use policies. For businesses subject to PIPEDA or PHIPA, the policy should also address third-party data processing agreements and how AI tool vendors are evaluated for compliance.



