They knew. They launched anyway. On May 6, 2026, Canada's Privacy Commissioner Philippe Dufresne — flanked by his counterparts in Quebec, British Columbia, and Alberta — released the findings of a joint investigation into how OpenAI trained its first ChatGPT model. Their conclusion: OpenAI did not respect Canadian privacy laws. The company collected vast amounts of personal information to train its large language model without adequate safeguards, without valid consent, and — according to statements from OpenAI leadership at the time — with full knowledge that privacy issues remained unresolved. "We felt we had to move, we knew that there were others out there and so we launched it."
— OpenAI leadership, as quoted in the investigation report

That was enough for Dufresne. "We found that problematic," he said.

What the investigation found

The joint probe, which began in 2023 after a formal complaint, identified several specific violations:

- Collection without consent: OpenAI gathered personal data from Canadians to train ChatGPT without establishing a valid legal basis under PIPEDA (the federal law) or its provincial equivalents.
- Sensitive data: The collected information included health conditions, political views, and data about children, all categories that carry heightened legal protections.
- No safeguards: The company lacked adequate controls to prevent this information from being used in model training, meaning Canadians' personal details could be reproduced by ChatGPT in responses to other users.
- Users were unaware: Many Canadians whose data was collected had no idea it was being used to train an AI system.

The Tumbler Ridge connection

The investigation predates the February 2026 mass shooting in Tumbler Ridge, B.C., but the timing of the report's release puts the incident in stark relief. Seven lawsuits have been filed in California accusing OpenAI and Sam Altman of negligence in connection with the attack. According to the filings, the shooter had a ChatGPT account banned for "disturbing content," allegedly including detailed plans for violent scenarios. The lawsuits claim approximately 12 OpenAI employees urged the company to notify Canadian law enforcement about the account. Nothing was done until after the tragedy. Late last month, Altman wrote an apology letter to the Tumbler Ridge community for failing to alert the RCMP.

What OpenAI says now

The company disagreed with the investigation's findings, asserting it was compliant "in most respects" with applicable privacy legislation.
In a blog post published the same day as the report, OpenAI explained its data practices:

- It claims to use only freely and openly accessible information for training.
- It says it applies a privacy filter to mask personal information in text.
- It emphasized ongoing efforts to "strengthen how we detect and respond to credible threats of violence while maintaining privacy safeguards."

The privacy commissioners acknowledged that OpenAI has since taken steps to improve privacy protections and has agreed to implement further measures.

Why this matters beyond Canada

The report comes amid a global crackdown on AI training data practices:

- The EU's AI Act requires transparency about training data and prohibits certain uses of personal data in AI systems.
- Multiple lawsuits are pending in the U.S. alleging unauthorized use of copyrighted and personal content in LLM training.
- Australia's recent social media ban for under-16s reflects growing concern about how tech platforms handle user data.

Canada's privacy czar used the report to argue for updated federal privacy legislation. "As AI is increasingly being integrated into personal and professional applications," Dufresne said, "updated laws would help further support the safe deployment of new technologies to protect Canadians' fundamental right to privacy." Conservative Leader Pierre Poilievre voiced support for reviewing privacy laws "to make sure they're matched with the times."

What you can do

Opt out of data training where available. OpenAI and other AI companies offer opt-out mechanisms for future training; the default is "in."
Review what you've shared with AI assistants. If you've used ChatGPT for anything sensitive (health questions, legal issues, work documents), assume that data sits in training corpora compiled before opt-outs existed.
Pressure your representatives. Canada still operates under PIPEDA, a privacy framework that predates modern AI. The Commissioner explicitly called for legislative updates.

The bottom line

OpenAI built one of the most popular consumer products in history using data it admits it didn't have full rights to. The company knew about the privacy gaps, launched anyway, and is now, three years later, agreeing to fix them. The investigation was a joint effort: four separate regulators reached the same conclusion. The question isn't whether OpenAI violated Canadian privacy law. It's whether any meaningful consequence follows.