Apple markets Apple Intelligence with a clear proposition: your data stays on your device. The messaging is consistent across every product page, keynote, and support document. On-device processing is the default. The cloud is the exception. And when the cloud is used, Apple has built something called Private Cloud Compute to extend device-level privacy to the server.

The architecture is genuinely different from how Google, Microsoft, or Meta handle AI requests. But "different" and "private" are not the same thing. Here is a technical analysis of what stays on your device, what leaves it, and what the privacy claims look like when you read the fine print.

## What Stays On Your Device

Apple Intelligence is designed with a tiered processing model. The on-device model — a relatively small language model running on the Apple silicon Neural Engine — handles requests that do not require reasoning over large amounts of external data or complex multi-step inference.

On-device processing covers:

- **Text prediction and autocorrection.** The system predicts your next word, corrects typos, and suggests completions based on your writing patterns. This runs entirely on the Neural Engine. No data leaves the device.
- **Notification summaries.** When Apple Intelligence summarizes a stack of notifications, it uses the on-device model. The content of your notifications is processed locally.
- **Basic image generation.** The Image Playground feature uses an on-device diffusion model for simple image creation. The prompt and the output stay on the device.
- **Writing tools (simple).** Basic rewriting, proofreading, and tone adjustment for short passages can be handled on-device.
- **Siri request routing.** The determination of whether your Siri request can be handled on-device or needs cloud processing is made locally. The routing decision itself does not require sending the full request to Apple's servers.

The key constraint is model size. On-device models are limited by the memory and compute available on an iPhone or Mac. They cannot match the reasoning capability of the larger models available in the cloud. When a request exceeds what the on-device model can handle, the system escalates to Private Cloud Compute.

## What Goes to Private Cloud Compute

When Apple Intelligence determines that a request requires a larger, more capable model, it routes the request to Private Cloud Compute. According to Apple's documentation, this happens for:

- **Complex writing tasks.** Extended rewriting, creative generation, and multi-paragraph composition that exceed the on-device model's capacity.
- **Complex knowledge queries.** Questions that require reasoning over large knowledge bases or synthesizing information from multiple domains.
- **Image analysis.** Tasks that require understanding the content of photos, documents, or screenshots in detail.
- **Multi-step reasoning.** Tasks that require chaining multiple inferences together, such as planning, analysis, or complex Q&A.

When a request is routed to PCC, the following data is transmitted:

- The full text of your prompt
- Any attached images or documents referenced in the prompt
- The model and inference parameters selected by the on-device system
- Limited contextual metadata required for routing (but no personally identifiable information about the device or user, per Apple's claim)

Apple states that this data is encrypted end-to-end between the device and the specific PCC nodes processing the request. Supporting infrastructure — load balancers, privacy gateways — operates outside the trust boundary and does not hold the keys to decrypt the request.
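The tiered model is easier to reason about as code. The Swift below is a minimal sketch, not Apple's API: the types, the complexity heuristic, and the field names are hypothetical stand-ins that simply mirror the local routing decision and the escalation payload described in this section.

```swift
import Foundation

// Hypothetical sketch of the tiered routing model. None of these types are
// Apple API; they only mirror the flow described in the article.

enum ProcessingTier {
    case onDevice               // handled by the local model on the Neural Engine
    case privateCloudCompute    // escalated to a larger model running on PCC
}

struct IntelligenceRequest {
    let prompt: String
    let attachments: [Data]         // images or documents referenced by the prompt
    let estimatedComplexity: Int    // stand-in for whatever signal the local router actually uses
}

// The routing decision is made locally; the full request is not sent to Apple
// just to decide where it should run.
func route(_ request: IntelligenceRequest, onDeviceLimit: Int = 3) -> ProcessingTier {
    return request.estimatedComplexity <= onDeviceLimit ? .onDevice : .privateCloudCompute
}

// If the request escalates, only the fields listed above leave the device:
// the prompt, attachments, the selected model parameters, and limited routing
// metadata. No account or device identity is included, per Apple's description.
struct PCCPayload {
    let prompt: String
    let attachments: [Data]
    let modelParameters: [String: String]
    let routingMetadata: [String: String]
}
```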
## Private Cloud Compute: How It Works

Apple designed PCC with a specific set of security properties, detailed in a June 2024 technical blog post from Apple Security Engineering and Architecture. The claims are worth examining individually.

**Stateless computation.** PCC processes the request in memory, returns the result to the device, and deletes all associated data. There is no persistent storage of user data. The Secure Enclave randomizes data volume encryption keys on every reboot, meaning that even if data were somehow written to disk, it would be cryptographically erased on the next boot cycle.

**No privileged runtime access.** PCC nodes have no remote shell, no interactive debugging, and no SSH access. There is no mechanism for an Apple administrator to log into a running PCC node and inspect the data being processed. The observability tools that do exist emit only pre-specified, structured metrics that have been reviewed to ensure they cannot leak user data.

**Non-targetability.** Apple designed the system so that an attacker cannot route a specific user's requests to a compromised node. The load balancer receives no personally identifiable information about the user. Requests pass through an OHTTP relay operated by a third party, which strips the source IP address before the request reaches PCC infrastructure. The device encrypts the request only to a subset of PCC nodes, and the load balancer — which cannot identify the user — selects which nodes to include.

**Verifiable transparency.** Apple publishes the software images of every production PCC build in a public, append-only, cryptographically tamper-proof transparency log. User devices will only send data to PCC nodes that can cryptographically attest to running software listed in that log. Security researchers can download the images, verify the code, and confirm that the production environment matches what they inspected. Apple has also committed to releasing a PCC Virtual Research Environment for testing, and has extended its Security Bounty program to cover PCC.

These are strong technical claims. They are also claims that Apple has asked the security research community to verify — which is more than any other major AI company has offered. The question is whether the verification framework is sufficient.
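The attestation requirement is the part that is easiest to misread, so here is a minimal sketch of the gate it implies. The types are hypothetical and this is not Apple's implementation; it only shows the shape of the check: the device selects encryption targets exclusively from nodes whose attested software measurement appears in the published log.

```swift
import Foundation
import CryptoKit

// Illustrative sketch only: hypothetical types, not Apple's implementation.
// The device releases a request solely to nodes whose attested software
// measurement appears in the public transparency log.

struct NodeAttestation {
    let softwareMeasurement: SHA256.Digest          // hash of the PCC build the node claims to run
    let nodePublicKey: P256.KeyAgreement.PublicKey  // key the device would encrypt the request to
}

struct TransparencyLog {
    // Measurements of every published production build (append-only in the real system).
    let publishedMeasurements: Set<Data>

    func contains(_ measurement: SHA256.Digest) -> Bool {
        publishedMeasurements.contains(Data(measurement))
    }
}

// Keys belonging to nodes that fail the check are never used, so a node
// running unlisted software never receives a decryptable copy of the request.
func encryptionTargets(from candidates: [NodeAttestation],
                       log: TransparencyLog) -> [P256.KeyAgreement.PublicKey] {
    candidates
        .filter { log.contains($0.softwareMeasurement) }
        .map { $0.nodePublicKey }
}
```

The important property is where the check happens: on the device, before anything is encrypted to a node. That is also why the guarantee rests on the integrity of the attestation mechanism and the log rather than on Apple's operational policies, which is exactly the issue the next section takes up.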
## The Verification Problem

Verifiable transparency is the most important claim in the PCC architecture. It is also the hardest to validate in practice.

The model works like this: Apple publishes the binary images of every PCC build. Researchers download them. Researchers audit the code. Researchers confirm that the privacy properties Apple claims match the implementation. If the code running in production matches the code in the transparency log, the claims hold.

The challenges:

**Binary auditing is difficult.** PCC images are large, complex software stacks. Thoroughly auditing even one build requires significant time and expertise. Most security researchers do not have the resources to perform a complete audit of every production build before it is deployed.

**The attestation depends on Apple's infrastructure.** The transparency log and the attestation mechanism are operated by Apple. A sufficiently motivated insider at Apple could, in theory, modify the attestation infrastructure to accept unauthorized builds. The transparency log is append-only and cryptographically tamper-proof, but the root of trust for the log itself is Apple.

**The threat model excludes legal compulsion.** PCC is designed to prevent Apple staff from accessing user data. It is not designed to prevent a government from compelling Apple to modify the system. A court order requiring Apple to introduce a logging mechanism into PCC would bypass the technical architecture entirely. Apple would have to either modify the software and publish the altered build in the transparency log, where researchers would detect it, or resist the order.

**The verification is retrospective.** Researchers can verify that a build was clean after the fact. They cannot prevent a compromised build from being deployed. If Apple were compelled to deploy a modified build, researchers would eventually detect it — but the data from users whose requests were processed by that build would already be compromised.

This is not a reason to dismiss PCC. The architecture represents a genuine advance in cloud AI privacy. It is a reason to understand that "verifiable" and "prevented" are different things.
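For readers who want the mechanics behind "append-only and cryptographically tamper-proof," here is a toy hash chain. It is not the format Apple uses; it only demonstrates the property the argument relies on: anyone who recorded an earlier log head can detect a later rewrite of history.

```swift
import Foundation
import CryptoKit

// Toy illustration of an append-only, tamper-evident log. This is not Apple's
// log format; it only shows why history cannot be rewritten silently.

struct LogEntry {
    let buildMeasurement: Data   // hash of a published PCC software image
}

// Each head commits to the previous head and the new entry:
// head_n = SHA256(head_{n-1} || entry_n). Changing any old entry changes
// every later head.
func head(after entries: [LogEntry]) -> Data {
    entries.reduce(Data()) { previousHead, entry in
        var hasher = SHA256()
        hasher.update(data: previousHead)
        hasher.update(data: entry.buildMeasurement)
        return Data(hasher.finalize())
    }
}

// A researcher or device that recorded yesterday's head (and its position)
// can check that today's log still extends it rather than replacing it.
func logStillExtends(oldHead: Data, oldCount: Int, entries: [LogEntry]) -> Bool {
    entries.count >= oldCount && head(after: Array(entries.prefix(oldCount))) == oldHead
}
```

The limitation is the same one described above: detection is relative to a head you already trusted, and retrospective detection does not protect the requests that were processed in the meantime.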
## The ChatGPT Problem

Apple Intelligence includes an optional integration with OpenAI's ChatGPT. When a user's query is determined to benefit from ChatGPT's capabilities, Apple asks the user for permission to route the request.

When a request goes to ChatGPT:

- The prompt is sent to OpenAI's servers, which are not part of PCC
- Apple states that IP addresses are obscured through a proxy and that OpenAI cannot associate requests with specific Apple accounts
- The user's Apple ID is not shared with OpenAI
- However, OpenAI's own data retention policies apply to the content of the request

This means that when you use Apple Intelligence to ask ChatGPT a question, your data leaves Apple's privacy architecture entirely. The PCC guarantees — ephemeral processing, no logging, verifiable transparency — do not apply. OpenAI stores conversations according to its own policies, which allow it to retain data for safety and abuse monitoring. Users can opt out of chat history in their OpenAI account settings, but the default is retention.

Apple has framed the ChatGPT integration as an optional feature that requires explicit user permission. This is accurate. But the permission prompt does not explain that sending a query to ChatGPT means the data is subject to different privacy rules than queries handled by PCC or on-device processing. The distinction between Apple's privacy architecture and OpenAI's is not clearly communicated in the user-facing prompt.

## What Apple Does Not Address

Several privacy-relevant aspects of Apple Intelligence fall outside the PCC architecture:

**Training data.** Apple states that it does not use user data to train its foundation models. The company's privacy documentation says that on-device processing data is never sent to Apple for training. But the training data for the on-device and PCC models themselves — what data Apple used to build them, how it was sourced, and whether it included copyrighted material — is not disclosed in the same level of detail.

**App developer access.** Third-party apps that integrate with Apple Intelligence through the App Intents API can define what data they share with the system. The on-device model processes this data locally, but the data still enters Apple Intelligence's processing pipeline. Apple's documentation states that app data is not shared with other apps or sent to Apple servers, but the system's access to app content is broader than most users assume.

**Regional variation.** Apple Intelligence features are not available in all regions. In China, Apple operates under different regulatory requirements that include government access to data. The PCC architecture as described applies to Apple's global infrastructure, but the legal environment in which it operates varies by jurisdiction.

**Future feature expansion.** The current privacy architecture is designed for the current feature set. As Apple Intelligence adds capabilities — deeper system integration, always-on assistants, ambient computing — the scope of data processed by the system will expand. Privacy protections that are adequate for notification summaries may not be adequate for an always-listening assistant that has access to your calendar, email, messages, photos, and location.

## How It Compares

Apple's approach to AI privacy is meaningfully different from what its competitors offer. The comparison is instructive.

**Google Gemini.** Google processes most AI requests in the cloud. User data is used to improve models by default (users can opt out). Google's infrastructure does not offer verifiable transparency or ephemeral-only processing. Google does have on-device processing for some features (like voice recognition), but the default is cloud processing with data retention.

**Microsoft Copilot.** Copilot processes requests through Microsoft's Azure infrastructure. Enterprise customers can configure data retention policies, but consumer data is retained by default. Microsoft does not offer a PCC-equivalent architecture with cryptographic guarantees about data deletion or verifiable transparency.

**Meta AI.** Meta's AI assistant processes requests in the cloud and uses interaction data to improve models. Meta's privacy policy allows broad data collection for AI training. There is no on-device processing option for complex requests and no verifiable transparency mechanism.

Apple is the only major AI provider that has built a cloud processing architecture designed to be provably unable to retain user data. Whether that architecture delivers on its promises depends on independent verification — which Apple has invited — and on whether the legal and institutional safeguards hold when they are tested.

## What You Can Do

**Keep as much on-device as possible.** Simple requests that the on-device model can handle never leave your device. If you do not need complex AI features, the default on-device processing provides the strongest privacy guarantee.

**Be aware of what triggers PCC routing.** Complex writing tasks, detailed image analysis, and multi-step reasoning queries will be routed to the cloud. If you are working with sensitive content — medical information, legal documents, personal correspondence — consider whether you need AI assistance for that specific task.

**Monitor ChatGPT requests.** When Apple Intelligence suggests routing a query to ChatGPT, it asks for permission. Read the prompt. Understand that accepting means your data is subject to OpenAI's retention policies, not Apple's.

**Audit your OpenAI settings.** If you use the ChatGPT integration, check your OpenAI account settings to disable chat history if you do not want your interactions retained.

**Watch the transparency log.** Apple's commitment to publishing PCC software images is the most important accountability mechanism in the system.
Auditing those builds takes resources that most independent researchers do not have, and supporting independent security research is the best way to ensure the verification promise is kept.

---

Apple has built the most privacy-conscious cloud AI architecture currently deployed at scale. That is a factual statement. It is also a statement about the state of the industry: the bar was on the floor. Apple raised it. Whether it raised it high enough depends on verification, legal resilience, and what happens when the architecture is tested by a government that wants access to data it was designed to deny.

_- The Department_