What is Apple's Private Cloud Compute and is it really trustworthy?



Apple Private Cloud Compute - Personal AI


A good friend of mine, who’s always geeking out over the latest AI trends, hit me up with a question: “What’s going to happen to my sensitive work email data once Apple’s new AI starts working its magic all over the cloud?” Great question, right? So, I thought it might be a good idea to get into the weeds a little bit (based on what we know so far) and try to explain how Private Cloud Compute (PCC) actually works.

Buckle up, because this actually feels like AI finally meeting privacy!


Private Cloud Compute

Apple just dropped a game-changer and an oxymoron at the same time: Private Cloud Compute (PCC), which is a bit like inventing private public transportation...

This tech lets your iPhone, iPad, or Mac handle super complex AI tasks by offloading them to Apple's ultra-secure cloud. Think of it as giving your iPhone a brain boost, but without sacrificing your privacy.


On-Device vs. Cloud Processing

Apple has always been about on-device processing. Your iPhone does all its magic, like recognizing text in photos, right on the device. But here's the catch: modern AI tasks need more horsepower than even the latest iPhone can provide. That’s where PCC steps in, taking over the heavy lifting while keeping your data locked down.


Privacy First

Sending data to the cloud usually means risking privacy. Spies, hackers, and data-hungry companies are always on the prowl. Apple's solution? Build super secure, trustworthy servers in their own data centers.

By default, most AI tasks run locally on your device, but whenever more power is needed, the device can offload tasks to these secure servers, ensuring your data remains yours and yours alone.


The Tech Behind PCC

When your device needs more processing power, it sends an encrypted request to PCC. The request is encrypted with the public keys of the PCC nodes, ensuring end-to-end encryption from your device to the cloud.
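
To make that more concrete, here's a minimal sketch of what sealing a request for a single node could look like, using CryptoKit's standard key agreement and AEAD primitives. This is my own illustration, not Apple's actual protocol: the node key and the "pcc-request" label are made up, and the real system ties keys to attested node identities.

```swift
import CryptoKit
import Foundation

// Illustrative only: hybrid-encrypt a request so that only the chosen PCC node
// (the holder of the matching private key) can read it.
func sealRequest(_ plaintext: Data,
                 for nodePublicKey: Curve25519.KeyAgreement.PublicKey)
    throws -> (ephemeralKey: Curve25519.KeyAgreement.PublicKey, box: AES.GCM.SealedBox) {

    // Fresh ephemeral key pair for this one request.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()

    // ECDH + HKDF derive a one-time symmetric key shared with the node.
    let secret = try ephemeral.sharedSecretFromKeyAgreement(with: nodePublicKey)
    let requestKey = secret.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: Data("pcc-request".utf8),   // hypothetical context label
        outputByteCount: 32
    )

    // Encrypt the AI request; it stays opaque to everything between device and node.
    // The ephemeral public key travels with the ciphertext so the node can derive the same key.
    let box = try AES.GCM.seal(plaintext, using: requestKey)
    return (ephemeral.publicKey, box)
}
```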

Apple is using the same secure tech from their iPhones and Macs in these new servers. Secure Boot and Secure Enclave Processor (SEP) are just the start. They’ve beefed up the hardware to resist tampering and made sure the software can’t keep your data after processing it. In other words, Apple's hardware security ensures that only authorized, cryptographically measured code can run on these nodes. 

Every time a PCC node finishes running its AI tasks, it deletes all user data and wipes its storage clean, so nothing lingers.

The PCC nodes cryptographically measure every piece of software running on them. They sign a hash of the software image and share it with your device, which can then verify that the node is running legitimate, published code. Plus, Apple is making these software images available to security researchers for inspection, adding another layer of trust for anyone who wants to confirm that everything works as claimed.
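
Here's a rough sketch of what that device-side check boils down to: accept a node only if the measurement it presents is properly signed and matches one of the published software images. The type and parameter names are hypothetical; Apple's real attestation flow has more moving parts.

```swift
import CryptoKit
import Foundation

// Hypothetical types for illustration - not Apple's actual attestation format.
struct SignedMeasurement {
    let imageDigest: Data                        // hash of the software image the node claims to run
    let signature: P256.Signing.ECDSASignature   // produced by the node's hardware-rooted key
}

func isTrustedNode(_ measurement: SignedMeasurement,
                   attestationKey: P256.Signing.PublicKey,
                   publishedDigests: Set<Data>) -> Bool {
    // 1. The measurement must be signed by the node's attestation key.
    guard attestationKey.isValidSignature(measurement.signature, for: measurement.imageDigest) else {
        return false
    }
    // 2. The measured image must be one Apple has published for researchers to inspect.
    return publishedDigests.contains(measurement.imageDigest)
}
```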

Apple claims Siri will now get to know you better based on previous tasks and ongoing interactions, which raises the question: if nothing is stored in the cloud, how does Siri learn your personal preferences?

I think the answer lies in the new architecture, in which the first line of processing is performed on the device itself. That's also where personalization and preferences will be stored. Meanwhile, the heavy lifting is performed remotely (and totally "stateless") and will either not be as personal as the on-device part or will receive a larger context that includes some personalized hints. I don't know how Apple designed it, but this is how I would have done it.
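
Put in code, my guess looks roughly like the split below: the profile never leaves the device, and only a small, task-scoped slice of it rides along with the stateless cloud request. Every name here is invented for illustration; this is how I'd sketch it, not how Apple necessarily built it.

```swift
import Foundation

// Invented types: personalization stays on-device, the cloud request is stateless.
struct OnDeviceProfile {
    var preferredTone: String       // learned locally from past interactions
    var frequentContacts: [String]  // never uploaded wholesale
}

struct CloudRequest {
    let task: String
    let contextHints: [String]      // only the slice the remote model needs right now
}

func buildCloudRequest(task: String, profile: OnDeviceProfile) -> CloudRequest {
    // The device decides which hints matter for this one task; the PCC node
    // processes them and then wipes everything, so nothing accumulates remotely.
    CloudRequest(task: task, contextHints: ["tone: \(profile.preferredTone)"])
}
```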


Taking Confidential Computing to the Next Level

This post wouldn't feel right without a bit of technical jargon, so here it is: Apple's PCC doesn't just stop at being secure; it pushes the envelope of what's possible with confidential computing. Here's how:

  • No Data Retention: Unlike other cloud providers, PCC uses stateless computation that doesn’t keep any user data after processing requests.
  • No Privileged Access: Even Apple's Site Reliability Engineers (SREs) can't access your data, no matter what.
  • Custom Hardware and OS: Built with Secure Enclave and Secure Boot, the hardware and operating system are designed to minimize the attack surface.
  • Non-Targetability: User requests are masked so that attackers can't route them to compromised servers (see the sketch after this list).
  • Verifiable Transparency: Security researchers can inspect software images to verify security assurances and identify issues.
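
On the non-targetability point, here's a loose sketch of the idea from the device's side: the request is already sealed for a PCC node and leaves through an independent relay addressed at a pool of nodes, so nothing on the path can steer a particular user's traffic to a particular machine. The relay URL and types are placeholders, not Apple's infrastructure.

```swift
import Foundation

// Placeholder endpoint - in reality an independent, third-party relay.
let relayURL = URL(string: "https://relay.example.com/pcc")!

/// Sends an already-encrypted request blob through the relay.
/// The relay sees neither the content (it is sealed for a PCC node) nor gets to
/// choose which node serves it; the node, in turn, never sees the client's IP.
func sendViaRelay(_ sealedRequest: Data) async throws -> Data {
    var request = URLRequest(url: relayURL)
    request.httpMethod = "POST"
    request.httpBody = sealedRequest
    let (response, _) = try await URLSession.shared.data(for: request)
    return response   // still encrypted end-to-end; only the device can open it
}
```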


The Big Questions

This all sounds great, but it raises some tough questions:

  • Is this really as secure as keeping everything on-device?
  • Could this system be exploited in ways we haven’t thought of yet?
  • Will Apple tell us when our data is being sent to the cloud? 
  • Will there be an option to opt out of any "out-of-device" computation?
  • Most interesting question: how much of the data will end up reaching OpenAI and what would they be able to do with it? 

For some users, these questions are critical for full adoption and might turn into real blockers.


The Future of AI Processing

Despite the challenges, PCC feels to me like a good step forward. Instead of sending your private data to some sketchy third-party server (which is the case when we're using all kinds of new AI services and startup products), Apple is keeping it in-house with a promise not to peek. So if you trust them with your private photos, stocks, payments, work email, Slack, and personal notes, you should probably trust them with AI tasks too.


Wow, a full post about Apple's latest announcements without mentioning emojis, ranting about stealing Android's homescreen customization, or the fact that they just ripped off and rebranded the term AI to become Apple Intelligence. 

