
The business impact of Apple Intelligence

Brian Madden, Workplace AI Consultant & Analyst
This page was last updated on June 24, 2024.
This page is part of The Workplace AI Strategy Guide

This page is part of a step-by-step guide to Workplace AI strategy, which I'm currently in the process of writing. I'm creating it like an interactive online book. The full table of contents is on the left (or use the menu if you're on a mobile device).

What's this guide all about? Check out the intro or full table of contents.

Want to stay updated when this guide changes or things are added? There's an RSS feed specifically for strategy guide content: RSS feed for guide updates.

This page is incomplete!

This page is part of my step-by-step guide to Workplace AI, which I'm in the process of writing. I'm doing it in the open, which lets people see it and provide feedback early. However, many of the pages are still just initial brain dumps, bullets, random notes, and/or incomplete.

There's an overview of what I'm trying to accomplish on the "What is this site about?" page.

Brian (July 2024)

Apple announced their AI plans, called “Apple Intelligence,” at their Worldwide Developer Conference in June 2024. Hundreds of opinion & analysis pieces have been written on it, with people split on whether it’s awesome or lame, and whether Apple is pulling ahead or falling behind. Diving into that isn’t really the point of this guide, so I won’t offer anything there, though I’ll link to Scott Galloway’s perspective as it’s the one I most agree with.

Instead, let’s look at Apple Intelligence through the lens of corporate IT. When large swaths of employees get access to these capabilities over the coming months, what will the impact on the company be? Should IT be concerned? What are the risks?

I’m not going to rehash the details of Apple Intelligence since there are already a million articles on that. Instead I’ll walk through the specific things that corporate IT folks should know about.

Quick note: Apple Intelligence will not be available in Europe

On June 21, 2024, Apple announced that Apple Intelligence will be indefinitely delayed for users in Europe. They claim the EU’s Digital Markets Act (DMA) could force them to modify their services in a way that compromises user privacy and security.

No further information, potential timelines, or paths to resolution are available at this time. I will keep this section updated as we learn more.

The IT security risk of Apple Intelligence is low

The first thing that comes to mind about employees using Apple Intelligence is the security risk. Elon Musk lit that fire when he wrote that Apple devices would be banned from his companies due to these risks. While most of what he wrote has since been fact-checked and shown to be untrue, he succeeded in planting the seed in millions of people’s minds that Apple Intelligence is not secure and should not be trusted.

To me, the real question is “secure compared to what?”

Companies that do not allow personal devices, or that require EMM, UEM, and/or locked-down devices, can simply block or disable these new features and carry on as usual.
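Assuming Apple exposes these controls through the usual MDM restrictions mechanism (which is how comparable OS features are managed today), a restrictions payload might look something like the sketch below. The key names here are my assumptions, modeled on Apple’s existing per-feature restriction style, not confirmed Apple keys; check your MDM vendor’s documentation when the controls actually ship.

```swift
// Hypothetical sketch of an MDM restrictions payload for Apple Intelligence.
// The key names are assumptions, not confirmed Apple keys; the real payload
// (com.apple.applicationaccess-style) may differ when the controls ship.
let appleIntelligenceRestrictions: [String: Bool] = [
    "allowWritingTools": false,                    // assumed: system-wide writing tools
    "allowImagePlayground": false,                 // assumed: image generation features
    "allowExternalIntelligenceIntegrations": false // assumed: the ChatGPT hand-off
]
```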

For companies that allow employees to use personal devices, Apple Intelligence can’t possibly be worse than all the existing LLM apps, sites, assistants, and tools that employees already have access to and are using on their devices today.

But it’s sending data to the cloud!

These new Apple Intelligence features are “dual-stage”: a lighter-weight LLM runs locally on the device, and for scenarios where the on-device model isn’t powerful enough, the request is automatically “cloud-bursted” to Apple’s Private Cloud Compute.
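To make the dual-stage idea concrete, here’s a conceptual sketch of the routing decision. None of these types exist in Apple’s SDK; the names are invented purely for illustration, and in the real system the OS makes this decision transparently rather than exposing it to apps.

```swift
// Conceptual illustration of the dual-stage design described above.
// These types are hypothetical, not Apple's actual API.
struct AIRequest {
    let prompt: String
    let fitsOnDeviceModel: Bool   // in reality, the OS decides this, not the app
}

enum InferenceTarget {
    case onDevice              // the lighter-weight local model
    case privateCloudCompute   // Apple's PCC, for requests the device can't handle
}

func route(_ request: AIRequest) -> InferenceTarget {
    // Handle what we can locally; burst the rest to Private Cloud Compute.
    request.fitsOnDeviceModel ? .onDevice : .privateCloudCompute
}
```

The design choice worth noting is the fallback target: it’s Apple’s own hardened Private Cloud Compute, not a general-purpose public cloud, which is what the next few paragraphs are about.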

For cloud and virtualization nerds like me (and probably most people reading this), this is awesome, and, quite frankly, exactly the holy grail of edge computing / cloud-bursting / modern app architectures that we’ve been excitedly hyping for a decade. I would think we’d be celebrating this.

Negative reactions to Apple’s plans here fall into the broad “but you can’t trust the cloud” category. These arguments are anachronistic and tired. From the corporate standpoint, the few remaining organizations who “can’t trust the cloud” in 2024 already have x-ray security where no electronics go in or out, so employee BYOAI is not a thing for them. The rest of the world is already using Azure, AWS, GCP, and others, so the real security question about Apple Intelligence is whether it’s at least as secure as the cloud things companies are already doing today.

From what Apple has shared so far, their approach to security here is impressive. Apple’s Security Engineering and Architecture team published a paper which explains more about how Apple’s Private Cloud Compute works, including why they don’t trust public clouds for this task and how Private Cloud Compute is designed. (TLDR: stateless compute, enforceable guarantees, custom stripped-down OS, verifiable transparency, and truckloads of M2 Ultra processors.)

What about the ChatGPT part? Is that more of a risk?

Apple has also designed a framework where Siri can reach out to third-party LLMs for requests it’s not able to fulfill on its own. Apple will first launch this functionality with ChatGPT as their partner, but they mentioned that other LLM vendors (Google, etc.) will be added in the future.

People’s negative reactions to this have been all over the map, but they broadly sort into expected categories like, “You can’t trust OpenAI,” “If Siri has access to your entire device, now OpenAI will have access to your entire device,” and “Now they’re going to train the next GPT on your own personal data!”

Apple and OpenAI have described how this integration will work, and the things many people are afraid of do not align with those explanations. The functionality which escalates user requests to third-party LLMs on Apple devices is optional. It will behave like existing third-party integrations (browser, email client, maps, music, etc.), and the user or device manager will be able to configure whether it’s always allowed, allowed on a case-by-case basis with user approval, or disabled entirely.
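In other words, the integration boils down to three policy states. Sketching them out (the naming is mine, not Apple’s, and the actual control surface in Settings or MDM may differ):

```swift
// The three configuration states described above, as a hypothetical enum.
// Apple's actual control surface (Settings toggles, MDM keys) may differ.
enum ThirdPartyLLMPolicy {
    case alwaysAllowed   // Siri escalates to the partner LLM without prompting
    case askEachTime     // each hand-off requires explicit user approval
    case disabled        // Siri never sends requests to a third-party LLM
}
```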

So if you manage a bunch of Apple devices and don’t trust OpenAI, you can just disable this integration. And if you have a bunch of employees with Apple devices that you can’t manage, then many of them are already using the ChatGPT app and this new functionality does not introduce any new risk.

As with other GenAI solutions, the bigger risk is around employee and company culture in this near future where AI assistance is personalized, ubiquitous, and “helps” people so seamlessly and smoothly that they hardly even notice it.