February 5, 2025 By Sue Poremba 3 min read

The rising influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple addresses AI’s security and privacy issues head-on with its Private Cloud Compute (PCC) system.

Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding new layers of insecurity. It had to: Apple needed a cloud infrastructure to run generative AI (genAI) models that demand more processing power than its devices can supply, while still protecting user privacy, according to a ComputerWorld article.

Apple is opening the PCC system to security researchers to “learn more about PCC and perform their own independent verification of our claims,” the company announced. Apple is also expanding its Apple Security Bounty.

What does this mean for AI security going forward? Security Intelligence spoke with Ruben Boonen, CNE Capability Development Lead at IBM, to learn what researchers think about PCC and Apple’s approach.

SI: ComputerWorld reported this story, saying that Apple hopes that “the energy of the entire infosec community will combine to help build a moat to protect the future of AI.” What do you think of this move?

Boonen: I read the ComputerWorld article and reviewed Apple’s own statements about their private cloud. I think what Apple has done here is good. It goes beyond what other cloud providers do because Apple is providing insight into some of the internal components they use and is basically telling the security community, you can have a look at this and see if it is secure or not.

It’s also good because AI is constantly getting bigger as an industry. Bringing generative AI components into regular consumer devices and getting people to trust AI services with their data is a really good step.

SI: What do you see as the pros of Apple’s approach to securing AI in the cloud?

Boonen: Other cloud providers do provide high-security guarantees for data stored on their cloud. Many businesses, including IBM, trust their corporate data to these providers. But a lot of the time, the processes used to secure that data aren’t visible to customers; the providers don’t explain exactly what they do. The biggest difference here is that Apple is providing a transparent environment where users can test those claims.


SI: What are some of the downsides?

Boonen: Currently, the most capable AI models are very big, and that is what makes them so useful. But when we want AI on consumer devices, there’s a tendency for vendors to ship small models that can’t answer every question, so the device relies on the larger models in the cloud. That comes with additional risk. Still, I think it is inevitable that the whole industry will move to that cloud model for AI. Apple is implementing this now because they want to give consumers trust in the AI process.

SI: Apple’s system doesn’t play well with other systems and products. How will Apple’s efforts to secure AI in the cloud benefit other systems?

Boonen: They are providing a design template that other providers like Microsoft, Google and Amazon can then replicate. I think it is mostly effective as an example for other providers to say maybe we should implement something similar and provide similar testing capabilities for our customers. So I don’t think this directly impacts other providers except to push them to be more transparent in their processes.

It’s also important to mention Apple’s Bug Bounty, as they invite researchers in to look at their system. Apple has a history of not working well with security researchers, and there have been cases in the past where they’ve refused to pay out bounties for issues found by the security community. So I’m not sure they’re doing this entirely out of an interest in attracting researchers; it’s also partly to convince their customers that they are doing things securely.

That being said, having read their design documentation, which is extensive, I think they’re doing a pretty good job in addressing security around AI in the cloud.
