Since launching Apple Intelligence, its new AI-powered platform, Apple has expanded the scope of its bug bounty program and is offering up to $1 million to anyone who can compromise the platform's servers.
The expanded program aims to uncover weaknesses in Apple Intelligence before the platform reaches the general public.
Apple Intelligence, formally unveiled at WWDC 2024, became available to the general public with iOS 18.1. It provides improved AI capabilities, an enhanced Siri, more robust privacy controls, and secure on-device AI processing.
To address concerns about AI misuse and privacy, Apple has made securing its infrastructure, particularly the Private Cloud Compute (PCC) system, a top priority.
PCC extends on-device AI by handling cloud-based workloads on servers built with Apple's in-house silicon, in a hardened environment designed to prevent intrusions.
To help ensure the platform's security, Apple has invited security researchers and ethical hackers to probe the PCC system for vulnerabilities.
The bug bounty program covers three vulnerability categories, with reward amounts scaled to the severity of each.
Categories of the Program
The first category, Accidental Data Disclosure, covers issues such as configuration or design flaws that unintentionally expose user data, with rewards of up to $250,000.
The second, External Compromise via User Requests, covers flaws that attackers could exploit by, for example, getting users to interact with infected files or links, and offers rewards of up to $1 million for the most serious breaches.
The third, Internal or Physical Access Exploits, covers vulnerabilities arising from internal access points or privilege escalation within the system, with rewards of up to $150,000.
Apple is providing researchers with tools to study the PCC system, including a Private Cloud Compute Security Guide that documents its security features, privacy safeguards, and protocols.
In addition, a Virtual Research Environment (VRE) running on Macs lets researchers test PCC software in a controlled setting.
Apple has also published portions of PCC's source code on GitHub, allowing researchers to examine the platform's architecture more thoroughly.
This hefty bug bounty offer demonstrates Apple's proactive approach to cybersecurity and its aim of delivering a safe AI experience.
By inviting the cybersecurity community to analyze the system, Apple hopes to fix flaws before they reach end customers.
The program reflects Apple's goal of setting a high security standard for AI platforms and ensuring that Apple Intelligence is safe at release.