Thursday, 31 July 2025

Apple offers US$1M reward for hacking its AI cloud

Apple is offering a US$1 million reward to anyone who can hack its AI cloud and inviting researchers to test the security of its Private Cloud Compute.

Apple has raised the stakes in digital security, offering up to US$1 million to anyone who can hack into its AI cloud, Private Cloud Compute (PCC). This bold move underscores Apple’s commitment to protecting user privacy and ensuring its artificial intelligence systems remain secure and trustworthy. The PCC will handle tasks requiring greater processing power than is achievable on-device, but that comes with a heightened need for robust security.

Apple opens its AI cloud to the public for testing

In a recent Apple Security blog post, the company announced a new public research initiative, inviting developers, security researchers, and the tech-savvy public to examine PCC’s security. Previously, the PCC was only accessible to select security researchers and auditors. Now, by expanding access, Apple hopes to uncover any vulnerabilities in its AI cloud, offering hackers and security experts a chance to find and report flaws.

The PCC operates as Apple’s AI powerhouse, handling complex tasks when on-device capabilities, such as those on iPhones and Macs, fall short. Apple stresses that much of its AI processing is managed on-device to ensure data privacy. However, in cases where greater processing is necessary, data is transferred to the PCC, where Apple employs end-to-end encryption to keep user information secure. Still, sending personal data off one’s device can be unsettling to some users, and Apple’s bug bounty aims to reassure them by proving the robustness of its cloud security.

Up to US$1 million on offer for critical vulnerabilities

Apple’s highest bug bounty payout is set at US$1 million for anyone able to execute malicious code on PCC servers, the exploit class posing the greatest potential threat to security. This reward aims to surface any vulnerabilities that could compromise user data or cloud functionality. A second, substantial bounty of US$250,000 will go to those who manage to extract user data from the AI cloud, with smaller rewards starting at US$150,000 for accessing user data from a “privileged network position” within Apple’s network.

The tiered structure of Apple’s bounty program encourages ethical hackers and security professionals to discover a wide range of vulnerabilities, from critical issues that could allow malicious software to infiltrate the PCC to less severe but still concerning weaknesses. By providing rewards proportional to the risks posed, Apple is protecting its system and fostering a collaborative approach to security.

Security and privacy are Apple’s priorities

Apple has a history of offering rewards for bug identification and has successfully used such programs to prevent security threats. For example, two years ago, Apple paid a university student US$100,000 to identify a vulnerability within its Mac operating system. The stakes have increased with Apple’s AI cloud, and the company hopes that the added incentive will attract top security talent to help safeguard its latest technological advancements.

The PCC will be critical in delivering seamless AI capabilities as Apple Intelligence becomes more widely integrated into products and services. While Apple assures users that data handled by the PCC remains private and is only accessible to the user, it is taking no chances regarding security. By opening its code to public scrutiny and incentivising discoveries, Apple is betting on the expertise of the tech community to identify and address any issues before they become real risks.

Apple sets a high standard for transparency and security in the tech industry with its new AI cloud bug bounty. If successful, this initiative could enhance user confidence in Apple’s commitment to privacy and secure data handling.

