Google Offers Up to dollar 20,000 for Critical AI Vulnerabilities in Gemini
Posted by Suman Sourav
Oct-08-2025 11:10:PM
Category: Lifestyle


Google has launched a new AI Vulnerability Reward Program (VRP) targeting high-impact exploits in its Gemini AI systems, offering rewards up to $20,000 for critical findings. This initiative aims to bolster the security of flagship AI products like Gemini, Search, and Google Workspace applications.

What Qualifies for a Reward?


Google's VRP focuses on vulnerabilities that pose significant risks to user safety and data integrity. Eligible exploits include:

  • Indirect Prompt Injection: Manipulating Gemini to extract sensitive information from connected accounts.

  • Data Exfiltration: Crafting prompts that lead Gemini to summarize and send user emails to unauthorized parties.

  • Tool Misuse: Exploiting Gemini's browsing or code-execution tools to access unauthorized data.

  • Model Metadata Extraction: Uncovering hidden system prompts or sensitive model information that could compromise safety measures.


These vulnerabilities are considered "rogue actions," representing the highest-tier threats in Google's classification system. They involve modifying user accounts or data to compromise security or perform unwanted actions.

Why It Matters


Contemporary AI systems like Gemini interact with various Google services, making them susceptible to complex exploits. For instance, a malicious query could manipulate Gemini to perform unintended actions, such as unlocking smart devices or leaking private information. Security teams are particularly concerned about indirect prompt injections, where malicious instructions embedded in web pages or documents can subtly redirect outputs or steal data the user never meant to share.

The OWASP Top 10 for Large Language Model Applications lists prompt injection, data leakage, and insecure plugin/tooling as top risks. Similarly, MITRE’s ATLAS knowledge base documents adversarial machine learning maneuvers ranging from model inversion to data poisoning.

How to Participate


Security researchers interested in participating can submit their findings through Google's official bug bounty platform. Google emphasizes that only high-impact vulnerabilities demonstrating real-world harm potential will qualify for the top-tier rewards. Simple exploits that do not pose significant risks will not be eligible for substantial compensation.

This program underscores Google's commitment to enhancing the security of its AI systems and encourages the cybersecurity community to actively contribute to identifying and mitigating potential threats
Social Share :

Join Telegram
Join Group