Skip to content

Google's Gemini AI Suite Hit by Three Serious Vulnerabilities

Google's Gemini AI suite was vulnerable. Here's what you need to know about the patched flaws and how to protect your data.

In the picture we can see three boys standing near the desk on it, we can see two computer systems...
In the picture we can see three boys standing near the desk on it, we can see two computer systems towards them and one boy is talking into the microphone and they are in ID cards with red tags to it and behind them we can see a wall with an advertisement board and written on it as Russia imagine 2013.

Google's Gemini AI Suite Hit by Three Serious Vulnerabilities

Tenable Research has uncovered three serious vulnerabilities in Google's Gemini AI suite. These flaws allowed attackers to manipulate the assistant, extract user data, and access location history. Google has since patched the issues, and no user action is required.

The first vulnerability, found in Google Cloud Assist, permitted attackers to inject poisoned log entries. This manipulation could influence Google's behaviour or facilitate unauthorised cloud resource access. Tenable Research warned that enterprises should treat AI-driven features as active attack surfaces and regularly audit logs, search histories, and integrations for signs of manipulation or poisoning.

The second vulnerability affected the Google Search Personalisation Model. Attackers could insert queries into a victim's Chrome search history, exposing saved data and location information. Meanwhile, the third issue in the Google Browsing Tool allowed attackers to trick the tool into sending hidden outbound requests, embedding private information to attacker-controlled servers. This infiltration could occur through indirect prompt injection, where attacker-controlled content is silently pulled into Google's context, and tool execution provides a pathway for attackers to embed sensitive information into outbound requests for exfiltration.

The vulnerabilities affected Google Cloud Assist, Google Search Personalisation Model, and Google's Browsing Tool. Google has remediated these issues, and no action is required from end users. However, Tenable Research's findings serve as a reminder for enterprises to monitor for unusual outbound requests and regularly audit AI-driven features for signs of manipulation or poisoning.

Read also:

Latest