
Potential Privacy Issue: Bug in Meta AI May Have Unintentionally Disclosed Private Chats to Other Users

A glitch in Meta AI may have disclosed users' private queries and AI-generated responses to other users. The bug has since been fixed, but the incident underscores significant privacy concerns.

A privacy flaw was recently uncovered in Meta's AI chatbot: a security bug allowed unauthorised users to access other users' private interactions with the AI.

The core issue was broken access control. Meta's servers failed to verify that a user was authorised to view a given prompt, and the ID numbers identifying prompts were simple and predictable, so an attacker could guess them, or enumerate them with automated tools, to retrieve other users' data.
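To make the flaw concrete, here is a minimal sketch of this class of bug. The names (`PROMPTS`, `get_prompt_vulnerable`, `get_prompt_fixed`) and the data are hypothetical, purely for illustration; this is not Meta's actual code.

```python
# Fake data store standing in for a server-side database:
# prompt ID -> (owner, prompt text)
PROMPTS = {
    101: ("alice", "my private question"),
    102: ("bob", "an unrelated question"),
}

def get_prompt_vulnerable(requesting_user, prompt_id):
    """Returns any prompt by ID with no ownership check -- the flaw."""
    record = PROMPTS.get(prompt_id)
    return record[1] if record else None

def get_prompt_fixed(requesting_user, prompt_id):
    """Returns the prompt only if the requester actually owns it."""
    record = PROMPTS.get(prompt_id)
    if record is None or record[0] != requesting_user:
        return None  # deny access instead of leaking another user's data
    return record[1]
```

With the vulnerable handler, `get_prompt_vulnerable("bob", 101)` returns Alice's private prompt; the fixed handler returns `None` for the same call, because the server checks ownership before serving the record.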

This combination of guessable identifiers and missing authorisation checks is a classic insecure direct object reference (IDOR). It highlights the need for rigorous security testing, especially in the fast-paced world of AI development, where such lapses are easily overlooked.

The bug was identified by Sandeep Hodkasia, the founder of security testing firm AppSecure, and was fixed by Meta on January 24, 2025. Fortunately, there was no evidence of malicious exploitation of the bug.

Meta rewarded Hodkasia with a $10,000 bug bounty for privately disclosing the flaw on December 26, 2024.

This vulnerability serves as a reminder that even trusted platforms can have unexpected lapses in security. As AI continues to evolve and become more integrated into our daily lives, it is crucial that developers prioritise security to protect users' privacy and maintain trust.

The incident also underscores the importance of cybersecurity throughout AI development: predictable identifiers and missing authorisation checks are exactly the kind of lapses that routine security testing should catch before a feature ships.
