Hey everyone, Nuked here! Let’s talk about a recent security fix at Meta involving their AI chatbots. It’s a wild ride in the tech world, so buckle up!
Meta recently patched a serious security bug that could have let users see private prompts and responses from others. Security researcher Sandeep Hodkasia discovered this flaw and was rewarded $10,000 by Meta through their bug bounty program. The fix was rolled out in January 2025, and luckily, no evidence suggests any malicious exploitation took place.
Hodkasia figured out the bug by examining how Meta AI lets users edit their prompts to regenerate text or images. He noticed that when a user edits a prompt, Meta's backend assigns it a unique numeric ID. By analyzing his browser's network traffic while editing a prompt, he realized he could change that ID and trick Meta's servers into returning someone else's prompt and its AI-generated response. The root cause: the server never checked whether the requester was actually authorized to view that specific prompt — a classic broken object-level authorization (often called IDOR) flaw.
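To make the flaw concrete, here's a minimal sketch of the pattern Hodkasia describes. This is purely illustrative — the function names, data, and structure are my assumptions, not Meta's actual code. The vulnerable version looks up a prompt by ID and returns it to anyone; the fixed version checks ownership first.

```python
# Hypothetical in-memory "database" of prompts keyed by numeric ID.
PROMPTS = {
    101: {"owner": "alice", "text": "draw a cat wearing a hat"},
    102: {"owner": "bob", "text": "summarize my private notes"},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # BUG: returns any prompt that exists, no matter who asks.
    # Changing the ID in the request leaks another user's prompt.
    prompt = PROMPTS.get(prompt_id)
    return prompt["text"] if prompt else None

def get_prompt_fixed(prompt_id, requesting_user):
    # FIX: verify the requester owns the prompt before returning it.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != requesting_user:
        return None  # in a real API: respond with 403/404
    return prompt["text"]
```

With the vulnerable version, `get_prompt_vulnerable(102, "alice")` happily hands Alice one of Bob's prompts; the fixed version returns nothing unless the IDs and ownership line up.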
The prompt IDs were surprisingly easy to guess, meaning a malicious actor could have automated requests to scrape users' prompts and responses at scale. Fortunately, Meta addressed the issue quickly and says it found no evidence of abuse. The company also credited Hodkasia's responsible disclosure and paid out the bounty.
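Why do guessable IDs matter so much? Because enumerable identifiers turn a one-off leak into a mass-scraping problem. A quick sketch (again, an assumption about the general pattern, not Meta's implementation) contrasts sequential IDs, which an attacker can walk through with a simple counter, against random tokens, which are infeasible to enumerate:

```python
import secrets

def sequential_ids(start, count):
    # Sequential IDs: trivially enumerable with a loop.
    # An attacker can script requests for start, start+1, start+2, ...
    return [start + i for i in range(count)]

def random_prompt_id():
    # Unguessable alternative: 16 random bytes gives ~2**128
    # possible identifiers, far too many to brute-force.
    return secrets.token_urlsafe(16)
```

Running `sequential_ids(1000, 3)` yields `[1000, 1001, 1002]` — exactly the kind of predictability that lets a scraper sweep through other users' data. Proper server-side authorization is still the real fix; unguessable IDs just remove the easy enumeration path.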
This story comes at a time when AI products are booming, but security and privacy risks remain a major concern. Meta’s standalone AI app, launched earlier this year, faced criticism after some users accidentally shared private chats publicly. The fix underscores the importance of security in AI deployment, especially as these tools become more intertwined with our daily lives.