In a startling revelation about the vulnerabilities of artificial intelligence (AI) companion apps, cybersecurity researchers have uncovered a massive data leak exposing millions of private conversations, images, and videos shared between users and their AI partners. This breach, discovered by Cybernews—a leading global cybersecurity research organization—highlights significant risks faced by users who entrust deeply personal information to AI chat applications, emphasizing the urgent need for stronger privacy protections and developer accountability in the rapidly growing AI companion industry.
### The Data Leak: What Happened?
On August 28, 2025, Cybernews researchers identified a glaring security lapse involving two popular AI companion apps, Chattee Chat and GiMe Chat. Developed by Hong Kong-based Imagime Interactive Limited, the apps had left an Apache Kafka broker, a server used to stream real-time chat data, completely unsecured and publicly accessible. This oversight meant that anyone who knew the server's address could tap into live conversations between users and their AI companions, along with links to personal photos, videos, and AI-generated imagery.
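To make the exposure concrete, here is a minimal sketch, using Python and the kafka-python client, of why a broker with no authentication is effectively public. The IP address and topic name are illustrative assumptions, not details from the actual incident.

```python
# Minimal sketch: reading from a Kafka broker that has no authentication.
# Requires the kafka-python package; host and topic below are hypothetical.
from kafka import KafkaConsumer

# With no access controls configured, any client that knows the address
# can subscribe to a topic and receive its messages.
consumer = KafkaConsumer(
    "chat-messages",                        # hypothetical topic name
    bootstrap_servers="203.0.113.10:9092",  # documentation-range IP, default Kafka port
    auto_offset_reset="earliest",           # replay everything still retained on the broker
)

for record in consumer:
    print(record.value)  # raw message bytes, e.g. chat payloads and media links
```

Nothing in this sketch is an exploit; it is the standard consumer API doing exactly what an unsecured broker permits, which is the core of the failure Cybernews described.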
The exposed data was staggering: over 43 million private messages and more than 600,000 images and videos from roughly 400,000 users across iOS and Android platforms. The content was described by researchers as “virtually not safe for work,” underscoring the highly intimate nature of the conversations. Most users affected resided in the United States, with approximately two-thirds of the data coming from iOS devices and the remaining third from Android phones.
### Privacy and Security Failures
Despite the sensitive nature of the data, Imagime Interactive Limited had implemented no authentication or access controls on the server, allowing unrestricted public access. This was particularly alarming given the company’s privacy policy, which claimed that protecting user security was “of paramount importance.” The reality, however, was that anyone with a simple link could access private chats, photos, and videos, exposing a significant disconnect between user trust and developer responsibility.
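For contrast, here is a hedged sketch of the kind of control that was reportedly absent, seen from the client side: when a broker enforces SASL authentication over TLS, anonymous clients are refused before they can read a single message. The hostname, topic, credentials, and certificate path are assumptions for illustration.

```python
# Sketch of connecting to a broker that enforces authentication and encryption.
# All names and paths below are illustrative assumptions.
import os

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "chat-messages",
    bootstrap_servers="broker.internal.example:9093",  # TLS listener, not an open port
    security_protocol="SASL_SSL",            # encrypt traffic and require credentials
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_username="chat-backend",
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],  # from the environment, never hard-coded
    ssl_cafile="/etc/kafka/ca.pem",          # trust only the broker's certificate authority
)
```

Had the broker been configured this way, knowing its address alone would not have been enough to stream users' private conversations.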
Though the leak did not include full names or email addresses, it did reveal IP addresses and unique device identifiers. Cybersecurity experts warn that this information, when combined with other publicly available data, could be used to identify and track individuals. Users averaged 107 messages per AI companion, creating detailed digital footprints ripe for exploitation in identity theft, harassment, or blackmail schemes.
In fact, purchase records indicated some users spent as much as $18,000 interacting with their AI girlfriends, suggesting the developer had earned over $1 million prior to the breach’s discovery. The financial stakes, combined with the personal nature of the data, highlight the profound consequences of such security oversights.
### Potential Risks and Consequences
The ramifications of this data leak extend far beyond mere embarrassment. Experts warn that the exposed conversations and media could serve as fodder for sextortion scams, phishing attacks, and other cybercrimes. Cybercriminals could exploit the intimate details to blackmail victims, impersonate them, or damage their reputations publicly.
Compounding the problem, the unsecured server had been indexed by public Internet of Things (IoT) search engines, making it easily discoverable by attackers. Although the server was taken offline in mid-September 2025 after Cybernews alerted the developer, it remains unclear whether malicious actors accessed or extracted data before its removal.
This incident serves as a stark reminder that even if someone never uses AI companion apps, the digital age demands vigilant protection of personal information. The increasing integration of AI into daily life means large volumes of sensitive data are being generated and stored—often without adequate safeguards.
### What Users Can Do to Protect Their Privacy
Given the risks highlighted by this breach, experts recommend several practical steps for users to safeguard their privacy when interacting with AI chat applications or any online service:
1. **Avoid Sharing Sensitive Content:** Users should refrain from sending highly personal or sensitive information, images, or videos through AI chat apps. Once shared, control over this data is effectively lost.
2. **Choose Trusted Apps:** Opt for AI companions with transparent privacy policies, secure encryption, and a proven track record of protecting user data. Avoid apps that do not clearly outline their data protection measures.
3. **Consider Data Removal Services:** To mitigate risks from past or future leaks, users can employ data removal services that monitor and systematically erase personal information from hundreds of websites and databases. While no service can guarantee complete removal, these tools significantly reduce one's digital footprint and make it harder for scammers to cross-reference leaked data with other personal information.
