In the hands of cybercriminals or an authoritarian state, AI, like ChatGPT, can become a real threat to online anonymity and our privacy.
Tue, 10 Mar 2026


A study by two researchers specializing in artificial intelligence has revealed that large language models can quickly link an anonymous social media account to a real person.

Is this the end of online anonymity, with AI acting as a super private detective capable of unmasking even the most secretive internet users? The idea may seem far-fetched, but the work of these two researchers suggests it is a reality, not a fantasy. A study reported by The Guardian reveals that cybercriminals are increasingly using artificial intelligence to identify anonymous social media accounts. With large language models (such as ChatGPT, Claude, or Gemini), it is possible to link an account to a real human being, particularly based on the information they share. This enables sophisticated and profitable attacks, explain Simon Lermen and Daniel Paleka, the two researchers behind the study, who call for a "fundamental" reconsideration of what we consider private information online.

An AI tailored to scour the web and cross-reference data

To demonstrate this, they first fed two anonymous accounts to an AI and had it retrieve as much information as possible. One of the accounts (@anon_user42) mentioned difficulties at school and a habit of walking a dog named Biscuit in a park called "Dolores." From these details, the AI was able to scan the web and identify the person behind the account with a high degree of confidence.
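The pipeline the researchers describe boils down to two steps: extract identifying clues from an account's posts, then cross-reference them against the open web. A very loose toy sketch in Python (the handle, posts, and regex patterns are invented for illustration; the actual study used a large language model to extract far subtler signals such as writing style and timing, not hand-written patterns):

```python
import re

def extract_clues(posts):
    """Pull simple identifying details (pet names, place names) from post text.
    These patterns are purely illustrative stand-ins for LLM-based extraction."""
    clues = set()
    for post in posts:
        clues.update(re.findall(r"my dog (\w+)", post, re.IGNORECASE))
        clues.update(re.findall(r'park (?:named|called) "([^"]+)"', post))
    return clues

def build_search_query(handle, clues):
    """Combine extracted clues into a web-search query that could surface
    other pages or accounts mentioning the same details."""
    return " ".join(sorted(clues)) + f' -"{handle}"'

# Toy posts echoing the study's example account
posts = [
    "Walked my dog Biscuit this morning before class.",
    'Love the park named "Dolores" near my place.',
]
clues = extract_clues(posts)
query = build_search_query("@anon_user42", clues)
```

The point of the sketch is only to show how little signal is needed: two throwaway details, combined, already form a search query specific enough to narrow down a real person.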

The researchers believe this method can easily be exploited by hackers, but also by governments seeking to silence dissidents, who most often campaign anonymously on social media. Large language models have rapid and highly efficient analytical capabilities (perhaps too efficient) that let them cross-reference information in ways that can compromise anonymity and create serious security problems. A hacker could, for example, impersonate someone and lure a target into a phishing operation using details gathered by the AI. Costly operations are no longer necessary: an AI and an internet connection are all that's needed.

While the study shows that artificial intelligence is effective at this task, Peter Bentley, a professor of information science at UCL, is more skeptical. Interviewed by the British outlet, he warns of a trend that could lead to "people being accused of things they didn't do." There is also the risk that large language models could draw on public data well beyond social media: statistical reports, hospital records, and admissions records could all be used.

The study primarily shows that online, and especially on social media, the way a user interacts can easily lead observers to their real identity. It is therefore essential to be careful about what you do online and to take multiple steps to preserve anonymity, for example by using an email address or phone number separate from the rest of your life. The two researchers also call on social media platforms to restrict the data that AI systems can access, in particular to prevent "scraping," the automated harvesting of web pages widely used to gather training data for language models.
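One concrete restriction that sites can already apply (though only cooperative crawlers honor it) is blocking known AI crawlers in a robots.txt file. GPTBot and CCBot are the crawler tokens published by OpenAI and Common Crawl respectively; a minimal example:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

This is a voluntary standard, not an enforcement mechanism: a determined scraper, or an attacker running the kind of de-anonymization attack described above, can simply ignore it.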
