X wants to curb the invasion of bots and AI content
Tue, 24 Feb 2026 at 03:50 PM


For several months, the rise of AI-generated content has been disrupting the balance of social networks, and X is not immune to the phenomenon. Between automated bots, controversial deepfakes on Grok, and mass-produced comments, the platform seems to want to regain control.

Indeed, in several recent statements, including on X itself, product head Nikita Bier has expressed his desire to keep X a space for exchanges "between humans."

A future mandatory labeling of AI-generated content

Among the options being considered, X is reportedly testing a new label designed to identify posts created with AI tools. In practical terms, a button integrated into the post editor would allow users to indicate that content contains "synthetically generated" elements. The platform already applies a watermark to images and videos produced by Grok, but users are not yet required to disclose the use of third-party tools. However, with the proliferation of hyperrealistic visuals produced by tools like Seedance 2.0 (fake terrorist attacks, doctored images of celebrities, staged political events), confusion is growing. Explicit labeling could thus gradually become the standard. This raises the question of responsibility, though, since such labeling can only work if the platform's users actively cooperate.

A delicate position for X

The situation is all the more complex because X simultaneously encourages the use of generative AI, even for writing posts on the social network… while highlighting articles written by humans in a recent competition.

This stark contrast illustrates the strategic dilemma facing platforms: on the one hand, colossal investments in AI must pay off; on the other, excessive automation threatens the founding promise of social networks, which is to allow everyone to share a personal perspective.

Nikita Bier summed it up bluntly: reading a message in the belief that it comes from a human, only to discover it originates from a machine or a hidden actor, creates a profound sense of unease. A platform's credibility now rests on its ability to clearly distinguish the authentic from the synthetic.

X therefore seems intent on strengthening its bot detection and removal tools while imposing greater transparency. It is a complex undertaking, especially at a time when AI is becoming simultaneously a driver of innovation, a marketing tool, and a source of abuse.

Faced with this reality, the "global public square" that X has claimed to be since its launch may well be staking part of its future on one thing: trust.
