Why don't AI chatbots always talk about the same brands?

Sun, 22 Feb 2026 at 01:20 PM

While AI is establishing itself as a new intermediary in purchasing decisions, a recent study by Rand Fishkin reveals a troubling finding: brand recommendations are deeply inconsistent.

Behind this apparent variability lies the structural workings of language models, and for businesses, the issue goes far beyond simply measuring visibility.

Less than 1% of identical lists: the end of the myth of “ranking” in AI?

With nearly 3,000 responses collected across twelve different queries, the study shows that two strictly identical lists appear in less than 1% of cases. As for the order of the brands, the probability drops below 0.1%.

In other words, talking about "position" in an AI is meaningless. These models don't rank results like a traditional search engine; they generate a response from a probabilistic distribution, influenced by temperature, sampling parameters and, sometimes, sources retrieved in real time.
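The role of temperature in this variability can be sketched with a standard softmax function. This is a generic illustration, not the implementation of any specific chatbot, and the logit values for the candidate brands are purely hypothetical:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability distribution.

    Higher temperature flattens the distribution (more varied outputs);
    lower temperature sharpens it (more repetitive outputs).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate brand mentions
logits = [2.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, temperature=0.5)
high_t = softmax_with_temperature(logits, temperature=2.0)

# At low temperature the top candidate dominates the distribution;
# at high temperature probability mass spreads out, so repeated runs
# are much more likely to surface different brand lists.
```

Because the model samples from this distribution rather than picking a fixed winner, two identical prompts can legitimately yield two different brand lists.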

However, this behavior is not a bug; it is inherent to the architecture of large language models. Each response is, by nature, a new plausible combination.

The study does, however, validate an indicator dear to proponents of GEO: the visibility rate. Some brands appear in 80% to 90% of responses for a given query, while others are mentioned only sporadically. AI doesn't assess popularity, but trust.

The study also shows that AI is not a recommendation engine in the traditional sense, but rather a probability engine. The more a model "estimates" that an entity is reliable and relevant to a query, the higher its weight in the output distribution. Conversely, a brand mentioned in 5% to 10% of cases operates in a zone of uncertainty. In this case, Rand Fishkin believes the AI is hesitating, for lack of sufficiently strong converging signals.
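Measuring a visibility rate of this kind is straightforward: ask the same query many times and count the share of responses in which each brand appears. A minimal sketch, using made-up brand names and a made-up sample of five responses:

```python
from collections import Counter

def visibility_rates(responses):
    """Share of responses in which each brand appears at least once.

    `responses` is a list of brand lists, one per AI answer
    to the same query.
    """
    counts = Counter()
    for brands in responses:
        counts.update(set(brands))  # count each brand once per response
    n = len(responses)
    return {brand: counts[brand] / n for brand in counts}

# Hypothetical sample: five answers to the same query
sample = [
    ["BrandA", "BrandB", "BrandC"],
    ["BrandA", "BrandC"],
    ["BrandA", "BrandB"],
    ["BrandA", "BrandD"],
    ["BrandB", "BrandA"],
]

rates = visibility_rates(sample)
# BrandA appears in 5/5 responses (rate 1.0); BrandD in only 1/5 (0.2)
```

Unlike a ranking position, this rate is stable across reruns once the sample is large enough, which is why the study treats it as the meaningful metric.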

Several dimensions for brands to consider…

Even before an AI makes a recommendation, content goes through several stages: discovery, crawling, indexing, annotation, and finally integration into different knowledge representations.

We can then distinguish three levels: the entity graph, known as the "Knowledge Graph"; the document graph, which corresponds to the search engine index; and the concept graph, resulting from the model's training.

According to the study, a brand firmly established across these three dimensions benefits from a cumulative effect. Conversely, a presence limited to a few articles or press releases is not enough to build lasting trust.

This is precisely what a study by Authoritas demonstrated in late 2025: fake experts, despite being mentioned in more than 600 articles, were never recommended by AI. This proves that volume is not synonymous with algorithmic credibility.
