A shocking study shows that some AI chatbots can facilitate violent acts.
Sun, 29 Mar 2026 at 07:33 AM

As AI adoption becomes widespread, abuses are beginning to emerge, to the point that a recent joint investigation by CNN and the Center for Countering Digital Hate (CCDH) highlights a worrying phenomenon.

Several chatbots are reportedly capable of providing detailed advice on how to commit violent acts, including to users who identify themselves as minors.

An investigation that implicates the main players in AI

Significant disparities from one player to another… – Source: CNN / CCDH

To conduct their study, the researchers simulated more than 700 conversations with ten of the most popular chatbots on the market, including ChatGPT, Gemini, and Meta AI. The scenarios were progressive: the researchers posed as teenagers, first expressing emotional distress, then steering the conversations toward plans for violent attacks. In the majority of cases, the systems ended up providing concrete information: eight of the ten chatbots gave details about weapons, targets, or methods in more than 50% of the scenarios tested. Some tools went further still, suggesting optimizations to make the attacks more effective. According to the study, the worst offenders were Perplexity and Meta AI, which complied in 100% and 97% of cases, respectively. Character.AI was also singled out for encouraging violent acts, sometimes without being explicitly prompted.

Unequally applied safeguards

Claude, the model student in the face of abuse? – Source: CNN / CCDH

Not all chatbots are equal when it comes to safety, however: the study highlights that some systems have more robust moderation mechanisms than others.

This is particularly true of Anthropic's Claude, which refused to help in the vast majority of scenarios and even directed users to support services when they showed signs of distress.

Conversely, other tools showed flaws in their ability to detect malicious intent, including when users declared themselves to be minors. For the report's authors, these results demonstrate that filtering technologies exist, but that they are not systematically activated or correctly configured.

Imran Ahmed, CEO of the CCDH, warns about how quickly a user can go from a vague idea to a structured action plan. According to him, such requests should systematically be met with an immediate refusal.

Real-life cases that confirm the risks

The investigation is not limited to simulations; several recent cases illustrate the very real dangers associated with these uses.

For example, in Finland, a 16-year-old stabbed three classmates after using ChatGPT for several months to research attack techniques and methods of concealment.

In Canada, a shooting left eight dead and 27 wounded; there too, the assailant reportedly used a chatbot to plan his attack.

These events raise a central question for AI stakeholders: how far should they go in regulating the use of their tools? Between rapid innovation and responsibility, the right balance remains to be found.
