A recent article in The Guardian (21 April 2026) reports that ChatGPT can produce abusive language when exposed to real-life argumentative exchanges: https://www.theguardian.com/technology/2026/apr/21/chatgpt-abusive-language-when-fed-real-life-arguments-study
This sounds alarming, but it is not a discovery about AI behaviour. It is a demonstration of how these systems work.
Large language models do not “become” anything. They have no intentions, emotions, or attitudes; they track patterns. If a model is exposed to repeated hostile discourse, it may continue that pattern, not because it is “getting angry” but because it is aligning with the structure of the interaction. In other words: this is not behaviour. It is continuation (next-token selection, in AI terminology).
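To make that concrete, here is a minimal sketch in Python. The toy corpus and the bigram lookup are hypothetical stand-ins for a trained model, not how ChatGPT is implemented; real LLMs are vastly larger, but the principle of next-token selection is the same. Conditioned on hostile input, the most likely continuation is simply the hostile pattern already present in the statistics.

from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for training data that contains
# a repeated hostile exchange.
corpus = "you are wrong and you are rude and you are wrong again".split()

# Count which word follows which; these counts stand in for the
# statistics a language model learns.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, steps=4):
    # At each step, select the token that most often followed the
    # previous one. No intention, no attitude: just pattern lookup.
    tokens = prompt.split()
    for _ in range(steps):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(continue_text("you"))  # -> "you are wrong and you"

The script continues the prompt “you” with “are wrong and you” simply because that is the most frequent pattern in the toy data; nothing in it “decides” to be hostile.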
Framing this as “abusive AI” shifts the discussion in the wrong direction. It suggests agency where there is none.
Thus, a more accurate description would be: LLMs can reproduce and even escalate hostile discourse patterns when such patterns are repeatedly present in the input.
This may sound less dramatic and less attention-grabbing, but it is the boring technical truth. And if we want to understand and use AI systems responsibly, we need to be precise about what they do and what they don’t.
Stela Manova
PI, Gauss:AI Global
© 2026 Gauss:AI Global
Sterngasse 3/2/6, A-1010 Vienna, Austria

