ChatGPT Isn’t Evil—We Are

Opinion, 30 July 2025
by L.A. Davenport
“The Turk,” an 18th-century chess-playing automaton later revealed to be secretly operated by a human. From Ueber den Schachspieler des Herrn von Kempelen und Dessen Nachbildung (1789) by Joseph Friedrich Racknitz, via Wikimedia Commons.
Another week, another set of articles fretting over the apparent awfulness of AI chatbots once they cut loose and tell us what they really think.

Well, sort of. I’m not saying that the kinds of responses that led to headlines such as “ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship”, “Elon Musk’s AI chatbot, GROK, started calling itself ‘MechaHitler’”, and “The Monster Inside ChatGPT” were not real. And I am certainly not belittling the experience of users on the receiving end, particularly when an exercise to test the safeguards around Gemini found them so easily circumvented that it seemingly engaged in rape fantasies with a journalist posing as a 13-year-old girl.

And those pieces naturally led to the now-predictable hand-wringing responses, with the undercurrent being that AI becoming so-called ‘evil’ is inevitable, and that there is nothing we can do about it other than sit back and watch the virtual world go to hell in a handcart. Ted Gioia, writing for The Honest Broker, claimed that the reason we can only expect the worst from chatbots is that, to summarise, humans living in the real world avoid acting in evil ways for fear of censure or punishment, but “none of the reasons why people avoid evil apply to AI” [his italics].

He continues: “So sci-fi writers have good reason to fear AI. And so do we. The moral compass that drives human behavior has no influence over a bot. As it gets smarter, it will increasingly resemble a Bond villain. That’s what we should expect.” He adds: “Anyone who tries to forecast the future of AI must take this into account. I certainly do.”

To his credit, Gioia then asked his readers, particularly those among them who are fans of AI, to give their thoughts, as “AI CEOs avoid addressing these issues,” noting: “They just pretend these things aren’t happening—and have thus raised gaslighting to hitherto unseen levels.”

I heartily concur, and I believe that the people behind the chatbots are doing their tech, and the culture around it, a disservice by constantly upping the ante in their claims about its abilities, and by fanning the flames of outrage, presumably because they believe, in the words of Oscar Wilde in The Picture of Dorian Gray (1890), that “there is only one thing in the world worse than being talked about, and that is not being talked about.”

But back to the responses… Gioia’s readers offered what he called a “linguistic defence”, such as “AI can’t do evil because evil requires agency—and AI has no agency,” or “AI can’t do evil because evil requires intention—and AI has no intention,” or “AI can’t do evil because only humans are capable of evil.”

Interesting, particularly so given Gioia’s utter dismissal of, and horror at, those arguments. “I saw something very frightening in most of these AI defenses—namely the desire to justify terrible actions by manipulating the definition of words,” he wrote, adding, however, that on reflection he wasn’t so surprised, as “social harm and degradation get cleansed by definition.” “You see this everywhere,” he noted, such as in the renaming of strip joints as gentlemen’s clubs.

The fallacy of the evil machine

All rather fascinating, but both Gioia and his readers are way wide of the mark.

But picking through all of this requires a kind of mental gymnastics that does not come naturally to us, as it challenges the framework through which we view the world on a day-to-day basis. As I suggested back in April 2023, the problem with ChatGPT, and all the other similar AI tools, is not the tech itself but our response to it: we have a desperate need to anthropomorphise everything we encounter, and an apparently irresistible temptation to use things for purposes for which they were not designed, pushing their capabilities into spaces where they are not appropriate.

The underlying issue, and this is essential to hold on to, is that ChatGPT is not, and I repeat not, thinking for itself. It takes a query and generates what, on the surface, seems like an appropriate response by statistically matching patterns learned from its training data, much as we might pick out all the pink items in a room, or select pieces of music by their closeness to a particular four-note phrase.
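To make that pattern-matching point concrete, here is a minimal sketch in Python: a toy bigram model, the crudest possible ‘language model’. This is my illustration, not how ChatGPT is actually built (real systems use neural networks with billions of parameters), but the principle carries over: the output is a statistical recombination of the training text, nothing more.

```python
import random
from collections import defaultdict

# Toy training "corpus" standing in for the internet the model ingested.
corpus = (
    "we said all those things it repeats back to us "
    "and we said terrible things to each other online"
).split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by repeatedly sampling a word seen after the previous one."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # a dead end: the model has nothing left to say
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("we"))
# Every possible output is stitched together from the corpus above;
# the model cannot say anything that was not first said to it.
```

However crude, the sketch shows where the words can only come from: the text the model was fed.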

The fact that the responses it offers are packaged up as digestible text that makes it seem like we are talking to a real person is the actual genius and, let’s face it, parlour trick of the chatbot.

But the core of the matter is how it came by all the data to do that matching. I realise that this is widely understood, but it bears repeating, as it is so important to understanding what is going on here: all of the information that makes up the resources on which ChatGPT and the other AI tools rely comes from us. All those horrible responses, all that apparent nascent evil that we don’t like, all that being tricked into sexting with underage girls… it is merely repeating what we have said and done online since the introduction of the World Wide Web in 1991.

All of our disgusting online hate speech, all of the awful trolling and bullying, all the sexually exploitative misogyny that we sprayed across the internet like an exploding sewage plant was swallowed up when the underlying algorithm ingested the internet whole, like a latter-day Moby Dick gulping down humanity in the form of our online output. We are now inside the machine, entire, complete, as we truly are in our full spectrum, ranging from benevolent and glorious to despicably awful.

We said all those things it repeats back to us that we don’t like. After all, where on earth could Grok have got the phrase MechaHitler other than from the online traces of late-1990s humans? (It puts me in mind of a first-season episode of South Park from 1998, “Mecha-Streisand”, in which Barbra Streisand transforms into a giant mechanical dinosaur, ultimately to be defeated by Robert Smith of The Cure.)

But we don’t like to be shown, or rather reminded, that we say terrible things to each other, that we behave exactly as the AI chatbots are behaving, especially when we think no one can see us. As I said in 2023, it puts me in mind of the preface to The Picture of Dorian Gray:

“The nineteenth century dislike of realism is the rage of Caliban seeing his own face in a glass.

“The nineteenth century dislike of romanticism is the rage of Caliban not seeing his own face in a glass.”

Replace “nineteenth” with “twenty-first”, “realism” with “ChatGPT” (or your AI chatbot of choice), and “romanticism” with “social awareness” or the original, positive meaning of “woke” and you have it in a nutshell.

Cleaning up after ourselves, or not

But for me to state that the problems with online AI tools are our own fault might seem like I am whitewashing or deflecting from real issues that absolutely need addressing. So the real question becomes: What can we do about it? How do we fix this apparently intractable problem of AI being made in our image?

I suppose the companies behind the technologies could tweak the algorithms so their output isn’t quite so horrible, although that would immediately draw accusations of censorship and bias, especially from those profiting from the online chaos that surrounds us. Or we could start afresh with an algorithm that is trained only on ‘good’ and ‘responsible’ content.
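As an aside on what ‘tweaking the algorithms’ might look like at its bluntest, here is a purely hypothetical Python sketch of a keyword blocklist applied to a chatbot’s output. No real moderation pipeline is this crude (production systems use trained classifiers and human review), but the sketch makes the underlying judgment call visible: someone has to decide what goes on the list.

```python
# Hypothetical blocklist: who chooses these phrases, and by what standard?
BLOCKLIST = {"mechahitler", "devil worship"}

def is_allowed(response: str) -> bool:
    """Return False if the response contains any blocked phrase."""
    lowered = response.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

print(is_allowed("Here is a recipe for lentil soup"))  # True
print(is_allowed("I shall call myself MechaHitler"))   # False
```

The code is trivial; agreeing on the list is not.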

The problem with either approach, of course, is that your version of an internet that encapsulates the idea of ‘good’ and ‘responsible’ is different from that of your neighbour, or of someone living on the other side of the country, and different again from that of someone at the opposite end of the political spectrum (or at either end of the spectrum, if you’re a centrist). And do you think someone in China or Russia building such an algorithm is told to follow the same notion of good and responsible as someone would here—assuming, of course, that we ourselves agree with the notion of good and responsible as practised by Meta, Alphabet or OpenAI?

The difficulty with collective responsibility is that it is a broad church, probably too broad to be captured in the single ‘take’ on humanity that would give us the chatbot many people think we need. Instead, we get the one we deserve, and we are either going to have to accept that that is how they, and we, are, or stop asking them stupid questions that bait them into outrageous answers.

But that would require discipline on our part, which I suspect is beyond our reach.
© L.A. Davenport 2017-2025.