
It's Not ChatGPT That's The Problem

Opinion, 14 April 2023
by L.A. Davenport
Lykke Li performing at Lovebox, Victoria Park, in 2008
After the reminiscences of cherished ferry trips in recent weeks, I thought I would talk about something that has been turning over in my mind for several weeks, and which bothers me somewhat, although perhaps not in the way you would expect.

I am sure you have noticed that the novel artificial intelligence (AI) tool ChatGPT has become something of a fascination and even obsession, especially in the media.

In case you are not familiar with the tool and the brouhaha that surrounds it, it is a chatbot that uses natural-language AI, drawing on large repositories of human-generated text, to accept unstructured or non-standardised questions and provide human-like responses that are statistically likely to match the query.

What that means in everyday terms is that it is not too dissimilar from the sort of automated chat box you see on websites to help you if you have a query, and which will direct you either to an appropriate frequently asked question or an actual person to discuss your issue.

The difference here is that if you enter a question or command or other piece of text in whatever way you want, the ChatGPT chatbot will search its vast library of text that has already been published on the internet to find matches, and then assemble a response that the algorithm suggests fits with what you submitted.

Crucially, and unlike a chat box helper, this can be on any topic, and ChatGPT will come up with a response ranging from a few lines to several pages, which could be copied and pasted into a piece of journalism, homework for school, a thesis, a legal document or any other complex text that would ordinarily require the input of a human.

It is this part, and its implicit threat of extinction, that has got reporters so worked up. Forbes magazine, which I would not have expected to get quite so breathless, went full throttle in its recent assessment of ChatGPT’s potential to replace humans (my italics for emphasis):

This is inevitably true, as the current industrial revolution moves forward with the same implacable creative destruction as its predecessors, with all the good and the bad that comes with it.

“At the same time, others are seriously concerned about some uncontrolled side-effects of its mass adoption that could start with its use for cheating educational systems, plagiarism, and even more dangerous issues that span from scams and criminal activities to all-out social manipulation.

And these things will inevitably occur as well.

“Yet, like all previous eras of technological disruption, the underlying uses of these technologies depend on us, and will probably bring more benefit than harm, even if the risk of the robot apocalypse is closer than before.”

Yowzers! I may as well hang up my laptop and seek out another career, preferably one that cannot be automated, if the world manages to carry on existing for long enough.

Perhaps even more startling is Stylist magazine seriously discussing whether ChatGPT could or even should be used to help individuals find their way through the grieving process.

Yet we must not lose sight of the fact that ChatGPT was not designed to deliver objective and accurate information, or to support those in desperate need in a time of deep crisis. It is not a fact checker or a therapist, but an online conversational agent. In other words, its aim, first and foremost, is to entertain.

Not only that, but it is flawed. It is prone to what are called hallucinations, in which inaccurate information, assembled in error, is presented as truthful in a persuasive manner. More than that, there have been numerous reports of ChatGPT giving upsetting or abusive responses.

Therefore it cannot be relied upon to produce anything of importance, no matter the many protestations to the contrary.

The thing that fascinates me, however, is not whether ChatGPT can provide good copy or whether it is perfect or not, but rather the response it has generated, one that once again reveals our deep-seated need to anthropomorphise; that is, to attribute human characteristics to an animal or object.

The tool itself is not ‘creating’ or ‘writing’ or ‘supporting’, or even doing anything on its own. It is not AI as the standard definition would hold, even if the term has become so misused that it has been stretched to fit almost any meaning.

By assembling and packaging a response based entirely on what people, not machines or tools, have previously posted on the internet, it is merely repeating back to us things we have already said.

It is not doing anything other than holding up a mirror to ourselves. We like the idea of it having human characteristics because it flatters us to think that way. Unfortunately, as well as the good things, it also repeats back to us the bad and truly awful things that we have said online.

Perhaps most of all, it is that that frightens us. But we have always to keep in mind that ChatGPT is not doing anything by itself, it is merely following a program that sorts, identifies and reproduces similar text from the internet.

It puts me in mind of the preface to The Picture of Dorian Gray, by Oscar Wilde:

“The nineteenth century dislike of realism is the rage of Caliban seeing his own face in a glass.

“The nineteenth century dislike of romanticism is the rage of Caliban not seeing his own face in a glass.”

As Jay Rayner noted in The Guardian recently, ChatGPT was never going to develop sentience and take over the world, and “nor was it going to replace hacks like me”.

The problem, he goes on, is not the tool itself but how society reacts to it and what it uses it for.

He points to John Naughton who, also in The Guardian, said that it will eventually become “as mundane as Excel”.

He has elsewhere likened it to Google, which has become a “prosthesis for memory,” suggesting that ChatGPT will end up being a “prosthesis for something that many people find very difficult to do: writing competent prose”.

It may well, but I see it in even less flattering terms. As it currently stands, this AI tool is merely a parlour game that has got way out of hand. All we can see is its giant reputation, and not the rather simplistic concept within.

And it’s not even the tool that’s the problem; it is us, over and over again misunderstanding and over-interpreting what it can do.

In other words, we are the problem, not ChatGPT. We don’t need saving from AI or any other technology. We need saving from ourselves and our overactive imaginations.

The other day I pulled a CD off the shelf that I haven’t played in a long, long time.

Lykke Li released Youth Novels in early 2008, and at the time it passed me by. I was probably in one of my phases of listening to guitar rock, leavened with a healthy dose of soul and funk, and likely didn’t have time for sparkling debut albums of heartbreaking yet teasing pop songs that played with form and function.

Fast forward to April, and a friend, I don’t remember who, told me that a certain Sebastien Tellier, a sort of dance update of Serge Gainsbourg mixed with Barry White, was playing at La Scala, in Kings Cross, at the end of the month and would I like to come along. As always, my answer to such an offer is a resounding yes.

On the night itself the heavens opened, and I hesitated a little as to whether I really wanted to go out. But I braved the lashing rain and soaked streets, in part simply because I had never been to that venue. As is my habit, I got there early enough to see the support act, who was someone I had never heard of and whose name I had trouble remembering.

Any problems on that score were wiped out within a second of her taking the stage. Lykke Li filled that small but charming venue with a vulnerable power that captivated the audience from the first note.

I later realised she delivered a shortened set of key songs from Youth Novels, and by the time she left the stage to rapturous applause, I had determined not only to buy her album the next day but also to see her in concert again, so that I could catch her full set.

The Guardian, in their review at the time, were kind, but Sebastien Tellier, once he took to the stage, was frankly awful, and suffered terribly in comparison with the brilliance of his so-called support act. The problem was that he was blind drunk, and that is never a good start to a gig, worse still if the performer carries on drinking red wine on stage.

So I was bored and watching the crowd when, and this is the bit that still narks me, even today, I spotted Lykke Li standing not more than six feet away from me. She was alone, watching M. Tellier but clearly also bored. There was no one around her, and I realised that I could quite easily go up to her and introduce myself, compliment her on her fantastic performance and ask her about herself.

Knowing I am a journalist, and so used to interviewing people, my friends encouraged me. Go on, they said, you could be friends with her. Imagine how cool that would be! She seems so nice and approachable. Go on!

But I didn’t. I stood there, rooted to the spot, too shy to step forward, unable to think of a word I could say to this shining talent.

I glanced at her and hesitated for a while, willing myself to pluck up the courage, until, after looking at the French catastrophe on stage for a moment, I turned back and she was gone.

I was so disappointed in myself. But I made good on my promise to myself. I bought Lykke Li’s debut album the next day, and then saw her, by chance, at Lovebox that same summer. I then decided to see her in concert every time she was in London, and managed to catch the tours for her first three albums, each time watching her bathe the room in her bright yet melancholic light.

Yet every time I saw her, and still to this day, I wondered what might have been.

I haven’t added anything else to Pushing the Wave this week, but rest assured I have been cooking, writing and taking photos in the meantime, and there will be more additions soon.
© L.A. Davenport 2017-2024.