Unreality check | Pushing the Wave

Unreality check

Opinion, 23 September 2024
by L.A. Davenport
Are we facing the end of journalism?
Since I wrote my last column, in which I discussed our oh-so-human temptation to package people up into neat and ultimately limiting boxes, I have been knocked out of my philosophical reveries by the fast-changing realities of the working world, particularly that of journalism.

Back in April I talked about the dark underbelly of artificial intelligence (AI), which challenges the easy assumption (one that I had also made) that all this sleek automation comes without a moral dimension, and followed it up with my experiences of how the vast shadow of AI was already having its chilling effect on medical reporting.

I recounted at the time how automation was not necessarily going to take away the jobs of people currently doing the more mundane tasks involved in producing daily news; rather, those individuals would be freed from their more tedious tasks and trained up to do more specialised work. That would leave highly qualified writers such as me and my colleagues high and dry, gasping for air as we discovered that our local ecosystem had changed too quickly for us evolutionary dead-ends to adapt before we perished.

When I was writing all of that, and contemplating the coming end of the career I have known for the past 20 years or so, I had no sense of a timeline for when those changes would begin to bite, and so of when we would see our work drop off to any great extent.

Of course, the world has not stood still. Five short months later, it saddens me to report that AI is now used with abandon to create automated reports on the more straightforward stories that make up the majority of the articles in daily healthcare news feeds. Apart from a small and slightly ambiguous notice at the foot of each piece, and the lack of a credit to a specific writer (replaced by the name of an editor), there is no real indication that what you are reading was generated by a machine.

It is therefore clear that specialist health and medical news, as reported by a qualified journalist with a track record in their given field, is no longer valued, even by the news agencies that base their entire reputation on following the highest standards when covering the latest developments in research and medical practice for healthcare professionals.

It is easy to blame the money men and women, as it were. Indeed, their fingerprints are all over so many of the decisions that have been taken since the end of last year that undermine much of what was once regarded as sacrosanct in the world of journalism. And it is obvious that, once all the talent that went into news stories in the past is lost, it will be nigh-on impossible to return to the standards of old.

Does it all matter?

It is a valid question to ask whether it matters or not that a human did not write a given article, much as one might be forgiven for wondering if a hologram should be allowed to read out a news item on the evening news.

Apart from the fact that no one I have ever spoken to would like to see their newsfeed (or paper) filled with stories written by a robot, there is the principle that news for humans should be created by humans, not machines.

There is also the importance that should be placed on experience. An AI algorithm does not, and cannot, understand what it is reading when it scans a freshly published research paper in a medical journal, because to the algorithm it is just a jumble of letters, punctuation and spaces to be compared with every other jumble of letters, punctuation and spaces it has gobbled whole.

It therefore cannot place the research into context, or judge which elements of the paper are the most important for a medical professional to know for their practice (this cannot be automated, as it depends entirely on the research at hand and its clinical relevance). And there is no use saying that the ‘related articles’ feeds on a journal website are of help in this situation, as they are also automated and therefore depend on metadata, rather than complex human objective and subjective assessment.

Then there is the way in which the AI algorithm is developed. As we have been told over and over again, these are trained on previously published articles, which relies on the twin assumptions (and doubtless many more) that: a) there is a formula to news writing, which is true only in theory, not in practice; and b) that all academic papers can be analysed and categorised in the same way (they cannot; each should be regarded and judged individually on its own merits).

Finally, the articles that the AI system produces based on this imperfect analysis are fed back into the algorithm, with the belief that it will improve the results over time. Instead, this journalistic cannibalism will lead to their degradation, until the algorithm eventually becomes useless.

The parallel here is battery hen farming, in which chickens were (and may still be) fed ground-up dead chickens, as these contain all the nutrients that chickens need (the same was done with cows). Coupled with other rather suspicious husbandry practices, this resulted in the presence of antibiotic-resistant bacteria, prions, arsenicals, dioxins and more in animal feed and animal-based food products.

It all reminds me of my mother's oft-stated puzzlement: if we can so easily see how fundamentally flawed something is, and yet we are not acknowledged experts in the field, why can't 'they', i.e. the people running the show? The answer is that 'they' can, but they are working to different priorities.

Indeed, the agencies that use AI to generate news stories do not care about journalism or the rigours and responsibilities of medical news reporting. They are simply using the articles as a framework within which to place lucrative advertising, and so to them it does not matter what the articles contain or whether they are any good. They just need something, anything, to advertise within.

But as more and more people realise that the news they are reading is worthless junk produced by an AI algorithm, they will go elsewhere, and so the house of cards will come crashing down around their ears. Or rather, around the ears of the people who are left working in the company after the investors have cashed in and jumped onto the next valued business that they will drive into the ground for the sake of profit.

Red carpet delights

If that all seems a little depressing, I was this week able to indulge in one of my more lighthearted obsessions: award ceremony red carpet fashion.

I enjoyed myself thoroughly at the beginning of the year, gawping at the frocks and suits on display at the various awards ceremonies, and am delighted to see that the celebrities at this year’s Emmy Awards have not let me down.

New York Magazine was my guide, and as usual I don't simply judge each look on the outfit itself, but rather on its appropriateness for the red carpet and whether or not it suits the wearer.

The choices were typically formal this time around, which I prefer for an awards ceremony, as long as they remain understated.

That said, the dazzling yet simple elegance of Ayo Edebiri's dress stood out for me. I also loved Quinta Brunson's Georges Chakra, which could have looked ridiculous in the wrong hands but was on the cheeky, and right, side of chic, while Jennifer Aniston's gown was simply divine, as far as I could tell from the pictures.

Not much else really caught my eye, aside from Moeka Hoshi's Miu Miu dress, which was delightful in so many ways, and suited her to perfection. Few men's outfits were even featured in any of the articles I saw, but I loved Bowen Yang's take on a dress suit. The shirt worked extremely well for me, although I might have chosen differently for the shoes.
© L.A. Davenport 2017-2024.