
In February 2024, the journal Frontiers in Cell and Developmental Biology published a paper entitled ‘Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway'. The paper itself, unlikely to be of interest to the ordinary person, found a worldwide audience and internet savagery thanks in no small part - and pardon the pun - to a large rat penis.

Let me explain.

Within the paper, the authors featured a figure of a dissected rat penis which - and I exaggerate not - was more than double the size of its body, while labels in the figure read ‘testtomcels' and ‘dck'. No, I will not be re-publishing the image here, although a quick search will reveal the girthy small mammal. Unfortunately for the rat community, nature had not found a way; the paper credited the image to Midjourney, a popular generative AI tool.

In June this year, the Guardian reported that thousands of university students in the UK had been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism showed a marked decline. A survey of academic integrity violations cited in the report found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students, up from 1.6 cases per 1,000 in 2022-23. Figures up to May suggested that number would increase again this year to about 7.5 proven cases per 1,000 students - but recorded cases represent only the tip of the iceberg, according to experts.1

The news release featured a pertinent observation - that these data highlight ‘a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.'

These two items - the first being a specific example of the second - show how difficult it is to police the use of AI. Our publishing partner, Springer Nature, has made excellent progress in preventing undeclared AI-generated work, firstly by developing two new tools to protect research integrity, and secondly by donating one such tool this year to the STM Integrity Hub, an industry-wide initiative that supports publishers in ensuring the integrity of their published content. Welcome news indeed, but I can't help but feel publishers, journals and editors are significantly behind the 8-ball on this topic.

Of course, there are obvious warning signs - large rat genitalia aside - that we're able to spot. Images of people with six fingers, reams of made-up references and data that do not add up are easy to see, but make no mistake, this will not always be the case. As large language models (LLMs) digest more and more data, these kinks will eventually be ironed out. Even now, it is almost impossible to identify AI-generated text within an article, and that is a dangerous path for us to be on. The quality of research is already being diluted by the pressure to publish quantity over quality, and AI opens the door for the market to be flooded with make-believe research that supports the point authors wish to make rather than what their data show - something that will ultimately be detrimental to the health of patients.

This responsibility is something the co-founders of Kiroku, Hannah Burrow and Jay Shah, discuss in this issue. I found their candid responses to the advantages and disadvantages of AI refreshing; too often, adopters of the technology refuse to hear concerns. ‘Clarity and accountability', words used by Burrow, should form the cornerstone of how an individual approaches and uses AI - declaring and being up front about its use is infinitely better than being caught out. Their interview is well worth a read.

Ultimately, publishing, like dentistry and the wider healthcare profession, will at some point need to embrace and work with AI rather than against it. When that point comes, and how it manifests, is another discussion entirely. Is it miles down the road, or is it around the corner? Those are unknowns at the time of writing, but the direction of travel is not. One day AI will not generate large rat genitalia (unless requested, I suppose), but until then, we must tread carefully down the AI path, in the full knowledge that we are playing catch-up to its abilities, hoping for the clarity, accountability and openness needed to make the marriage between healthcare, science and publishing a happy one. ◆