Responding to robot writing in universities

Jack Goodman

The arrival of automated essay-writing software has sent shockwaves through the global higher education sector. Academics and administrators are urgently debating how to respond to a technology that could make cheating a run-of-the-mill, free, and potentially acceptable behaviour for millions of university students.

Just last year Australia’s higher education regulator, TEQSA, was busy blocking access to scores of essay mills – websites that offer to write essays for students, usually for a few hundred dollars with turnaround times of 24 hours to two weeks. That response now feels like it came from a bygone era in the face of the game-changing ChatGPT, a new AI tool that can respond to nearly any prompt by spitting out original text right before one’s eyes.

At a gathering of education leaders in Sydney last month, the tension in the room was driven entirely by AI. Across academia, from North America to Europe to Oceania, responses tend to fall along a spectrum. At one end is a group best described as “the enforcers”. This group sees some form of punishment as the only logical response to any breach of academic integrity rules. Students who break the rules must face consequences, and, if needed, universities should revert to “unhackable” assessments in the form of face-to-face, pen-and-paper exams.

At the other end of the spectrum are “the accommodators”, who see the inevitable rise of AI and conclude that fighting it is pointless. Better to accept the arrival of our computer overlords and think about ways to collaborate with them for educationally constructive purposes. This, however, is a false dichotomy, and it obscures both how we should think about ChatGPT and how we should respond to its arrival.

One useful way to think about technologies is to categorise them by the type and complexity of the intellectual activity they have enabled humans to “offload” from their cognitive burden. A timeline might look something like this:

Time period    Technology        Intellectual activity offloaded
1970s-80s      Calculator        Arithmetic and calculations
1980s          Spell check       Spelling
1990s          Grammar check     Grammar, sentence structure
1990s          Thesaurus         Vocabulary
2000s          Search engines    Memory, discipline knowledge
2020s          AI/ChatGPT        Deep cognitive work

When viewed through this lens, it is clear that part of the arc of technology over the last 50 years has been the progressive increase in the complexity of the intellectual efforts it has subsumed on behalf of its human masters. Simple arithmetic and spelling are often seen as chores. What child hasn’t moaned about spelling lists and flash cards, and what parent hasn’t secretly (or not so secretly) sympathised with them?

But educators have known for decades – and we all understand intuitively – that having a corpus of facts in one’s head allows us to see patterns, make connections and come up with new ideas and ways of seeing the world. That is why we all have to ask ourselves: Do we want to offload the intellectual burden of writing an essay, or even just a first draft? Writing is how we discover what we think about whatever topic we have been studying. There is nothing more fundamental about learning – and no skill more important to most knowledge-economy careers – than producing a coherent, well-argued, grammatically correct piece of writing.

Writing is also one of the hardest skills to learn, which is why watching ChatGPT produce its writing in real time is mesmerising and, for those of us who struggle to produce our own, jealousy-inducing. It is also deeply troubling: the makers of ChatGPT acknowledge that the tool has no understanding of truth and is unreliable, in that it will give different answers to the same question. In short, there is no “intelligence” in ChatGPT. There is only imitation.

How should we respond as educators, then? Already some universities are going down the “enforcer” path: blocking access to ChatGPT, defining the use of any computer-generated content as a breach of academic integrity rules, and signalling that students who break those rules may be severely punished. While this is an understandable response, it deals only with the consequences of academic dishonesty, not its causes. Perhaps a tool will be developed that can identify AI-generated content; if so, a new bot will surely come along to defeat it, and the spy-versus-spy arms race will continue ad infinitum.

Better to start with the causes of academic dishonesty. If we can mitigate them, then students will be far less likely to turn to the dark side – whether that be copying text off the internet, paying a third party for an essay, or using an AI-bot to slap 1,500 words together in a matter of seconds.

Why do students cheat? Tracey Bretag et al. identified three factors that influence academic dishonesty:

1. Language background: students whose native language is not English, international students, students from culturally and linguistically diverse backgrounds, and first-generation immigrants.

2. Dissatisfaction with the teaching and learning environment: students who feel ignored, or who are in large-enrolment courses where there is little or no individual or personalised communication.

3. The perception that there are “lots of opportunities to cheat”. In this regard, there is no greater opportunity than ChatGPT – the contract-cheating equivalent of an invisibility cloak that makes detection potentially impossible.

Put another way, students who feel cheated by their institution are more likely to cheat.

Enrolments at universities in much of the western world have grown dramatically, and have perhaps peaked – a result of widening participation policies and a huge spike in international students. We now have a massified higher education system in which investment in the student experience has failed to keep pace with technology and student needs. And we know this because the most reliable student experience data – such as Australia’s Quality Indicators for Learning and Teaching (QILT) – show a sector-wide shortfall in learner engagement and student satisfaction.

Why does an improved student learning experience, including satisfaction metrics, staff wellbeing and critical skills scaffolding, matter so much? Because students who feel their teachers know them and care about them are far less likely to take a shortcut to pass a unit or cheat their way to a degree, especially when faced with enormous financial or societal pressures to simply pass.

It is understandable that universities view the arrival of robot writing as an existential crisis for the sector. Given the current size and scale of most of our universities, it is quite possible that enormous numbers of students will be tempted to cross the line with a tool like ChatGPT. The need for the sector to up its investment in teaching and learning, student well-being, and belonging has existed at least since the start of QILT a decade ago. There is no more time to waste.

By Jack Goodman, Founder and Chair of Studiosity, based in Australia and London.

On 28th February Studiosity is hosting a free online symposium, “UK higher education’s thoughtful response to robot writing”, featuring leaders from Coventry University, Kingston University and the University of Exeter.

