From education to employment

Reading and Righting: Will the Reading Research Result in Righting Assessments?

Neil Wolstenholme

Neil’s article emphasises the urgent need for educational reform to address AI’s impact on academic integrity, ensuring fair assessments and preparing students for AI-integrated workplaces.

Undetectable AI Has Passed the Turing Test

To demonstrate that AI can now pass the “Turing Test”, cheeky researchers at the University of Reading ran an experiment that challenged the integrity of traditional assessments: they secretly slipped 33 AI-generated exam answers, disguised as real student submissions, into a batch of Psychology undergraduate papers. The AI-generated responses to the take-home exams generally received higher grades than the average for the pool, and only one of the AI entries was flagged as ‘odd’ by the unsuspecting, seasoned university exam markers.

Implications for Educational Assessments

This ground-breaking “largest and most robust blind study of its kind” shines a light on what we already know: AI is having a huge impact on educational assessments, not just here but around the world. The increasing sophistication of generative models will exacerbate issues of academic integrity in take-home tests and unsupervised coursework, both of which are vulnerable to undetectable AI support. Not only can AI outperform humans in these exams, but it is also way too stealthy for the plagiarism detectors!

The implications are profound, and this seismic shift affects all levels of education; for example, teacher-friends of mine tell me they don’t set homework in the sixth form for subjects like history anymore because the students would just use AI to do it for them.

There is, therefore, an urgent need for the education sector to adapt to the realities of a world with AI and to establish clear guidelines on how students should use and acknowledge AI in their work.

What are the potential consequences?

Erosion of Academic Integrity

One of the authors of the Reading report, Dr Peter Scarfe, says:

“Based upon current trends, the ability of AI to exhibit more abstract reasoning is going to increase and its detectability decrease, meaning the problem for academic integrity will get worse.”

It’s only going to get worse…

The rise of AI in education raises ethical questions about its use in preparing and writing academic work, challenging the very definition of “cheating” in the context of tech assistance. Knowing that AI can outperform humans in exams increases the pressure on students to use such technology to remain competitive, leading to higher stress levels and ethical dilemmas about whether to engage in such practices. The use of AI in tests and exams erodes academic integrity, fostering a culture in which dishonest practices become more common and undermining any certainty about who, or what, produced the work.

Ultimately, in such a scenario, who benefits?

Schools and policymakers need to develop new regulations and guidelines to address AI use in academic settings, ensuring fairness and integrity.

Devaluation of Educational Credentials

If AI can take exams on behalf of students undetected, and do better in the process, then grades and qualifications will no longer reflect a student’s knowledge or abilities, undermining trust in the education system and diminishing the value of educational credentials.

Deskilling of Students and Skills Gaps

Allowing the use of AI at schools and universities, however, could create its own problems by “deskilling” students as they become overly reliant on tech for critical thinking and analysis. This points to a broader concern about society’s increasingly unthinking reliance on technology, which impairs our ability to think critically and solve problems independently. We witness this every day as delivery drivers drop off packages at the wrong house and then blame the SatNav for the error.

The knock-on effect of deskilling is an employability skills gap: a workforce less prepared for real-world challenges. This will be exacerbated by digital exclusion, which will increase inequality as access to advanced AI tools may be limited to those who can afford them, widening the gap between affluent and less affluent students.

So, what will assessments look and feel like going forward? And, assuming they survive, how do we ensure that assessments remain fair and reflective of students’ genuine abilities, while also preparing them for a future intertwined with AI?

Shift in Assessment Methods

To counteract AI cheating, educational institutions will need to shift away from traditional exam-based assessments towards more practical, project-based, or oral examinations that are harder for AI to complete without human intervention. Additionally, universities and schools will have to bite the bullet and design assessments that incorporate students’ use of AI and the materials it generates. Ground rules should be developed on how students can legitimately use and acknowledge the role of AI in their work, in order to prevent a crisis of trust.

Preparation for the Future Workforce

Students must still be prepared to adapt to, and work alongside, AI in the future workforce.

Elizabeth McCrum, Pro-Vice-Chancellor of Education at Reading, confirms this:

“Solutions include moving away from outmoded ideas of assessment and towards those that are more aligned with the skills that students will need in the workplace including making use of AI”.

With this in mind, emphasising soft skills and human-centric capabilities will be crucial in education. Universities like Reading are scrapping take-home and online exams and developing alternative assessment methods that involve applying knowledge in ‘real-life, often workplace-related’ settings. These methods will, hopefully, encourage students to use AI responsibly, enhancing their AI literacy and preparing them for modern workplaces.

In a Nutshell

The increasing capability of AI to perform well in exams, outperforming humans and bypassing detection, raises significant concerns for the future of education. The potential consequences highlight the urgent need for educational reform that focuses on integrity, skills development, and ethical considerations to prepare young people for a balanced and equitable future.

As AI continues to advance, universities and society must strike a balance between maintaining the integrity of human skills and knowledge and leveraging AI’s capabilities, which is crucial in preparing young people for AI literacy in employment and life. Some assessments should support students in using AI: teaching them to use it critically and ethically, developing their AI literacy, showing them how to reference it, and equipping them with the skills needed for the modern workplace. Other assessments should be completed without the use of AI, to preserve essential critical thinking and problem-solving skills.

The University of Reading’s response, shifting towards assessments that integrate real-life applications and workplace-related scenarios, is a step in the right direction.

By Neil Wolstenholme, Chairman of Kloodle
