How AI is Turbocharging Qualification Fraud, and What AOs Can Do About It

Anyone who falsely creates a qualification certificate, or alters a genuine one and passes it off as their own, is committing qualification fraud.
Qualification fraud has grown in tandem with the Internet and is prevalent across all education and training sectors, from forged CSCS credentials which are compromising the British construction industry, to global “diploma mills” selling fake degrees from non-existent universities. Unsurprisingly, the wave of disruptive innovation caused by the increasing availability of AI applications over the past two years is both increasing the sophistication of qualification fraud and resulting in more effective digital safeguards against this form of crime.
What does AI-Generated Qualification Fraud look like, and what can Awarding Organisations do about it?
AI in the Classroom: Cheating, or Here to Stay?
The award of most qualifications is based on some form of assessment that verifies that a candidate has earned the credential. The disruption caused by AI starts here. A study by the Higher Education Policy Institute in early 2024 found that more than half of British students were already using AI to generate answers for work on which they were examined; this proportion will be considerably higher now, one year on.
Specialised providers such as Turnitin have incorporated AI into their applications to detect traits that indicate the use of AI in student work. This, though, has resulted in an industry of AI “humaniser” applications that edit AI-generated text to remove or disguise the evidence of automation. An example is StealthGPT, which offers to “reshape AI-generated text to undetectable human writing”. As of December 2024, StealthGPT claimed more than 700,000 active users and 5.6 million processed submissions.
The most appropriate ways of incorporating Artificial Intelligence into the processes of teaching, learning and assessment are still being debated, and range from initial blanket bans on AI in the classroom to more realistic acknowledgements that AI is here to stay. Overall, the outcome is likely to be improved and more authentic assessment processes, which focus more on skills and competencies, rather than just on the acquisition of knowledge.
AI and Qualification Fraud: The Evolution of Cybercrime
Qualification fraud can take the form of hacking and altering the details of a genuine certificate, or of acquiring a fake qualification and using it to gain an undeserved benefit. Generative AI applications are being applied for both purposes.
Firstly, AI is being used to make the theft of a genuine qualification far easier. A report by SlashNext found that phishing – sending emails or other messages purporting to be from reputable organisations in order to steal personal information – increased by 1,265% over the twelve months after the release of the first Generative AI applications. An email message that refers to the recipient’s personal circumstances and local context is far more likely to be taken as genuine, setting it apart from the volumes of crude spam which most people delete without reading. Phishing attacks are also using GenAI to bypass guardrails, enabling the illegitimate use of trusted domains such as Microsoft SharePoint, Amazon Web Services and Salesforce.
This combination of personalisation, accuracy and relevant localisation, combined with the use of a legitimate distribution platform, significantly increases the probability of a successful phishing hit and the theft of genuine qualifications, which can be altered and repurposed.
Next, AI can be used to manufacture fake certificates that are much more sophisticated than the crude copies that have previously made the headlines when detected. Deepfake certificates can be manufactured using Generative Adversarial Networks (GANs). A GAN comprises two neural networks that compete with one another, trained on a defined data set, for example a cache of genuine certificates. The first network – the generator – produces new synthetic certificates designed to mimic the training set. The second network – the discriminator – then tries to predict whether each item comes from the pool of genuine data in the training set, or whether it is fake. The automated competition between the two neural networks continues until fake and original can no longer be distinguished from one another. Reported instances of deepfake fraud in general increased tenfold following the launch of ChatGPT.
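The adversarial loop described above can be reduced to a toy sketch. Here a “certificate” is summarised by a single illustrative feature (imagine an ink-density score): genuine documents cluster around 0.7, the generator starts far away, and each round the discriminator adjusts its decision boundary while the generator shifts toward whatever the discriminator accepts. All names and numbers are illustrative, not any specific GAN implementation.

```python
import random

random.seed(0)

# Real "certificates" reduced to one feature, clustered around 0.7.
REAL_MEAN = 0.7
real_samples = [random.gauss(REAL_MEAN, 0.02) for _ in range(200)]

gen_mean = 0.2    # generator: produces fakes around this value
threshold = 0.5   # discriminator: classifies "real" if feature >= threshold

for _ in range(500):
    fake = random.gauss(gen_mean, 0.02)
    real = random.choice(real_samples)

    # Discriminator step: move the decision boundary toward the midpoint
    # of the real and fake examples it has just seen.
    threshold += 0.2 * ((real + fake) / 2 - threshold)

    # Generator step: if the fake was caught (classified as fake),
    # shift the generator toward the region the discriminator accepts.
    if fake < threshold:
        gen_mean += 0.1 * (threshold - gen_mean)

# By the end, fakes occupy the same region as genuine samples and the
# boundary sits inside that region: the discriminator is at chance.
print(round(gen_mean, 2), round(threshold, 2))
```

The same dynamic, scaled up to neural networks and full document images, is what lets a GAN converge on fakes that are indistinguishable from the genuine training set.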
The power and potential of GANs for the production of perfect fakes is demonstrated in work by PwC’s AI Centre for Excellence in Geneva.
As part of a drive to automate back-office functions, PwC generated high-fidelity synthetic invoice data that could be used to train GenAI applications that can provide bookkeeping and accounting services for clients. These replicas of certified documents incorporated smudges, creases, handwritten notes and other marks – the same kind of details that make degree and diploma certificates seem authentic.
PwC’s experimental research shows that GenAI applications can produce a fraudulent credential that is far more sophisticated than manipulated PDF files, and can do so at scale. As SlashNext’s report puts it, Generative Artificial Intelligence is “opening a new chapter in the evolution of cybercrime”.
Fighting Back: AI versus AI
What can AOs do to Counter this new Level of Threat?
The Further Education sector has the advantage of being far more modular than Higher Education, with qualifications that often comprise a set of self-standing modules that may be taken together in full-time study, or more flexibly as part of work-integrated learning. This enables AOs to take full advantage of digital badges, using them as a form of micro-credentialing that validates the legitimacy of each learner’s journey, module-by-module.
To be effective, digital badge solutions should be cryptographically secure, tamper-proof and issued by the AO itself, rather than by a third-party platform which may be a security risk. When an AO adopts a digital badging solution that meets these criteria, vulnerability to all forms of qualification fraud, including fraud that utilises AI, is significantly reduced. Because best-in-class digital badge solutions can be easily verified, AOs can use the record of a learner’s accumulated micro-credentials as an additional security check before certifying the accredited qualification to which they contribute.
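What “cryptographically secure and tamper-proof” means in practice can be sketched in a few lines: the AO signs the badge payload with its key, and any verifier holding the corresponding key can detect the slightest alteration. This sketch uses a shared-secret HMAC for brevity; production badge platforms typically use public-key signatures (for example JWS, as in the Open Badges specification), and all field names here are illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for illustration only; in practice this would be
# managed in a key store, never hard-coded.
ISSUER_KEY = b"ao-signing-key-demo"

def issue_badge(payload: dict) -> dict:
    """Serialise the badge payload canonically and attach the AO's signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_badge(badge: dict) -> bool:
    """Recompute the signature over the payload and compare in constant time."""
    body = json.dumps(badge["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, badge["signature"])

badge = issue_badge({"learner": "J. Smith",
                     "module": "Unit 3: Site Safety",
                     "awarded": "2024-11-02"})
assert verify_badge(badge)  # a genuine badge verifies

# Editing any field invalidates the signature: the tampering is detectable.
tampered = {"payload": dict(badge["payload"], module="Unit 9: Advanced"),
            "signature": badge["signature"]}
assert not verify_badge(tampered)
```

Because verification is a cheap, deterministic check, it can be run instantly by any authorised party, which is what makes module-level micro-credentials practical as a security layer.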
In addition to using secure micro-credentialing to prevent fraud at the modular level, AOs can also issue secure digital credentials when providing their graduates with evidence that they have been awarded a legitimate qualification. Today’s leading-edge digital credentials are secured using blockchain technology, can only be altered by the AO, and can be easily and instantly verified by authorised third parties, directly with the awarding organisation. AOs that incorporate secure digital credentialing into their cybersecurity strategies, and that also use secure digital credentials to protect the modules that make up a qualification, will have gone a long way towards locking their virtual doors against this new wave of AI-enabled qualification fraud.
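The tamper-evidence that blockchain technology brings to a credential register can be illustrated with a minimal hash chain: each entry stores the hash of its predecessor, so altering any historical record invalidates every link that follows. This is a sketch of the underlying idea only; a production register would also sign entries and distribute copies across nodes, and all field names are illustrative.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Hash an entry over its canonical JSON serialisation."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    """Add a record that commits to the hash of the previous entry."""
    prev = entry_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def chain_valid(chain: list) -> bool:
    """Re-walk the chain and confirm every stored hash still matches."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != entry_hash(chain[i - 1]):
            return False
    return True

register = []
append(register, {"learner": "J. Smith", "award": "Level 3 Diploma"})
append(register, {"learner": "A. Jones", "award": "Level 2 Certificate"})
assert chain_valid(register)

# An attacker who edits an old record breaks the chain from that point on.
register[0]["record"]["award"] = "Level 7 Doctorate"
assert not chain_valid(register)
```

This is why such credentials "can only be altered by the AO": anyone else who changes a record produces a chain that instantly fails verification.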
A remaining risk is from identity theft – a primary objective behind the massive increases in sophisticated phishing attacks. New AI applications can easily generate fake images for false identity documents, but this loophole can be blocked by using biometric identification for students and learners, which allows confirmation that the person presenting a qualification as their own is the person to whom it was issued.
Of course, AI-enabled fraud is not limited to education provision and is driving transformations in cybersecurity across other service sectors. The general theme is that, to be effective, security must be adaptive, with continual review of standards to allow for new dark-side innovations, using AI to fight AI.
Also required is a concerted public awareness campaign, both about the prevalence of qualification fraud and the damage that it can do, and about the need for effective security measures. Despite all that has been written about AI, digital innovation, cybercrime and the dark web, a large proportion of people still use single passwords to protect their personal information assets.
A recent survey has revealed that, in Britain in 2024, the most commonly used password was “password”. If Generative Artificial Intelligence has an autonomous sense of humour, there will be terabytes of laughter reverberating in the ether.
By Martin Hall – Non-Executive Director, Advanced Secure Technologies, Emeritus Professor, University of Cape Town and former Vice-Chancellor, University of Salford