End-point assessment is complex and high-stakes – how do we achieve fairness and consistency?
Kevin the Teenager might just have had a fair point…
Remember him? One of Harry Enfield’s funniest, best-observed, most recognisable characters – even on film, trying so hard and failing so comically to ‘go large’ in Ibiza with his mate Perry.
Kevin the Teenager’s main catchphrase was “It’s SOOOO unfair!”, usually deployed to his parents and others when he simply didn’t fancy a request or chore. Parents, teachers and other adults smiled and winced in equal measure. We probably still do, whenever he or that catchphrase comes to mind.
The stakes involved when that self-pitying howl erupted were always pretty low. We knew he was silly and wrong, pursuing a different outcome that just plain didn’t matter very much.
Now think about it in terms of apprenticeships – in particular, end-point assessment.
No longer ‘low stakes’. Not so easily dismissed as mere adolescent whining. Not confined just to Kevin and his narrow world. And definitely not very funny. The concept of ‘fair’ assessment is crucial to the apprenticeship reforms.
It’s inextricably linked with the idea of an ‘end-point’, after the Gateway – instead of the old frameworks-based idea of continuous on-course assessment with lots of chances to get it right (or at least ‘right-er’).
Fairness has always mattered, of course: just as much for apprenticeships as any other qualifications and their processes. However, this new game is more complex, with multiple structures and hurdles to jump which all bring their own questions of evaluation, rigour and achievement.
The fundamental change from pass-fail assessment to a graded system is highly significant, and involves all sorts of issues around the idea of a fair process.
Add in the business of a terminal stage to a long learning process, with strong echoes of academic examinations, and there’s a much-increased ‘on-the-day’ load of stress for all involved. Then there’s also the contentious issue of extra costs.
So, the stakes are now much higher. And not just for the apprentice: a wider set of players have more ‘skin in the game’ – employers, training providers and end-point assessors/organisations themselves.
All this is profoundly people-based, even given more technological approaches such as online tests of knowledge. That’s right and proper. Nevertheless, the ‘human element’ brings risks – in those creating and administering all the different assessment instruments, those helping learners prepare for them, and those doing the assessing (never mind all the learners engaged in demonstrating their knowledge, skills and behaviours for a big prize at the end). And one of the big problems naturally concerns fairness.
Fairness has to be across both time and place.
That fairness has to apply to all learners, in every occupational area/standard, at all levels.
Yet being fair to an apprentice cannot mean being ‘soft’ or ‘sloppy’, in ways that compromise things for everyone. It starts at the individual level, though; and that’s where the impact is most immediate and most deeply felt – either when things are handled fairly, or they’re not.
What does this all mean for those organisations and individual professionals involved, then?
In terms of the ‘machinery’, Gateway and assessment instruments must guarantee both appropriate access and proper rigour for all apprentices. This means not ‘hamstringing’ them because of personal situations covered by the protected characteristics of the Equality Act. At the same time, any such even-handed, sensitive methods and criteria also need to foster and maintain stable, reliable ranking and discrimination across the various grading boundaries.
All this is initially a matter for those who set the rules: trailblazers and end-point assessment organisations. Usefully, there’s a long heritage in the awarding bodies’ experience with traditional academic exams. That won’t be sufficient, though.
In the end, it’s the people at the professional sharp end who will have to deal most closely, and influentially, with the business of fair assessment of apprentices – from day-to-day teaching and learning, through attempting the Gateway, to the actual end-point assessment activity.
The practical demands of vocational and occupational training will throw up many issues. These need considerable forethought, effective trialling and careful monitoring, coupled with constant review and revision as issues arise.
Many of those who have to do the teaching and assessing will need thorough training and regular support in all of this, often from scratch. And to repeat, they’re people. The human element, mentioned above, kicks in no matter what the systems are like.
This can involve – probably quite unintentionally in almost every case, and often unnoticed – a number of biases in trainers and assessors.
These might include:
- the ‘halo effect’ – assessing too positively, on the basis of a single good characteristic
- the ‘horns effect’ – bias the other way, over-emphasising one problem
- the ‘leniency effect’ – the assessor’s state of mind at the time of assessment leads to overly lenient judgements
- the ‘stringency effect’ – the reverse of the above
- the ‘recency effect’ – greater weight is given to recent or current experience than to earlier performance
All of this can be quite subtle. Superficially, these effects can even appear reasonable and acceptable. They’re ultimately all unfair, however. They can appear anywhere along the apprenticeship journey, especially when approaching the Gateway or tackling the assessment instruments.
So, what to do in order to try and ensure fairness for all, throughout?
First and foremost, recognise the potential for risks. This is hard, for both organisations and individuals. It means acknowledging likely weaknesses – professional and personal, existing or possible. Accepting that possibility in general, and then recognising the need to manage such risks and weaknesses as well as possible, are the interlinked keys for people and institutions.
Managing will usually lead to coping, and thus to minimising the risks of unfairness. There are, of course, many other kinds of risk and weakness involved in delivering apprenticeships and end-point assessment. This one matters just as much, and demands appropriate attention and effort from all concerned.
In the end, there’s nothing to prevent the Kevin the Teenagers of this world moaning – or anyone of any age or type who is so inclined. “It’s SOOOO unfair!” will no doubt continue to be thought, felt and expressed.
After all, apprentices are human (just like teenagers: really, they are!). But it’s our job to provide the best conditions of fairness ‘right across the patch’ in the delivery of apprenticeships, and the assessment of each individual’s performance against the standards involved.
Then we can justifiably say: “No, Kevin: you’re wrong.”
Mike Cooper, Senior Associate, Strategic Development Network (SDN)
About Strategic Development Network: SDN has worked with over 50 EPAOs (and their assessor teams) to set up and start delivering end-point assessment. Places are now also available on our Level 3 Award in Undertaking End-Point Assessment. We’ve also produced a set of recorded presentations covering the main end-point assessment methods and critical areas of practice. Feel free to join our mailing list or End-Point Assessment LinkedIn Group for more updates.