Making sense of technical assessment – Is standardisation the biggest challenge?
Anyone who has wrapped a wet towel round their head to devise a technical assessment will recognise the dilemmas:
- the rigour of standardised examinations, or the credibility of variable work-embedded assessment tasks;
- the holistic neatness of end-point assessments, or the formative and partial-credit functions of assessments in the course of a programme;
- a synoptic perspective on occupational competence in the round, or checking off a list of required competences.
There are few easy answers to these and many other dilemmas, and too little hard evidence on what works, but my new study for the Gatsby Charitable Foundation, “A WORLD WITHOUT MAPS? Assessment in Technical Education”, draws on international experience to provide some illumination.
Standardisation
Perhaps the biggest challenge is standardisation: is it not a precondition for assessment reliability that we ask all candidates to undertake the same tasks — for example, in a national assessment or examination? And yet, surely the best way of testing occupational competence is to see how a candidate performs in a real-world occupational task, where we test not just knowledge and skills, but also determination and imagination in the face of the unexpected.
And there’s the rub: such real-world, work-embedded tasks are profoundly resistant to standardisation.
Often this dilemma is diminished, if not resolved, through a blend of standardised and work-embedded assessment tasks. In Switzerland, practical assessment of apprentices combines two elements: first, a set of tasks that are the same for all candidates in the occupational field, usually conducted at the same time; and second, an individual practical project, agreed with the individual employer and completed at the workplace. Some assessments in the UK follow the same blended model.
Consistent grading
But for many practitioners, consistent grading is one of the most immediate challenges. Here in the UK, as in many other countries, policy and practice have championed what seems like common sense: that excellence should be recognised and rewarded with grades higher than a pass. But implementing that common sense is not so easy.
There is no common currency of meaning for grades like ‘merit’ and ‘distinction’, in contrast with the ‘pass’ threshold, which is underpinned by a shared notion of the boundary between those who are occupationally competent and those who are not.
In England, one apprenticeship assessment body awarded all 116 candidates a distinction. Reviews of experience in both Australia and the UK have found the criteria for higher grades to be elusive.
So while a transparent and accurate measure of excellence might be desirable, an inaccurate measure might be best avoided altogether.
For all of these dilemmas, part of the difficulty is that assessments serve multiple purposes: a knowledge-based exam may impress a university, but it will do little to persuade an employer of an individual’s ability to tackle the messy realities of day-to-day working life.
Perhaps the clearest lesson of all is that determining the main purpose of an assessment is the critical first step in its design.
Simon Field is a leading expert on the comparative analysis of technical education systems, and on how the UK’s technical and skills systems compare internationally.
Simon’s recent work includes studies for the Gatsby Foundation on higher technical education and apprenticeship in England, and a review of apprenticeship in Scotland for the OECD. Until 2016 he led the OECD’s work on technical education systems, and he has led or taken part in reviews in more than 30 countries across every inhabited continent.