As someone involved in building assessment technology, I always find it valuable to hear researchers' insights into how technology is affecting our sector, as we did at the recent AEA conference in Lisbon. It's worth explaining why these insights matter so much to people like me.
Why? Well, the tech community generally includes few real measurement specialists, and as technology becomes ever more deeply embedded in everything assessment people do, it is vital that the research community keeps an eye on the impact tech has. No one is better placed to test and challenge the technical revolution, to keep the bar high, and to make sure that the new tools, like the old ones, deliver valid and reliable assessment that society can trust.
And this does seem to be the focus of much research. Whatever the new testing innovation, how does it affect measurement? How do new onscreen item types perform? What is the effect of screen complexity on candidate behaviour? What constitutes an acceptable adaptive testing design? What should we make of the ocean of marking data?
In carrying out this important role, of course, the joy is that technology is generating more and more data to study, right down to keystrokes. It feels like the research could go on forever.
But I’d like to highlight other aspects of technology that are deeply relevant to assessment organisations. While they’re not as data-rich, they form part of the puzzle as we try to understand how to harness IT to best effect.
First, the assessment cycle typically starts with question development. In many settings, test development begins as a qualitative process and often isn’t informed by neat files of pre-test data. Instead, the quality of the test rests on processes defined by the assessment organisation and on the diligence and skill of the people developing it.
What has this qualitative process got to do with technology? Well, in common with all publishing processes, the test development cycle is increasingly mediated by technology. The quality of the final test will therefore be deeply affected by the quality of this technology-enabled experience.
For example, the way authors can view test coverage, access support, see exemplar materials and focus deeply on content will make a difference to final test quality. The extent to which reviewers have clean views of content and can access relevant data about the test will impact the effectiveness of the review process. And the ability of team leads to enforce quality checks on dispersed colleagues will affect the thoroughness of the quality process.
Even though hard data is harder to come by here, I think the test development process and the quality of each stage within it are worthy of research. The test development workflow embodies the quality model on which the final test often depends. But does technology help or hinder? Can technology reduce error and improve the quality of output? Some creativity will help too. In a digital workflow, what new checks become possible? Can technology deliver automated checks that improve the overall process? A simple illustrative sketch of what such a check might look like follows below.
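To make the idea concrete, here is a minimal sketch of the kind of automated check a digital authoring workflow might run, assuming a simple item bank in which each question is tagged with a syllabus topic and a mark value. The blueprint, field names and numbers are hypothetical and purely illustrative; they do not describe any real system's schema.

```python
from collections import Counter

# Illustrative blueprint: topic -> number of items the paper should contain.
# These topics and targets are hypothetical, not from any real specification.
BLUEPRINT = {"algebra": 4, "geometry": 3, "statistics": 3}
TARGET_TOTAL_MARKS = 50


def check_paper(items):
    """Return a list of human-readable problems found in a draft paper.

    `items` is a list of dicts like {"id": "Q1", "topic": "algebra", "marks": 5}.
    """
    problems = []
    counts = Counter(item["topic"] for item in items)

    # Coverage check: every blueprint topic must appear the required number of times.
    for topic, required in BLUEPRINT.items():
        found = counts.get(topic, 0)
        if found < required:
            problems.append(f"Topic '{topic}': {found} items, blueprint asks for {required}")

    # Tagging check: flag items tagged with topics outside the blueprint.
    for item in items:
        if item["topic"] not in BLUEPRINT:
            problems.append(f"{item['id']} is tagged with unknown topic '{item['topic']}'")

    # Total-marks check: the paper should add up to the intended total.
    total = sum(item["marks"] for item in items)
    if total != TARGET_TOTAL_MARKS:
        problems.append(f"Paper totals {total} marks, expected {TARGET_TOTAL_MARKS}")

    return problems


if __name__ == "__main__":
    draft = [
        {"id": "Q1", "topic": "algebra", "marks": 5},
        {"id": "Q2", "topic": "geometry", "marks": 10},
        {"id": "Q3", "topic": "mechanics", "marks": 8},
    ]
    for problem in check_paper(draft):
        print(problem)
```

Checks like these are trivial individually, but run automatically at each stage of the workflow they could catch exactly the kinds of coverage and totalling errors that currently rely on a reviewer's diligence.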
If anyone is writing on this theme, I would be delighted to hear more – please share.
The second thing I’d mention is organisational change. It’s good that more technology is available, but adopting it depends entirely on the capacity of exam bodies to manage change. If we were investigating assessment from a business perspective, we might see a series of real-time change experiments taking place in dozens of seemingly similar assessment bodies as they modernise their business processes with technology. We’d be asking how organisations in different settings use the same tools for different purposes, and why some seemingly similar bodies are good at change while for others it’s much more of a challenge.
So perhaps there are some wider areas for research, beyond the data-rich world created by e-testing and onscreen marking. Can technology strengthen and improve internal exam board processes and so benefit final test quality? And what is the alchemy that makes some assessment organisations adept at change, able to turn promising ideas and systems into effective practical solutions? We can all agree that delivering effective innovation in necessarily risk-averse organisations is hard. So what works? Researchers, we need your help!
About the author
David Haggie is the Managing Director of GradeMaker Ltd, a UK-based technology company specialising in the provision of online, enterprise-scale exam authoring systems. Prior to GradeMaker he was a leading figure in establishing RM’s assessment business, and previously worked for the BBC. GradeMaker has clients around the world.