By Rebecca Hamer, International Baccalaureate (IB)
What did joining the steering group of the AEA SIG eAssessment do for me and the IB? Well, it brought me new ideas and a new collaboration. Co-hosting a SIG-branded pre-conference workshop with Caroline Jongkamp (Cito) led to a collaboration on developing a new map of digital assessment items. In time, we hope to derive guidelines on when to use which kind of assessment item. Such a map and set of guidelines may help demystify digital test authoring, especially for educators who are just entering the digital assessment ecosphere.
Why would you even try to make such a map? Well, I can tell you from personal experience that without a map it is difficult terrain to navigate: so much is going on, all at once and in so many different places, that it takes a while to get a general feel for what is out there and what is possible. And it has been great to have Caroline as a local guide, with all her Cito experience. At this time we are validating and improving the first version of our IB-Cito taxonomy for digital assessment, and we hope to submit a paper later in 2021. For more details and the preliminary results, please watch this 15-minute video. Caroline will also be presenting more results at the AEA virtual conference in session F on Wednesday 3 November 2021.
So how did we get here? In 2018, the newly established AEA Special Interest Group on eAssessment wanted to build on the launch of the SIG at the 2017 Prague conference. The SIG steering group agreed that whatever activities we organized would have to have a clear link to digital assessment. One slide from another conference presentation, showing Kathleen Scalise’s 2009 taxonomy of item types, sparked an idea. What if we could introduce a more systematic way of looking at the world of digital assessment?
Earlier conversations had shown that for many organisations and educators considering going digital, the mainstay of digital assessment is the omnipresent multiple-choice question. The link is so strong that computer-based testing has become almost synonymous with multiple choice, sometimes combined with a response box for a word or number. But there is so much more, and once you start looking around, the variety of item types and presentation platforms becomes almost overwhelming. Besides other common item types, such as matching or adding simple objects to a diagram, which are used both on paper and on screen, there are exciting developments such as the gamification of learning and embedding assessment in virtual or augmented reality. But nobody tells you when to use what – or rather, everybody tells you to use their platform because it can do everything you need. And it is actually very difficult to see how all these different ways of presenting assessment items compare to each other. The most recent attempts to bring some order and system to the wild west of developing digital items were about ten years old, so perhaps it was time for a new attempt.
And so, this idea led to a successful pre-conference workshop, co-hosted by Caroline and myself at the Nijmegen conference, in which we introduced two approaches to organizing digital item types. Having secured a range of digital items for the participants to play around with, we asked them to allocate the items to these two taxonomies. However, the real question the participants had was “Are there any guidelines on when to use what?” That question led Caroline and me to design a follow-up workshop in Lisbon in 2019, where we asked participants to match item types to a new IB-Cito taxonomy that included a broader range of state-of-the-art digital item types. In addition, they had to determine the highest level of learning assessed, using an intuitive version of the revised Bloom’s taxonomy.
By this time, Caroline and I were getting quite interested to see whether the new taxonomy worked as we hoped. We asked the pre-conference participants for their consent to use their data, and we repeated the activity, again with participants’ consent, at an internal IB conference. We now have data for about 400 items, linking item types to assessment objectives, and we are preparing a paper on the proof of concept of using a taxonomy this way. As it is, the SIG has had a lasting impact on us, and perhaps – after we share our results – even on the digital assessment community.