Archive for June, 2009
There’s been an interesting discussion over the last couple of days on the Computer Assisted Assessment JISCMail list around delivery of multiple choice questions.
The question of how much time should be allowed for multiple choice questions produced a consensus of around ‘a minute per question plus a wee bit’ for a ‘typical’ MCQ, though difficulty level, the use of negative marking, or more sophisticated question types would all affect this. Sandra Gibson cited research by Case and Swanson suggesting that
good students know the answers and … select the right one in very little time (seconds), poor students try and reason out the answers which takes longer. It depends how long you want to give the poorer students to try to work it out, which then impacts on the validity, reliability and differentiation of your assessment.
Discussion broadened to cover the issue of sequential delivery, i.e. when a candidate is unable to return to a question and revise their response once they have moved on to the next question in the test. There were some compelling educational arguments in favour of this, for example, a series of questions building on or even containing the answers to previous questions; and less satisfactory justifications, such as technical limitations of the delivery software. Fascinatingly, a number of posters reported the same (sadly anecdotal) finding: when students do revise a response, they are more likely to change a correct answer to a wrong one than the reverse. It was also noted that tests which do not permit candidates to revise their responses require a shorter maximum time than those that do.
It’s a good discussion that’s still going on, so well worth following or contributing to!
The Guardian is reporting that Single Level Tests, the replacement for the controversial Sats exams which have been piloted over the last eighteen months, are plagued with ‘substantial and fundamental’ problems. The tests, which pupils can sit ‘when ready’ at any age between seven and fourteen as part of the wider personalisation agenda, produced what the Guardian calls ‘extraordinary results’, with primary school pupils consistently outperforming those in secondary school in certain areas.
This variation in performance across age groups is explained by the fact that the tests are based on the primary school curriculum: younger pupils have been taught it recently, while older pupils have forgotten much of it by the time they sit the tests. This is a fundamental flaw which raises a number of questions around assessment when ready and assessment on demand; it is ironic that a system intended to recognise individual needs and abilities could actually undermine individual performance.