It’s a job you really want and the first interview went well. Now they’ve got you sitting in front of a computer in a testing room. You have read the instructions and tried some example questions. The administrator has left the room and the ability test has begun. Seven minutes fifty-nine seconds, seven minutes fifty-eight seconds…there are 30 questions and you now have less than eight minutes in which to attempt them. Should you devote as much time as necessary to answering each question correctly? Or should you try to get through as many questions as possible, spending a limited amount of time on each before taking a guess and moving on?
This is a scenario countless candidates encounter every day. How people apportion their time in such circumstances is known as a pacing strategy, and it can have a big impact on how well they perform on ability tests.
When I teach people how to interpret ability results, I suggest that when results are lower than expected they look at both the number of questions answered and the proportion answered correctly. This helps identify scores deflated by respondents spending too long on early questions and consequently attempting too few. The limitation of this advice is that it cannot account for people who have realised they are short on time and started rapidly guessing answers. Research suggests this is exactly what many people do when running out of time on ability assessments (Wise & DeMars, 2006), and the trouble with this behaviour is that it reduces the accuracy of test results.
So what can you do to reduce the likelihood that candidates will adopt poor pacing strategies? Most good ability assessments already instruct respondents to focus on both speed and accuracy, but more explicit pacing instructions might also help. If candidates have eight minutes in which to answer 30 questions, they have approximately 16 seconds per question. This is only approximate, as easier questions may be answered more quickly, freeing up additional seconds for harder ones. While there is no guarantee that communicating this information will allow everyone to complete all questions, a general awareness of the rough time available per question may encourage better pacing strategies. Reasoning tests assess performance in an area believed to be indicative of an underlying ability; they are not direct measures of that ability. The most accurate assessments of that ability are therefore likely to occur when all candidates are given information that allows them to perform at their best. Although different tests will bring different and additional considerations, I would recommend letting all candidates know approximately how much time they have per question.
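The per-question time budget above is simple back-of-envelope arithmetic, and test administrators could compute it for any test before briefing candidates. A minimal sketch (the function name and rounding choice are my own, purely illustrative):

```python
def seconds_per_question(total_minutes: float, num_questions: int) -> float:
    """Rough per-question time budget, assuming an evenly paced attempt."""
    return total_minutes * 60 / num_questions

# The example from the text: 30 questions in eight minutes.
budget = seconds_per_question(8, 30)
print(f"Approximately {budget:.0f} seconds per question")
# Approximately 16 seconds per question
```

In practice this is only a guide: candidates who bank time on easy items can spend it on harder ones, which is exactly the flexibility the paragraph above describes.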
This post has only skimmed the surface of one of the many influences that can affect the accuracy of ability test results. What are your thoughts on pacing strategies and/or some of the other potential influences on test results (e.g., distractions, nerves, poorly written questions)?
Wise, S. L., & DeMars, C. E. (2006). An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement, 43, 19–38.
This post was originally written by OPRA Alumni Paul Wood.