Using Response Times for Item Selection in Computerized Adaptive Testing (RR-06-01)
Wim J. van der Linden
University of Twente, Enschede, The Netherlands

Executive Summary

In a computerized adaptive test, the correct and incorrect responses of test takers to the test questions (items) are used to select subsequent items that are more closely matched to their ability level. The computerized testing mode also allows us to record the amount of time test takers spend on each item, providing enhanced information about the test-taking experience of individuals and the characteristics of the items.
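To make the selection mechanism described above concrete, the following is a minimal, self-contained sketch of an adaptive testing loop. The two-parameter logistic response model, the maximum-information selection rule, the simulated item pool, and all function names are illustrative assumptions only; they are not taken from the report.

```python
import numpy as np

# Hypothetical sketch of a computerized adaptive testing (CAT) loop:
# after each response, the ability estimate is updated and the next
# item is chosen to match the current estimate.

rng = np.random.default_rng(1)

# Simulated item pool: discrimination a and difficulty b per item (assumed 2PL model).
a = rng.uniform(0.8, 2.0, size=200)
b = rng.normal(0.0, 1.0, size=200)

def prob_correct(theta, i):
    """2PL probability of a correct response to item i at ability theta."""
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def information(theta, i):
    """Fisher information of item i at ability theta (2PL)."""
    p = prob_correct(theta, i)
    return a[i] ** 2 * p * (1.0 - p)

def eap_estimate(items, responses, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate with a standard normal prior."""
    posterior = np.exp(-0.5 * grid ** 2)
    for i, u in zip(items, responses):
        p = prob_correct(grid, i)
        posterior *= p if u == 1 else 1.0 - p
    posterior /= posterior.sum()
    return float(np.sum(grid * posterior))

# Simulate one test taker with true ability 0.5 taking a 20-item adaptive test.
true_theta, theta_hat = 0.5, 0.0
administered, responses = [], []
for _ in range(20):
    # Select the unused item with maximum information at the current estimate.
    candidates = [i for i in range(len(a)) if i not in administered]
    item = max(candidates, key=lambda i: information(theta_hat, i))
    u = int(rng.random() < prob_correct(true_theta, item))   # simulated response
    administered.append(item)
    responses.append(u)
    theta_hat = eap_estimate(administered, responses)

print(f"final ability estimate after 20 items: {theta_hat:.2f}")
```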

The goal of this research was to explore the possibilities of using the response times on the items in an adaptive test to improve ability estimation and, hence, item selection. We used a version of the hierarchical framework for modeling responses and response times on test items that was developed in an earlier project for the Law School Admission Council. The framework has separate first-level models for the responses and the response times, and a second-level model for the joint distribution of the ability and speed parameters in the population of test takers.
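As a sketch of what such a framework can look like (the summary does not spell out the specific models; the three-parameter logistic, lognormal, and bivariate normal components shown here are common choices in this hierarchical framework and should be read as assumptions):

```latex
% First level: response model for item i and test taker j (3PL, assumed)
\Pr\{U_{ij}=1\mid\theta_j\}
  = c_i + (1-c_i)\,
    \frac{\exp\{a_i(\theta_j-b_i)\}}{1+\exp\{a_i(\theta_j-b_i)\}}

% First level: response-time model (lognormal, assumed)
f(t_{ij}\mid\tau_j)
  = \frac{\alpha_i}{t_{ij}\sqrt{2\pi}}
    \exp\!\Big\{-\tfrac{1}{2}\big[\alpha_i\big(\ln t_{ij}-(\beta_i-\tau_j)\big)\big]^2\Big\}

% Second level: population distribution of ability and speed (bivariate normal, assumed)
(\theta_j,\tau_j) \sim N\!\big((\mu_\theta,\mu_\tau),\,\boldsymbol{\Sigma}\big),
\qquad
\boldsymbol{\Sigma}
  = \begin{pmatrix}
      \sigma_\theta^2 & \rho\,\sigma_\theta\sigma_\tau \\
      \rho\,\sigma_\theta\sigma_\tau & \sigma_\tau^2
    \end{pmatrix}
```

In this sketch, U_ij and t_ij are the response and response time of test taker j on item i, theta_j and tau_j are the test taker's ability and speed, the remaining symbols are item parameters, and rho is the correlation between ability and speed referred to in the result below.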

The framework allowed us to predict the ability level of test takers from the response times recorded during the test. In an example with an adaptive version of the Law School Admission Test, we showed that, for a modest correlation of .20 between a test taker's ability and speed, the accuracy of the ability estimator for a 20-item adaptive test without the use of response times was approximately equal to the accuracy for a 10-item test in which the additional information in the response times was used to select the items.
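One way to see why the response times help, as a sketch under an assumed bivariate normal second-level model with ability-speed correlation rho: the recorded response times inform the estimate of the test taker's speed tau, and through the regression of ability on speed they shift and tighten the distribution used for the ability estimate that drives item selection.

```latex
% Regression of ability on speed under the assumed bivariate normal second-level model
\theta \mid \tau \;\sim\;
  N\!\Big(\mu_\theta + \rho\,\frac{\sigma_\theta}{\sigma_\tau}\,(\tau-\mu_\tau),\;
          \sigma_\theta^2\,(1-\rho^2)\Big)
```

The larger the correlation is in absolute value, the more the response times sharpen this distribution.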
