Capitalization on Item Calibration Error in Computer Adaptive Testing (CT-98-04)
by Wim J. van der Linden and Cees A. W. Glas, University of Twente, Enschede, The Netherlands
In adaptive testing, each subsequent item is often selected to have maximum information at the current estimate of the ability of the test taker. An important advantage of this procedure, compared to nonadaptive assessments, is that the same measurement precision can be realized at a shorter test length. However, because the properties of the items are estimated from previous response data, the adaptive procedure may capitalize on these estimation errors rather than the true properties of the items.
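The maximum-information selection rule described above can be sketched under a two-parameter logistic (2PL) IRT model, where the information an item contributes at ability θ is I(θ) = a²P(θ)(1 − P(θ)). The item pool and parameter values below are hypothetical, chosen only to illustrate the rule; the paper does not prescribe this particular implementation.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: I = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Index of the unadministered item with maximum information
    at the current ability estimate theta_hat."""
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf  # exclude items already given
    return int(np.argmax(info))

# Hypothetical three-item pool: discriminations a, difficulties b
a = np.array([0.8, 1.5, 1.2])
b = np.array([-1.0, 0.0, 1.0])
next_item = select_next_item(0.0, a, b, administered=set())
# At theta_hat = 0, the highly discriminating item with b = 0 wins (index 1)
```

Because information at the current ability estimate grows with a², the rule systematically favors items with large (estimated) discriminations, which is what opens the door to the capitalization problem studied here.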
The problem of capitalization on estimation error has been addressed before in the educational measurement literature, but only in the context of assembling a fixed test form; it has never been addressed for a test with an adaptive format. In this paper, the problem is explored for adaptive testing, both through an informal analysis and through an empirical study with simulated data.
As expected, adaptive procedures are most sensitive to errors in the estimated discrimination parameters. The simulation study also showed a clear preference by the adaptive procedures for items with larger errors in the estimated discrimination parameters. However, the effect of this capitalization on estimation error was minor compared to the effects of relative deficiencies of the item pool at certain intervals of the ability scale.
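The capitalization mechanism can be illustrated with a small simulation. Suppose all items have the same true discrimination but their estimates contain random calibration error; because information grows with the square of the estimated discrimination, selection by estimated information will, on average, pick items whose discrimination is overestimated. The pool size, error variance, and 2PL setup below are assumptions for illustration, not the design of the paper's study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 500
a_true = np.full(n_items, 1.0)                  # identical true discriminations
b = rng.uniform(-2.0, 2.0, n_items)             # difficulties spread over the scale
a_est = a_true + rng.normal(0.0, 0.2, n_items)  # calibration error in a

theta = 0.0
p = 1.0 / (1.0 + np.exp(-a_est * (theta - b)))
info_est = a_est**2 * p * (1.0 - p)             # estimated information drives selection

selected = np.argsort(info_est)[-20:]           # the 20 apparently most informative items
bias = (a_est - a_true)[selected].mean()
# bias comes out positive: selection capitalizes on overestimated discriminations
```

Since the true discriminations are identical here, any positive mean error among the selected items is pure capitalization on calibration error, mirroring the preference observed in the simulation study.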
The practical conclusion from this study is that good item calibration is an important prerequisite for adaptive testing. However, once the calibration sample of test takers is large enough, the composition of the item pool takes over as the more important factor in the quality of the adaptive procedure.