Adaptive Mastery Testing Using the Rasch Model and Bayesian Sequential Decision Theory (CT-99-02)
by Cees A. W. Glas and Hans J. Vos, University of Twente, Enschede, The Netherlands
In mastery testing, the core problem is to find optimal rules for deciding whether an examinee has or has not mastered certain subject matter. The decision is based on the examinee’s score on a test. Well-known applications of mastery testing include testing for pass/fail decisions, licensure, and certification. Over the last few decades, many researchers have studied the fixed-length mastery testing problem. With the advent of computers in testing, variable-length mastery testing has become feasible. In this form of testing, after each item is administered, a decision is made to declare mastery, to declare nonmastery, or to continue testing if the uncertainty is still too high. Another version of variable-length mastery testing uses small sets of items (testlets) rather than single items. The main advantage of variable-length mastery testing is that much shorter tests can be used for examinees who have clearly attained the mastery or nonmastery level, whereas longer tests are used for examinees for whom the decision is not clear-cut.
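The declare-mastery / declare-nonmastery / continue decision can be sketched with a deliberately simplified two-state model (a "master" and a "nonmaster" with fixed probabilities of a correct response); the response probabilities, prior, and stopping thresholds below are illustrative assumptions, not values from the report:

```python
# Minimal sketch of a variable-length mastery test: after each item,
# update the posterior probability of mastery and stop once it leaves
# an "uncertainty" band.  All numeric values here are hypothetical.

def posterior_mastery(responses, p_master=0.8, p_nonmaster=0.4, prior=0.5):
    """Posterior P(mastery | responses) for a two-state model in which
    a master answers correctly with probability p_master and a
    nonmaster with probability p_nonmaster."""
    post = prior
    for x in responses:  # x = 1 (correct) or 0 (incorrect)
        like_m = p_master if x else 1.0 - p_master
        like_n = p_nonmaster if x else 1.0 - p_nonmaster
        post = post * like_m / (post * like_m + (1.0 - post) * like_n)
    return post

def decide(responses, lower=0.10, upper=0.90):
    """Declare mastery, declare nonmastery, or continue testing."""
    post = posterior_mastery(responses)
    if post >= upper:
        return "mastery"
    if post <= lower:
        return "nonmastery"
    return "continue"
```

With these illustrative settings, a run of correct answers pushes the posterior above the upper threshold and ends the test early, while mixed responses keep the posterior in the uncertainty band and testing continues.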
This project focused on sequential mastery testing and adaptive sequential mastery testing. Two main types of variable-length mastery testing have been widely researched: sequential mastery testing, which includes the cost of administration in the decision to continue or discontinue testing, and adaptive mastery testing, which is based on optimal item or testlet selection but does not include the cost of administration in that decision. This paper describes a procedure for adaptive variable-length testing within a Bayesian decision-theoretic framework, adaptive sequential mastery testing, which combines the two: an adaptive method that incorporates cost into the decision making. In this research study, a number of computer simulations were performed comparing sequential and adaptive sequential mastery testing. For sequential mastery testing, there was a considerable decrease in the cost of administration, mainly as a consequence of the decrease in the number of items administered; the number of correct decisions remained stable despite this decrease in cost.
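To illustrate how the cost of administration enters the Bayesian decision, the sketch below maintains a grid posterior over ability under the Rasch model and compares the posterior expected losses of declaring mastery, declaring nonmastery, or paying a per-item cost to continue. This is a hypothetical simplification, not the report's exact procedure: only a one-step lookahead is used for the continuation risk, whereas the sequential problem is properly solved by backward induction over the remaining horizon, and the cutoff, losses, grid, and item cost are all assumed values.

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct response for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

GRID = [i / 10.0 - 3.0 for i in range(61)]  # ability grid on [-3, 3]

def update(post, x, b):
    """Posterior over GRID after observing response x (1/0) to an item
    of difficulty b."""
    w = [p * (rasch_p(t, b) if x else 1.0 - rasch_p(t, b))
         for p, t in zip(post, GRID)]
    s = sum(w)
    return [v / s for v in w]

def terminal_risks(post, cutoff=0.0, loss_fm=1.0, loss_fn=1.0):
    """Expected losses of declaring mastery / nonmastery now, under a
    threshold loss with mastery cutoff on the theta scale."""
    p_non = sum(p for p, t in zip(post, GRID) if t < cutoff)
    return loss_fm * p_non, loss_fn * (1.0 - p_non)

def choose_action(post, next_b, item_cost=0.05):
    """Pick the action with the smallest posterior expected loss.
    'Continue' is evaluated with a one-step lookahead: administer one
    more item of difficulty next_b, pay item_cost, then make the best
    terminal decision."""
    r_m, r_n = terminal_risks(post)
    p1 = sum(p * rasch_p(t, next_b) for p, t in zip(post, GRID))
    r_cont = (item_cost
              + p1 * min(terminal_risks(update(post, 1, next_b)))
              + (1.0 - p1) * min(terminal_risks(update(post, 0, next_b))))
    return min([("mastery", r_m), ("nonmastery", r_n),
                ("continue", r_cont)], key=lambda a: a[1])[0]
```

Starting from a uniform prior, the expected losses of both terminal decisions are near 0.5, so paying the small item cost to reduce uncertainty is optimal and testing continues; after a long run of correct responses the posterior concentrates above the cutoff and declaring mastery becomes the cheapest action.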
With regard to adaptive sequential mastery testing, several findings were worthy of note. First, if testlets rather than single items are used, the number of items per testlet is important: large numbers of small testlets produced more favorable results than small numbers of large testlets. In all cases, however, adaptive sequential testing yielded only minor improvements in increasing the number of correct decisions and decreasing the number of items administered. Summing up, it was shown that a combination of Bayesian sequential decision theory and item response theory (IRT) provides a sound framework for sequential mastery testing, but the additional merits of adaptive item selection should not be exaggerated.