An Evaluation of the Impact of Cloning on Item Parameters (CT-99-08)
Kimberly A. Swygert, David J. Scrams, Louis E. Thompson, Deborah E. Kerman

Executive Summary

The feasibility of a computerized version of the Law School Admission Test (LSAT) has been under investigation since 1995. One issue addressed in the feasibility studies is whether the item pool can be greatly expanded without a comparable increase in cost. One method that shows promise is item cloning. The definition of what it means to “clone” an item is elastic: the method can be defined extremely narrowly, such as changing only non-essential words in an item, or so broadly that the line between cloning an item and creating an entirely new one is blurred. A related possibility is to reuse previously disclosed items; for the purposes of this study, such reuse is also called cloning.

However new items are created, the question remains of what their statistical characteristics will be. One reason items are so expensive is that, once created, they must be pretested on a group of test takers similar to those who will see the items operationally. This pretesting phase is necessary so that unacceptable items (e.g., items that are easily guessed, items that do not discriminate among test takers, or items that show differential functioning) may be identified and withheld from operational use. If a prediction model showed how a given type of item clone would behave, the item's characteristics could be known with some certainty ahead of time, and the number of test takers required to pretest the item could perhaps be reduced.

In this study, previously administered LSAT items were cloned using several different techniques and then readministered in an operational paper-and-pencil (P&P) setting. The cloning techniques fell into one of three categories, based on the type of change to the item: (1) no change, meaning the item was simply readministered after having been disclosed; (2) order changes, meaning the options for each item, or the items themselves, changed order within the section; and (3) content changes, meaning minor wording changes were made to the items. Cloned forms were created for each of the three LSAT item types: Reading Comprehension (RC), Logical Reasoning (LR), and Analytical Reasoning (AR).

The data on each item consisted of the cloning technique that was used and the item parameters for both the original and cloned versions of the item. The analyses consisted of using the item parameters to calculate the probability of a correct response across the ability range. These probability functions were visually inspected for differences, and two statistics were calculated to quantify the difference between the original and cloned functions.
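The report does not specify the response model or the two difference statistics, but a minimal sketch of this kind of comparison, assuming the common three-parameter logistic (3PL) model and simple area-between-curves summaries in place of the report's actual statistics, with entirely hypothetical parameter values, might look like:

```python
import numpy as np

def icc_3pl(theta, a, b, c, D=1.7):
    """3PL item characteristic curve: probability of a correct response
    at ability theta, given discrimination a, difficulty b, and
    lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

# Hypothetical parameters for an item's original and cloned versions;
# here the clone is made easier by lowering the difficulty parameter b.
theta = np.linspace(-4.0, 4.0, 801)              # ability grid
p_orig = icc_3pl(theta, a=1.0, b=0.5, c=0.2)
p_clone = icc_3pl(theta, a=1.0, b=0.1, c=0.2)

# Two simple summaries of the difference between the curves
# (illustrative stand-ins, not the report's statistics): the signed and
# unsigned area between them, approximated by a Riemann sum.
dx = theta[1] - theta[0]
signed_area = float(np.sum(p_clone - p_orig) * dx)
unsigned_area = float(np.sum(np.abs(p_clone - p_orig)) * dx)
```

A positive signed area here indicates that the cloned version is easier across the entire ability range, which is the pattern the report describes for the disclosed LR and RC items.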

Previously disclosed items of all three LSAT item types were administered in this study. Seven additional RC items were administered; these items had been previously administered but not disclosed (the reading passage to which they referred had been disclosed, however). The results indicate that reusing previously disclosed items is not promising. For all three item types, some cloned items had parameters greatly different from those of their original counterparts. On the LR and RC sections, items consistently became easier for test takers to answer; two of the non-disclosed RC items changed in this way as well. The AR section contained the greatest number of items showing a large change in parameters (5 out of 23), but the direction of change was inconsistent across items. It therefore seems that disclosed items of all three types may be recognizable to test takers; on the AR section, it is also possible that lack of model fit or context effects contributed to the instability of the parameter estimates.

The other cloning techniques show more promise. Reordering the options of RC and LR items, and reordering the items within a set on the RC section, did not affect the parameters of most items. Moving sets of items around in a section appears to have repercussions only if a set is moved to a very different location: swapping the middle two sets of a section had no effect, whereas sets that moved from first to last position, or vice versa, were affected. Consistent with a speededness effect, sets that moved from the beginning of the test to the end became more difficult, while sets moved from the end to the beginning became easier. Finally, the minor content changes made to AR items resulted in item parameters that were not largely different from the original parameters.

One caveat is necessary for these results. The question is not simply whether items cloned in this fashion, with minor changes or no changes, are currently unrecognizable to test takers as items that have previously appeared on the LSAT. When these data were collected, test takers were most likely unaware that any items they saw, even within a variable section, had appeared on a previous test and had possibly been published as part of test preparation material. Given the zeal with which many test takers and test preparation companies analyze the LSAT forms used each year, if the use of cloned items with only minor modifications becomes common knowledge, test takers will likely change how they prepare: old items will not just be studied but memorized. If that happens, not just the disclosed items but all of the items cloned in the ways described here may become recognizable to test takers, and the items will become easier to solve.
