Survey and Usability Testing Results of the Analytical Reasoning Computerized Prototype at the 2000 Law School Recruitment Forums (CT-02-01)
Kimberly A. Swygert, Susan P. Dalessandro, and Mike Contreras

Executive Summary

This paper reviews the results of usability testing of two revised versions of an Analytical Reasoning (AR) prototype during the 2000 Law School Forum season, as well as the results of the survey data collected during that season. The first of the two prototypes reviewed in this paper, AR Prototype 1, is a slightly revised version of the AR prototype that was developed and tested during the 1999 forum season. The second prototype, AR Prototype 2, features an interface that was completely redesigned from that of AR Prototype 1.

AR Prototype 1 follows the same basic format as the 1999 AR prototype. First, a set of AR conditions is presented along with the first five items. Next, the user may move to a second set of five items with a second set of AR conditions, and finally the user moves to a scoring screen. While certain features of the 1999 AR prototype were revised for AR Prototype 1, several functions are the same in both prototypes:

(1) Users move through item sets with tab controls and between sets with labeled buttons.
(2) When the user answers an item, a box appears around the option text box, an icon appears on the tab, and the letter button appears depressed.
(3) Answers can be ruled out by pressing rule-out buttons to the right of the option text box, which produces grayed-out response option text and a depressed rule-out button.
(4) The letter and rule-out buttons can be toggled on and off. Only one letter button can be depressed at a time, but all five rule-out buttons may be depressed at once; each letter button toggles off its corresponding rule-out button, and vice versa (see the sketch following this list).
(5) Text may be highlighted using the mouse.
(6) A timer that displays the time remaining may be hidden or shown.
(7) Information on the screen orients the user as to which item and set are currently showing.
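The answer-selection and rule-out toggle behavior described in items (2) through (4) can be summarized as a small state machine. The Python sketch below is purely illustrative; the class and method names are hypothetical, and it is not a description of how the prototypes were actually implemented.

```python
class AnswerState:
    """Illustrative model (hypothetical names) of a single item's answer
    and rule-out state, following the behavior described for AR Prototype 1."""

    OPTIONS = ("A", "B", "C", "D", "E")

    def __init__(self):
        self.selected = None      # at most one letter button may be depressed
        self.ruled_out = set()    # any number of rule-out buttons may be depressed

    def toggle_letter(self, option):
        """Pressing a letter button selects that option (or deselects it if it
        was already selected) and toggles off that option's rule-out button."""
        if self.selected == option:
            self.selected = None
        else:
            self.selected = option
            self.ruled_out.discard(option)

    def toggle_rule_out(self, option):
        """Pressing a rule-out button grays the option out (or restores it);
        ruling out the currently selected option also deselects it."""
        if option in self.ruled_out:
            self.ruled_out.discard(option)
        else:
            self.ruled_out.add(option)
            if self.selected == option:
                self.selected = None

    @property
    def answered(self):
        """True when the item has an answer (the box, tab icon, and depressed
        letter button would all be shown)."""
        return self.selected is not None


# Example: rule out B, answer with C, then switch the answer to B.
state = AnswerState()
state.toggle_rule_out("B")
state.toggle_letter("C")
state.toggle_letter("B")      # clears B's rule-out mark and replaces C as the answer
assert state.answered and state.selected == "B" and not state.ruled_out
```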

The AR Prototype 2 design represents a radical shift in appearance, but the functionality remains mostly the same as in AR Prototype 1. The biggest difference is that Prototype 2 presents a mandatory tutorial and an optional practice item before the first set of AR conditions appears. The tutorial takes the user through each function of the prototype in order of importance: answering items, ruling out responses, moving to new items, moving to the next set of items, highlighting text, checking the timer, and moving to the score page. Users must move through every tutorial screen in order on their first pass, but they can repeat any or all pages of the tutorial if they need more practice.
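The gated navigation described above (every tutorial screen must be viewed in order on the first pass, after which any page may be repeated) can be sketched roughly as follows. This is a minimal sketch with a hypothetical screen list and class name, not the prototype's actual design.

```python
class TutorialNavigator:
    """Minimal sketch (hypothetical names) of the AR Prototype 2 tutorial flow:
    a forced sequential first pass, after which any page may be revisited."""

    SCREENS = [
        "answering items",
        "ruling out responses",
        "moving to new items",
        "moving to the next set of items",
        "highlighting text",
        "checking the timer",
        "moving to the score page",
    ]

    def __init__(self):
        self.position = 0
        self.first_pass_complete = False

    def next_screen(self):
        """Advance one screen at a time; the mandatory first pass ends once
        the last screen has been viewed."""
        if self.position + 1 < len(self.SCREENS):
            self.position += 1
        else:
            self.first_pass_complete = True
        return self.SCREENS[self.position]

    def repeat(self, index):
        """Jump back to any screen for more practice, but only after the
        mandatory first pass has been completed."""
        if not self.first_pass_complete:
            raise RuntimeError("The tutorial must first be completed in order.")
        self.position = index
        return self.SCREENS[self.position]
```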

AR Prototype 1 was tested at the 2000 Washington, DC Forum; AR Prototype 2 was tested at the 2000 New York City (NYC) Forum. Both AR prototypes were put through formal usability testing. The usability tests were designed to discover any problems the users (the forum attendees who volunteered to participate) had when attempting to use the prototypes without outside assistance. In addition, two surveys were used during the 2000 forum season. Each survey asked four demographic questions and five questions regarding the user’s computer usage and comfort with various types of hardware and software. The survey used for the second AR prototype (administered at the NYC Forum only) contained three additional computer access and comfort questions, 14 questions regarding computerized tests, and three questions regarding the need for accommodated testing conditions due to a disability.

The demographic results show a slight undersampling of Caucasians and an oversampling of Asian Americans, African Americans, and Hispanics in the survey group as compared to 2000 LSAT test takers. These larger minority percentages are a plus for the usability tests and survey collection, as they make it easier to detect any subgroup differences in attitude and computer usage.

The computer usage and comfort results are encouraging for CAT development. At least 78% of users reported using a computer daily, and fewer than 5% of users rated themselves as somewhat or very uncomfortable when using a computer. Respondents indicated less comfort with reading and comprehending lengthy text onscreen. There appeared to be a slight sex difference in comfort levels: female respondents were more likely than male respondents to say they were very comfortable using a personal computer, and females were also more likely to be very or somewhat comfortable reading lengthy text onscreen.

Almost half of the NYC Forum respondents had taken a computerized test. However, only about three-fourths of the respondents felt very or somewhat comfortable with the idea of taking the LSAT on computer. A large percentage of these respondents rated the different aspects of a computerized LSAT listed in the survey as extremely useful. If both a paper-and-pencil (P&P) and a computerized version of the LSAT were offered, only 65.9% reported that they would be much more likely, somewhat more likely, or equally likely to take the computerized version, which leaves just over a third of the respondents preferring the P&P test.

The usability tests for AR Prototype 1 showed that the users had no trouble reading the screen, were able to locate the conditions and items, and were able to move between screens. These results, among others, indicate that the interface for AR Prototype 1 was very readable and very easy to navigate. The majority of users were able to select correct answers and rule out incorrect answers, although almost a quarter of the users did not use the most efficient procedure when asked to select a correct answer after having ruled out answers. This point would need to be made clear in a tutorial so that users do not waste time when using the optional rule-out function.

While using AR Prototype 2, a majority of the users thought that the three indications that an item had been answered (the box around the option text, the icon on the tab, and the depressed letter button) were sufficient, and that the purpose of the letter button was clear. Usage of the rule-out button was clear to more than half of the users, and more than three-quarters of the users said they would use the rule-out function operationally. The majority of users found it easy to understand that they had moved to another item or to another set. Overall, the responses indicate that the interface for AR Prototype 2 was fairly user-friendly and that none of the basic features were problematic for the users.

AR Prototype 2 was the only one of the two prototypes to contain a tutorial. More than half of the users did not need to repeat any tutorial section, and the users overwhelmingly thought the tutorial made things straightforward. Almost a quarter of the users tried the practice question, and 90.3% of those users found it helpful. Although this was not a high-stakes testing situation, the results from the usability test suggest that the tutorial was sufficient to explain the main features of the interface and gave users the confidence to attempt the items.

It is promising to note that the users in these samples, who represent our potential LSAT takers, appear to be extremely computer-literate and comfortable both with computers and with reading on computer screens; in fact, over the past five years of research in prototype development, the LSAT CT (computerized test) user group has evolved to the point where users can be expected to be proficient with computer basics and to need no instruction on that point. For these users, both AR prototypes appeared to be fairly easy to use.

It also appears that the tutorial provided with the AR Prototype 2 usability test introduced the users to all the necessary functions and seemed to make them as clear as possible. However, users complained that the tutorial was too wordy, and a few functions, most notably the rule-out and remove-rule-out functions, were unclear even with lengthy explanation. While a majority of the users did not feel the need to repeat any tutorial sections, an even higher percentage did not feel the need to work the practice item, which served only to reinforce previously introduced concepts, before moving on to the test. In addition, the users were split as to whether the tutorial and the practice item should be mandatory in a testing situation, which may indicate a difference of opinion about the tutorial’s usefulness. These initial results are promising but also indicate that more research is needed in this area.

