Factors in Performance on the Law School Admission Test (SR-93-04)
Kenneth M. Wilson and Donald E. Powers

Executive Summary

Typically, a test—such as the Law School Admission Test (LSAT), the primary focus of this study—employs more than one type of item and/or different kinds of content for a given item type. For these multi-faceted measures, the extent to which the different types of test questions tap different aspects of a particular ability, or in fact tap several different abilities, needs to be assessed. In the case of the LSAT, as presently constituted, this issue does not appear to have been resolved. As described in Section 1 of the main report, this study was undertaken to clarify the internal structure of the LSAT and to shed light on the nature of the ability or abilities measured by the three types of test items that make up the LSAT—that is, reading comprehension, logical reasoning, and analytical reasoning.

The study drew on data for two different forms of the LSAT, namely, the June 1991 form and the October 1991 form. For a broader perspective, the study also used data from the same two LSAT administrations for a subsample of LSAT examinees identified through file-matching procedures as having taken the Graduate Record Examinations (GRE) General Test between October 1988 and December 1991, inclusive. The time interval between the GRE and LSAT testing occasions, without regard to order, ranged from five days to 36 months.

Items of the same types as those used in the current version of the LSAT have been included in all editions of the GRE General Test since October 1981. Thus, it was possible to draw on the substantial body of evidence generated in the GRE context regarding relationships among these item types. This research is reviewed in Section 2 of the report. Section 2 also includes information highlighting (a) strong similarities in the “surface characteristics” of the three item types, as reflected in testing program publications’ descriptions of the item types and in illustrative items from the LSAT and the GRE, as well as (b) differences between the LSAT and the GRE with respect to the internal organization of test items, number of sections, and so on.

Within the joint LSAT/GRE sample it was possible to

(a) conduct parallel within-test analyses of correlations among the three item types,

(b) assess time-related attenuating effects in patterns and levels of between-test correlations involving scaled scores and specially computed item-type subscores for the three common item types, and ultimately, by using combined data from both the LSAT and the GRE,

(c) assess the extent to which patterns of correlations involving parcels of items of three types common to both tests in the combined LSAT/GRE sample were similar to those identified in the two separate within-test analyses.

A series of related analyses was undertaken.

First, separate analyses were made of within-test correlations involving primarily scores on item-type parcels (sets of four to six items of the same type) using

(a) LSAT data for a general sample of LSAT examinees and for the selected sample, alluded to above, of LSAT examinees who also took the GRE General Test (see Section 3), and

(b) GRE data for the selected LSAT/GRE sample, as reported in Section 4.

A unique feature of the study was the method of pooling different items of the same type and position across multiple forms to create parcels used to generate correlations for analysis.
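The parceling step can be sketched in code. The following is a minimal illustration only, using synthetic 0/1 item responses and simple consecutive grouping rather than the study's actual pooling of items by type and position across forms; all names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0/1 responses: 500 examinees by 12 items of one type.
# (The study pooled items of the same type and position across forms;
# here, consecutive items are simply grouped for illustration.)
responses = rng.integers(0, 2, size=(500, 12))

# Form three parcels of four items each by summing item scores.
parcels = responses.reshape(500, 3, 4).sum(axis=2)

# Correlations among parcel scores are the input to the factor analyses.
parcel_corr = np.corrcoef(parcels, rowvar=False)
print(parcel_corr.shape)  # (3, 3)
```

Parcels of four to six items yield more reliable scores than single items, which stabilizes the correlation matrix submitted to factor analysis.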

Findings were generally parallel for the separate exploratory within-test factor analyses based on parcels of items of the three types common to both tests. In each analysis, when two factors were extracted, the logical reasoning and reading comprehension parcels defined one factor, while the analytical reasoning parcels defined the other. In the case of the LSAT, findings for the selected sample who took the GRE were in all essential respects similar to those for the general sample.

These parallel within-test findings suggested that, in both tests, these item types measure psychometrically distinguishable aspects of reasoning ability: aspects of general or informal reasoning, defined by the reading comprehension and logical reasoning items, on the one hand, and aspects of formal, deductive reasoning, defined by the analytical reasoning items, on the other.

Next, analyses were made of between-test correlations involving reported, scaled scores and specially computed item-type section scores. These analyses (described in detail in Section 5) were designed in part to assess effects associated with the fact that the LSAT observations and the GRE observations were collected on different testing occasions separated by intervals ranging from less than 10 days to 36 months. The between-test analyses included assessment of time-related effects on between-test correlations involving the three item types common to both tests.

In these analyses, profiles of correlations between LSAT item types and their GRE counterparts, computed for subgroups with shorter and longer between-test intervals (less than 10 days versus 19 to 36 months), were found to be strikingly similar in pattern. They differed only in level.

Results of the analysis of between-test correlations—observed and corrected for attenuation due to the presence of measurement error—involving item types common to both tests were consistent with the findings of the separate within-test factor analyses. In both instances findings suggested psychometrically distinguishable differences between aspects of general or informal reasoning measured by reading comprehension and logical reasoning item types, on the one hand, and aspects of formal, deductive reasoning tapped by the analytical reasoning items, on the other.
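The correction for attenuation referred to above is the standard psychometric adjustment that divides an observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch follows; the numerical values shown are illustrative only, not the study's:

```python
def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Disattenuate an observed correlation r_xy given the
    reliabilities r_xx and r_yy of the two measures."""
    return r_xy / (r_xx * r_yy) ** 0.5

# Illustrative values only: an observed between-test correlation
# of .60 with subscore reliabilities of .80 and .75.
print(round(correct_for_attenuation(0.60, 0.80, 0.75), 3))  # 0.775
```

Because measurement error depresses observed correlations, the corrected values indicate how strongly the underlying abilities themselves are related.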

Last in the series of analyses, as described in Section 6, intercorrelations among the combined set of LSAT and GRE logical reasoning, reading comprehension, and analytical reasoning item-type parcels were analyzed; the sets of LSAT and GRE parcels used in the two separate analyses were combined to produce these intercorrelations. The eigenvalues suggested that two factors, one defined primarily by the LSAT and GRE logical reasoning and reading comprehension parcels and the other by the LSAT and GRE analytical reasoning parcels, sufficiently characterized the correlational structure for the combined parcels.
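The eigenvalue-based judgment about the number of factors can be illustrated with a toy correlation matrix whose values are chosen to mimic a two-factor structure of this kind; the matrix is hypothetical, not the study's data:

```python
import numpy as np

# Toy correlation matrix for four parcels: two verbal-reasoning
# parcels (logical reasoning, reading comprehension) correlating .6
# with each other, two analytical reasoning parcels correlating .6
# with each other, and weaker (.2) cross-type correlations.
R = np.array([
    [1.0, 0.6, 0.2, 0.2],
    [0.6, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.6],
    [0.2, 0.2, 0.6, 1.0],
])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
# Kaiser-style screen: the count of eigenvalues greater than 1
# suggests how many factors to retain; this matrix yields two.
n_factors = int((eigvals > 1).sum())
print(n_factors)  # 2
```

With stronger cross-type correlations the second eigenvalue would shrink toward 1, and a single general factor would suffice; the two-factor outcome reflects the relative distinctness of the analytical reasoning parcels.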

Some Implications

In essence, the study findings suggest a common underlying structure for the logical reasoning, reading comprehension, and analytical reasoning item types, regardless of the test (LSAT or GRE) in which they are used. The structure appears to involve two dimensions. One dimension is represented by the logical reasoning and reading comprehension item types, which measure general reasoning skills that appear to be associated with the analysis of extended discourse. The other dimension represents a more narrowly constrained, formal-deductive aspect of reasoning, measured by the analytical reasoning item type.

For the LSAT, which currently reports only a single score to summarize performance involving three different item types, perhaps the most central conclusion supported by the findings is that

  • the logical reasoning, reading comprehension, and analytical reasoning item types included in the LSAT have the potential to generate more information than is now being conveyed by the single LSAT scaled score.

That potential is suggested by the finding that the LSAT item types measure psychometrically distinguishable aspects of reasoning ability. This raises the attendant possibility that the information provided by item-type subscores might prove to be useful for predictive or diagnostic purposes in the LSAT context.

Questions concerning the differential and/or incremental validity of subscores that might be computed are of immediate interest. For example, one score based on the logical reasoning and reading comprehension items and a second score based on the analytical reasoning items would be consistent with the basic two-factor outcomes. Would the use of two scores, or perhaps a score for each LSAT item type, result in improved prediction of first-year law school grades generally, of grades in particular courses or clusters of courses, or of grades in second-year courses?

A study of the comparative validity of LSAT subscores such as those noted above, for predicting such criteria—in general samples, and in samples defined by ethnic group membership, age, gender, undergraduate major, and so on—would contribute toward resolution of academically, psychometrically, and socially important “differential validity” questions in the current LSAT context.

Closely related to the foregoing are questions concerning incremental validity. For example, does differential weighting of item-type subscores result in better prediction of pertinent criteria (for example, grades in successive years of legal education) than is provided by a total scaled score based on a simple sum of section scores, in general samples of law students? In subgroups defined by ethnicity, gender, age, undergraduate major, and so on?

Study findings indicate that between-test correlations involving LSAT reading comprehension, logical reasoning, and analytical reasoning subscores and corresponding GRE subscores are differentially resistant to time-related attenuating influences. These findings suggest the possibility of differences in relative stability for the abilities involved. In the present study, inferences about relative stability, of course, are based on between-test correlations for the respective item-type subscores. It seems important to make a direct assessment of test-retest stability, short- and long-term, for LSAT item-type subscores.

These and other study findings have incidental implications for the GRE context. For example, the findings tend to confirm and extend conclusions based on GRE studies, namely, that logical reasoning items and analytical reasoning items are measuring psychometrically distinguishable aspects of reasoning ability. Thus, research questions such as those raised above for the LSAT, also have implications for continued research in the GRE context.

That LSAT logical reasoning, reading comprehension, and analytical reasoning items and their GRE counterparts have a common factor structure is important because this finding suggests that future research involving these item types in the LSAT context can draw on relatively extensive GRE research findings (such as those summarized briefly in Section 2 of the main report) both for formulating working hypotheses and for evaluating LSAT research outcomes. It also follows that as LSAT research findings involving these item types accrue, the LSAT findings in turn can usefully inform research in the GRE context.

Jointly planned research projects involving item types common to both tests might expedite attainment of objectives common to both testing programs—for example, clarifying distinctions between logical reasoning and reading comprehension.

In this connection, given the observed affinity between logical reasoning and reading comprehension—combined with hints of distinctiveness—it is noteworthy that the version of the logical reasoning item type considered in this study for both the LSAT and the GRE involves heavy reading comprehension requirements.

Accordingly, logical reasoning and reading comprehension are “linked” to some degree by heavy reading demands. To the extent that “logical reasoning” can be measured with item types that make limited reading demands, cooperative research projects involving experimental logical reasoning and analytical reasoning items, together with operational items from both tests, along lines followed in GRE research, may make progress in clarifying distinctions between “logical reasoning” and “reading comprehension.”

The results of this study, which drew on data from both tests, suggest that in reaching decisions regarding score definition and score reporting, both testing programs might benefit from research projects capitalizing on the common structure that appears to underlie performance on the three types of items that are common to both the LSAT and the GRE General Test.

