Speed as a Variable on the LSAT and Law School Exams (RR-03-03)
William D. Henderson, Indiana University School of Law, Bloomington

Executive Summary

This study examines the hypothesis that test-taking speed is a variable that affects performance on both the Law School Admission Test (LSAT) and actual law school exams. The methodology of this project is similar to that of LSAT validity studies, with one important exception: student performance is disaggregated into three distinct testing methods with varying degrees of time pressure: (1) in-class exams, (2) take-home exams, and (3) assigned papers. Correlation coefficients for both the LSAT and undergraduate grade-point average (UGPA) are then calculated for each method. If test-taking speed affects both the LSAT and law school exams, the LSAT should be a relatively robust predictor of scores on in-class exams, with lower correlations on take-home exams and papers. A second phase of the study uses the disaggregated data to construct a model of law school performance (disaggregated model) that includes testing method as a variable. The predictive power of the disaggregated model is then compared to that of the traditional LSAT/UGPA regression equation (aggregated model). This methodology offers a preliminary assessment of whether testing method (defined by the time allowed) is a variable that affects law school performance and the predictive power of the LSAT.
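To make the disaggregation step concrete, the sketch below illustrates how per-method correlation coefficients might be computed from a per-course grade file. It is a minimal illustration; the data layout and column names (student_id, lsat, ugpa, grade, method) are hypothetical and do not correspond to the study's actual data set.

```python
# A minimal sketch of the disaggregation step. Assumes one row per
# (student, course) with hypothetical columns: student_id, lsat, ugpa,
# grade, and method ("in_class", "take_home", or "paper").
import pandas as pd

def predictor_correlations_by_method(courses: pd.DataFrame) -> pd.DataFrame:
    """Correlate LSAT and UGPA with mean grades, separately by testing method."""
    rows = []
    for method, group in courses.groupby("method"):
        # Average each student's grades within this testing method.
        per_student = group.groupby("student_id").agg(
            gpa=("grade", "mean"),
            lsat=("lsat", "first"),
            ugpa=("ugpa", "first"),
        )
        rows.append({
            "method": method,
            "n_students": len(per_student),
            "r_lsat": per_student["gpa"].corr(per_student["lsat"]),
            "r_ugpa": per_student["gpa"].corr(per_student["ugpa"]),
        })
    return pd.DataFrame(rows)
```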

The sample is composed of recent graduates from two U.S. law schools: an elite national law school with high LSAT and UGPA scores and a regional law school with mid-range LSAT and UGPA scores. The primary difference between the two samples is a relatively low correlation between grades and LSAT scores in the national law school sample (.194) versus a fairly strong correlation between grades and LSAT scores in the regional law school sample (.446). A more robust LSAT correlation among transfer students at the national law school (.476) suggests that the weak LSAT correlation for the sample as a whole is probably the result of range restriction. Notwithstanding this difference, the most significant commonality between the data sets is that the LSAT has its greatest predictive power on in-class exams. The correlation coefficients for the LSAT were significantly lower as we moved from in-class exams to take-home exams and papers. Although the predictive power of UGPA also varied, it was generally more stable across all three testing methods. Moreover, in both samples, the LSAT and UGPA correlation coefficients for each testing method were fairly stable between year 1 and years 1–3.* Overall, these results suggest that testing method is a variable that affects the ordinal ranking of law school performance and, therefore, the predictive power of the LSAT and UGPA. They are also consistent with the study's hypothesis on test-taking speed.
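Because range restriction is offered as the likely explanation for the weak LSAT correlation at the national law school, the standard correction may be worth illustrating. The sketch below applies the conventional Thorndike Case 2 formula for direct range restriction; the numeric inputs are illustrative placeholders, not values from either sample.

```python
# A sketch of the standard Thorndike Case 2 correction for direct range
# restriction: one conventional way to gauge how much a truncated LSAT range
# depresses an observed correlation. Numbers below are illustrative only.
import math

def correct_for_range_restriction(r_obs: float, sd_restricted: float,
                                  sd_unrestricted: float) -> float:
    """Estimate the unrestricted correlation from a range-restricted sample."""
    k = sd_unrestricted / sd_restricted  # k > 1 when the sample is truncated
    return (r_obs * k) / math.sqrt(1 + r_obs**2 * (k**2 - 1))

# Example: an observed r of .19 in a sample whose LSAT standard deviation is
# half that of the applicant pool implies a substantially larger latent r.
print(correct_for_range_restriction(0.19, sd_restricted=3.0, sd_unrestricted=6.0))
```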

In the second phase of analysis, the disaggregated model was a better predictor of law school performance only in the national law school sample. In that sample, testing method actually accounted for a larger share of the variance than the LSAT (3.6% versus 2.5%). In contrast, the disaggregated model provided virtually no improvement in the regional law school sample. A careful examination of both samples suggests that the disparity between the two schools may be partially explained by significant differences in the proportions of the three testing methods. For example, the national law school utilized a much larger percentage of take-home exams and papers than the regional law school (35.2% versus 19.9%). Because the LSAT had its greatest predictive power on in-class exams, a larger proportion of take-home exams and papers reduced the importance of the LSAT as a predictor and increased the relative importance of UGPA. In both samples, UGPA emerged as a stronger predictor of performance on take-home exams and assigned papers. However, because of the large proportion of in-class exams in the regional law school sample (74.6% for years 1–3), the LSAT tended to dominate both the disaggregated and aggregated models.
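The phase 2 comparison can be summarized as a hierarchical regression: fit the traditional aggregated model, then add testing method as a categorical predictor and examine the change in explained variance. The sketch below shows one way this might be done; the long-format data layout and variable names are assumptions, not the study's actual specification.

```python
# A sketch of the aggregated vs. disaggregated model comparison. Assumes a
# long-format frame with one row per (student, testing method) grade average
# and hypothetical columns: method_gpa, lsat, ugpa, method.
import statsmodels.formula.api as smf

def compare_models(df):
    """Fit both models and report the incremental R^2 from testing method."""
    aggregated = smf.ols("method_gpa ~ lsat + ugpa", data=df).fit()
    disaggregated = smf.ols("method_gpa ~ lsat + ugpa + C(method)", data=df).fit()
    return {
        "r2_aggregated": aggregated.rsquared,
        "r2_disaggregated": disaggregated.rsquared,
        # Share of variance attributable to testing method beyond LSAT/UGPA.
        "delta_r2_method": disaggregated.rsquared - aggregated.rsquared,
    }
```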

The results of phases 1 and 2 of this study provide persuasive evidence that testing method is a variable that affects law school performance. Further, the pattern of divergence of LSAT and UGPA correlation coefficients suggests that test-taking speed may explain differential performance on in-class exams versus take-home exams and papers. 

Additional analyses of the data set corroborate the hypothesis on test-taking speed. For example, variables were created that reflected the change in ordinal ranking between (1) grade averages on less time-compressed grading methods (GPA(take-homes), GPA(papers), and GPA(take-homes & papers)) and (2) grade averages on in-class exams (GPA(in-class)). In both samples, these variables were consistently correlated with the LSAT and poorly correlated or uncorrelated with UGPA. The most plausible explanation for this outcome is that the LSAT is a measurement (in varying proportions) of both reasoning ability and test-taking speed. In an analysis of exam page length, a variable included in the national law school sample, the LSAT had a weaker correlation with short in-class exams (≤ 4 pages) than with long in-class exams (≥ 5 pages). Moreover, the difference between the two correlations was statistically significant at p = .01. Another subsidiary analysis found limited evidence that age was positively correlated with performance on take-home exams and papers. Although this finding may be explained by a separate writing ability construct that generally improves with experience (and thus age), it is also consistent with age-related differences in test-taking speed.
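Two pieces of this analysis lend themselves to compact illustration: the rank-shift variables and the test for a difference between two correlations. The sketch below shows conventional versions of both; it treats the two correlations as independent for simplicity, which may not match the study's exact procedure, and all names are illustrative.

```python
# Sketches of (1) the change-in-ordinal-ranking variable and (2) a Fisher
# r-to-z comparison of two correlations (independence assumed for simplicity).
import math
from scipy.stats import norm, rankdata

def rank_shift(gpa_in_class, gpa_other):
    """Change in each student's ordinal ranking between in-class exams and a
    less time-compressed method (take-homes, papers, or both combined)."""
    return rankdata(gpa_in_class) - rankdata(gpa_other)

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided p-value for H0: two independent correlations are equal."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2 * norm.sf(abs((z1 - z2) / se))
```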

Earlier research has suggested that ethnic subgroups may be disproportionately affected by time pressure on the LSAT. This study found inconclusive evidence of subgroup differences based on test-taking speed. In both the national and regional law school samples, white students had higher ordinal rankings on in-class exams vis-à-vis their ordinal rankings on take-home exams and papers. Although these findings suggest that in-class exams may marginally favor white students, the patterns among ethnic subgroups varied between the two samples. Moreover, the divergence between white and minority student performance was statistically significant for only a handful of comparisons. More conclusive evidence of subgroup differences will require a larger sample of minority students.
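A subgroup comparison of this kind could be run on the same rank-shift variable, for instance with Welch's t-test. The exact test used in the study is not specified in this summary, and the column and group labels below are placeholders.

```python
# A sketch of a subgroup comparison on the rank-shift variable. Assumes a
# pandas DataFrame with hypothetical "rank_shift" and "group" columns.
from scipy.stats import ttest_ind

def compare_rank_shift(df, group_col="group", a="white", b="minority"):
    """Welch's t-test for a difference in mean rank shift between subgroups."""
    shift_a = df.loc[df[group_col] == a, "rank_shift"]
    shift_b = df.loc[df[group_col] == b, "rank_shift"]
    return ttest_ind(shift_a, shift_b, equal_var=False)  # Welch's t-test
```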

Finally, drawing on the results of this study, the discussion section outlines a theoretical framework for predicting and interpreting law school performance. This framework identifies at least five factors that affect performance on either the LSAT or UGPA: (1) reasoning ability, (2) test-taking speed, (3) motivation and persistence, (4) writing ability, and (5) grading criteria. The discussion then applies these factors to in-class exams, take-home exams, required legal writing papers, and papers in elective courses and seminars. This framework offers a preliminary theoretical basis for explaining why the predictive power of the LSAT and UGPA varies according to testing method. The discussion ends with a cautionary note that the sample size of this study is small and that more definitive conclusions will require an expanded sample and/or replicated studies.


* An important exception is paper assignments at the regional law school between year 1 and years 1–3. First-year legal writing was the paper category with a strong correlation with the LSAT (.347). This finding is addressed in the Results section of the main text, and the theoretical framework outlined in the Discussion section attempts to explain this pattern.

