An Analysis of Differential Validity and Differential Prediction for Black, Mexican American, Hispanic, and White Law School Students (RR-90-03)
by Linda F. Wightman and David G. Muller

Executive Summary

This study was designed to address questions of differential validity and differential prediction in the law school admission process. The former are evaluated by comparing the magnitudes of the validity coefficients obtained from simple and multiple correlations between first-year performance in law school and the traditional predictor variables, LSAT score and undergraduate grade-point average (UGPA). The latter are evaluated by testing the regression systems estimated for the different subgroups.
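
To make the two kinds of evidence concrete, the sketch below computes simple and multiple validity coefficients for one hypothetical school. The data and variable names (lsat, ugpa, fya) are invented for illustration; this is not the study's code or data.

```python
# Minimal sketch (not the study's code): simple and multiple validity
# coefficients for one hypothetical school, using invented data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
lsat = rng.normal(35, 5, n)                 # LSAT scores (10-48 scale assumed)
ugpa = rng.normal(3.1, 0.4, n)              # undergraduate grade-point averages
fya = 0.04 * lsat + 0.5 * ugpa + rng.normal(0, 0.4, n)   # first-year average

# Simple validity coefficients: correlation of each predictor with FYA.
r_lsat = np.corrcoef(lsat, fya)[0, 1]
r_ugpa = np.corrcoef(ugpa, fya)[0, 1]

# Multiple correlation: correlate FYA with its least-squares prediction
# from LSAT and UGPA together.
X = np.column_stack([np.ones(n), lsat, ugpa])
beta, *_ = np.linalg.lstsq(X, fya, rcond=None)
r_multiple = np.corrcoef(X @ beta, fya)[0, 1]

print(f"r(LSAT, FYA)      = {r_lsat:.3f}")
print(f"r(UGPA, FYA)      = {r_ugpa:.3f}")
print(f"R(LSAT+UGPA, FYA) = {r_multiple:.3f}")
```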

The sample used in this study is drawn from the 1986, 1987, and 1988 entering law school classes, using data available from the LSAC-sponsored Correlation Studies. Data from 54 law schools, each of which enrolled 30 or more first-year students who identified themselves as Black, Mexican American, or Hispanic, are analyzed and reported.

The results are presented in four sections: descriptive data about the minority and nonminority first-year students; validity coefficients derived for minority, nonminority, and combined groups; results from the Gulliksen and Wilks tests comparing regression systems based on minority and nonminority test takers within each school; and results from applying prediction equations derived from the total-group data (minority and nonminority first-year students combined) to minority test takers. All of these analyses use first-year average in law school as the criterion variable and UGPA alone, LSAT score alone, and UGPA and LSAT score in combination as the predictors.

The validity data do not support the concern that the LSAT score, or the traditional combination of LSAT score and undergraduate grade-point average, is less valid for any of the minority groups than for the white group. The data suggest one exception: UGPA alone appears to be significantly less valid as a predictor for black students than for white students.

Law schools typically evaluate validity by developing prediction equations based on the total group of first-year students. The major question raised by this practice is whether the combined equation predicts first-year performance for minority students in a systematically biased way. To determine whether a single equation based on the combined groups is reasonable, a separate regression system is developed for each of the three minority groups and compared with a regression system based on white students from the same institution. If the slopes, intercepts, and prediction errors are the same for the two separate regression systems, the data can be combined and a single prediction equation can be used for the total group. The results of these tests show few significant differences in slopes between the two groups, but a substantial number of differences in standard errors of estimate and in intercepts. As in the earlier studies on this topic, the prediction bias that results from significantly different slopes and intercepts does not fit the traditional definition of prediction bias: when slopes differ, the slope for white students tends to be steeper than the slope for minority students, and, in the majority of cases, the intercept for white students is larger than the intercept for minority students.
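
The kind of regression comparison involved can be illustrated with a small sketch. The example below fits a single regression with group and interaction terms, so that the group coefficient estimates the intercept difference between groups and the interaction coefficient estimates the slope difference. It is a simplified stand-in for, not an implementation of, the Gulliksen and Wilks procedure used in the study, which also tests the equality of standard errors of estimate; all names and data here are hypothetical.

```python
# Simplified sketch (not the Gulliksen-Wilks procedure itself): test slope and
# intercept differences between two groups with an interaction-term regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_white, n_minority = 300, 40                      # hypothetical group sizes
d = pd.DataFrame({
    "predicted_index": np.r_[rng.normal(0.0, 1.0, n_white),
                             rng.normal(-0.5, 1.0, n_minority)],
    "group": ["white"] * n_white + ["minority"] * n_minority,
})
d["fya"] = (0.5 * d["predicted_index"]
            + np.where(d["group"] == "white", 0.2, 0.0)   # built-in intercept gap
            + rng.normal(0, 0.5, len(d)))

# fya ~ predicted_index * group expands to main effects plus the interaction:
#   the group term tests the intercept difference,
#   the predicted_index:group term tests the slope difference.
fit = smf.ols("fya ~ predicted_index * group", data=d).fit()
print(fit.summary().tables[1])
```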

The practical consequences of these differences in slope and intercept are highlighted in the final section of the report, where differences between predicted and actual first-year performance are presented. When a regression equation is developed using combined data from white and minority students, the equation tends to overpredict law school performance for minority students. Nothing in these data suggests that using the traditional predictors disadvantages minority law school applicants in the admission process; indeed, a prediction system based only on minority student data would present a bleaker picture of minority applicants than the combined data do. The data also demonstrate, however, that not every applicant is overpredicted. Reporting the number of students who are underpredicted, along with the number who are overpredicted, highlights the critical message that admission committees need to continue to evaluate each individual on his or her complete application portfolio.
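
The over- and underprediction tallied in that section can be sketched as follows: fit a prediction equation on the combined group, apply it to the minority subgroup, and compare predicted with actual first-year averages. Data and variable names are again invented for illustration.

```python
# Hypothetical sketch of over-/underprediction from a total-group equation.
import numpy as np

rng = np.random.default_rng(2)
n_total, n_minority = 340, 40
minority = np.arange(n_total) < n_minority          # first 40 rows are minority
lsat = rng.normal(35, 5, n_total)
ugpa = rng.normal(3.0, 0.4, n_total)
fya = 0.04 * lsat + 0.5 * ugpa - 0.2 * minority + rng.normal(0, 0.4, n_total)

# Total-group prediction equation using LSAT and UGPA in combination.
X = np.column_stack([np.ones(n_total), lsat, ugpa])
beta, *_ = np.linalg.lstsq(X, fya, rcond=None)
residual = (X @ beta) - fya                         # predicted minus actual

# Positive residuals mean the combined equation overpredicted that student.
over = residual[minority]
print(f"mean over/underprediction for minority students: {over.mean():+.3f}")
print(f"overpredicted: {(over > 0).sum()}  underpredicted: {(over < 0).sum()}")
```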
