In the world of academic admission tools, the reference letter is an old favourite. For the better part of a century, college and university applicants have been asked to supply a letter written by a mentor, boss or teacher that testifies to the applicant’s intelligence, work ethic, kindness, ambition and creativity. It’s a nice tradition, but is it an effective one?
Academic admission assessment tools change infrequently, for two reasons: new tools are rarely developed, and institutional inertia persists within admissions departments. Because of this lack of innovation, there are few opportunities to examine legacy assessment tools, like reference letters, and ask: is this a tool we should continue using?
At least since the 1970s, research has suggested these reference letters are of little value, and more recent research suggests the same. When it comes to letters of recommendation, the interrater reliability — the degree of agreement amongst those doing the rating or assessing — is only about 0.40. That’s below the recommended threshold of 0.7-0.9 for high-stakes testing. A low reliability rating means that two different people looking at the same reference letter are unlikely to have the same rating of the applicant — not exactly helpful when it comes to academic admissions.
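To make the idea of interrater reliability concrete, the short sketch below computes a Pearson correlation, one common way to quantify agreement, between two raters scoring the same ten reference letters. The scores are invented for illustration only; they are chosen to land in the moderate range, below the 0.7 threshold mentioned above.

```python
# Illustrative sketch of interrater reliability: how well do two raters'
# scores for the same applicants agree? The data below are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two raters independently scoring the same ten letters on a 1-7 scale.
rater_a = [5, 6, 4, 7, 3, 5, 6, 2, 4, 5]
rater_b = [4, 5, 5, 6, 4, 3, 5, 4, 3, 6]

r = pearson(rater_a, rater_b)
print(f"Interrater reliability (Pearson r): {r:.2f}")
```

A coefficient in this range means one rater's score tells you relatively little about how a second rater would score the same letter, which is the core problem the research identifies.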
Low reliability tools like reference letters provide almost no truly useful information about the applicant. In fact, one study found that there’s more agreement between two letters of recommendation written for two different applicants by the same referee than there is for two different referees penning recommendations for one applicant. This suggests that the review process has more to do with the referee than the applicant. As one researcher summarized: “Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999).”
Another study looked at almost 500 letters of recommendation for medical students and found that out of 76 characteristics or types of information contained in the letters, only three were significantly associated with graduation status. This suggests that these letters are poor predictors of students’ future success, even though the time spent creating and reviewing reference letters is substantial.
There are other concerns that make the value of reference letters questionable. For example, many professors are asked to write 50 or more letters per year, leading some to hire ghostwriters or to simply sign letters that the students write themselves. There are also concerns that these letters contribute to bias in admissions, with certain groups (e.g., white males) being consistently favored over others.
Despite all of this evidence and these more recent concerns, reference letters remain popular in academic admissions because there is a desire and a need to evaluate more than academic performance. Personal and professional characteristics, like communication, ethics and empathy, are, in many ways, just as important as the cognitive measures that predict an applicant's academic performance. Those are the characteristics that Casper reliably identifies, through rigorously tested and refined methods — not a nice letter from an old teacher.
Interested in learning more about how to determine if the selection tools you use are reliable and valid? Check out:
Measurements for the success of assessment tools part I: reliability
Measurements for the success of assessment tools part II: validity
Original article written in 2017 by: Patrick Antonacci, M.A.Sc., Data Scientist