4 Key Differences between CASPer® and the Multiple Mini-Interview

Research has found that CASPer® scores are moderately correlated with the Multiple Mini-Interview (MMI), demonstrating that there are some similarities between the tools. In our last blog post, we highlighted how both tools use numerous independent observations of a single applicant to dilute the effects of interviewer bias and context specificity. However, the moderate correlation also demonstrates that the tools differ from one another. In this article, we highlight some of the key differences.

1. Standardization

CASPer®, like the MCAT, is standardized across all medical programs, which allows for direct comparison of scores across institutions. Every applicant takes the same CASPer® test, with slight modifications to the content to preserve test security. In contrast, the MMI is a specific interview format that is implemented in different ways across institutions. Some programs may implement 8 stations, others 10. Some may incorporate a clinical skills component, while others restrict their scenarios to more general situations. Some include standard interview questions (“Why do you want to be a physician?”), while others restrict themselves to hypothetical scenarios (“Imagine you’re put in this position…”). These large variations can produce vastly different psychometric properties: one program, for example, saw its reliability jump from .58 to .71 after replacing just one of its 12 stations.

The lack of standardization with the MMI can be problematic, as it is unclear whether the reliability and validity of the MMI at one institution would apply to the MMI used at another. While we cannot ensure that MMIs at various institutions all follow best practices, any change made to CASPer® is coordinated centrally and automatically applies to all participating programs. We continually examine our internal quality metrics and scan the academic literature to ensure that our test is supported by empirical research and follows the best practices laid out by academic and industry experts.

2. Communication Skills

Both CASPer® and the MMI naturally assess communication skills, as candidates are required to express their thoughts clearly and effectively. However, the MMI is conducted in person and thus requires candidates to draw on their oral communication skills. In comparison, CASPer® is administered online, which requires written communication skills. While these two skills are different, both are necessary for the profession. Effective verbal communication is necessary when speaking directly with patients and colleagues. Written communication is important when providing written instructions, creating educational materials, and writing reports.

It is important to note that CASPer® scores are minimally influenced by superficial characteristics of a response, such as minor spelling and grammatical errors. Raters are trained to focus on the content and clarity of the response and to disregard typographical errors as long as the information is clearly communicated. In this way, we try to ensure that non-native English speakers are not disadvantaged by the test, as long as they are proficient enough to communicate their ideas. In fact, our internal data show that CASPer® scores do not differ once responses reach a 7th-grade reading level, which is lower than the difficulty of Time magazine and Wikipedia. Additionally, we find no difference in the frequency of spelling errors across CASPer® scores, suggesting that spelling errors have no impact on ratings.

3. Anonymity

The MMI is conducted in person, whereas CASPer® is administered online with raters blinded to the candidate’s identity. CASPer® raters are shown only the written responses; they do not receive the applicant’s name or any information about the respondent’s race, ethnicity, or location. In a face-to-face setting, by contrast, demographic characteristics such as race and gender are readily apparent, and we know from years of research that this can affect how an interview is conducted and evaluated. Such characteristics cannot directly influence CASPer® evaluations, as they are essentially impossible to infer from short text responses.

One way the MMI attempts to overcome this problem is by using multiple independent interviewers to dilute the effects of individual interviewer bias, so that any one interviewer’s superficial preferences about candidates are greatly minimized. However, the MMI cannot dilute the effects of systematic biases that are shared across all interviewers. Many types of interviewer bias are unconscious and develop largely from the surrounding culture, and are therefore likely shared by many people within the same culture. Unconscious bias training may help reduce some of these biases, but unfortunately, most such programs have proven ineffective.

4. Cost

The biggest difference between CASPer® and the MMI is cost, with the MMI placing a substantially heavier burden on applicants and programs alike. In our previous blog post, we outlined the potential differences in cost to applicants: CASPer® requires little to no preparation and carries a substantially smaller fee (~$80). In comparison, the MMI can cost applicants roughly $2,500 in travel fees plus potentially hours of preparation, so it is in an applicant’s best interest to ensure they have a realistic chance of admission before attending the interview.

CASPer® also requires no additional investment from programs, financial or human. The test is administered from start to finish by Altus Assessments: we support applicants in registering for a test, handle the grading of responses, and ensure the scores are seamlessly integrated with the admission tools the program already uses. This is in stark contrast to the MMI, which requires substantial financial and time investment from programs (see below for a detailed breakdown of estimated costs). Between the construction of individual stations, the recruitment of interviewers, and miscellaneous expenses such as breakfast for interviewees, programs end up spending thousands of dollars. Additionally, schools need the appropriate infrastructure to implement a 12-station MMI, which requires considerably more physical space than traditional interviews.

A sample breakdown of costs from McMaster’s MMI (from Rosenfeld et al., 2008, Advances in Health Sciences Education):

Requirement | Cost of MMI
Creation of stations | 72 hours of creation time for 24 MMI stations; $50 per station = $1,200
Assessors | N = 192 (96 on each of 2 interview days); 66.7 hours per 400 candidates; 800 observer hours per 400 candidates
Staff (planning, implementation, and data entry) | Admission coordinator; 154 to 278 staff hours (11 to 19 staff spread over 2 interview days); 8 actors on each of 2 days
Miscellaneous expenses (breakfast/lunch/coffee/parking) | ~$4,000
Infrastructure | 8 clinics with 12 rooms, provided in kind; 4,800 scoresheets ($240)
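To put a floor on the program-side expense, the explicit dollar figures in the breakdown above can be summed with a short script. This is a rough sketch only: it counts just the itemized dollar amounts and ignores the in-kind contributions (assessor hours, staff time, clinic space), so the true cost is considerably higher.

```python
# Sum only the explicit dollar figures from the Rosenfeld et al. (2008)
# breakdown above; in-kind items (assessor hours, clinic space) are excluded.
dollar_costs = {
    "station creation (24 stations x $50)": 24 * 50,
    "miscellaneous (meals/coffee/parking)": 4000,
    "scoresheets (4,800 sheets)": 240,
}

total = sum(dollar_costs.values())
print(f"Minimum direct cost: ${total:,}")  # prints "Minimum direct cost: $5,440"
```

Even this conservative floor of $5,440 excludes the hundreds of assessor and staff hours, which dominate the real expense.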

Due to these exorbitant costs, it is often not feasible for programs to invite every applicant to an MMI. The minimal cost of CASPer®, however, makes it a viable tool for the pre-screening phase, where a large number of applications must be winnowed down for a more extensive file review or personal interview.


In summary, though CASPer® and the MMI share some similarities, they also differ in a number of key ways. Their similarities make both powerful tools for assessing personal and professional characteristics, while their differences highlight their independent contributions to the admissions process. The two tools do not substitute for one another but can work in conjunction to identify the strongest candidates with the necessary ingredients for success, while maximizing return on cost.

Published: November 13, 2017
By: Christopher Zou, Ph.D.
Education Researcher at Altus Assessments