Why Admission Tools Need to Care about “Political Validity”

Admission into academic programs is high stakes, and medical school admissions are certainly no exception.

Whether a medical school applicant is admitted or not affects not only their future but also the future staffing of our hospitals, clinics and healthcare centers. That’s why it’s essential that academic programs use the highest quality tools for assessing their applicants — tools that are psychometrically reliable and valid.

Most academic programs do an excellent job of making sure that incoming students have a strong ability to both learn and apply knowledge. But cognitive abilities aren’t the only qualities that matter. Incoming students also need the right personal and professional characteristics so that they become not only skilled medical professionals but also great communicators, team members and caregivers who conduct themselves professionally and with integrity. Schools both need and want to do a better job of assessing these non-cognitive abilities.

In our earlier blog posts, we’ve explored the importance of reliability and validity when it comes to assessment tools, but these qualities alone don’t make a great assessment tool. If a tool isn’t acceptable to test-takers or institutions, it will be difficult to implement and see widespread adoption. For example, reliability and validity can be increased by adding more questions to a test, but that will make the test longer — and less palatable for both administrators and test-takers.

Other factors also affect acceptability, such as test location; for example, a test that’s written at a computerized test center rather than online could present a barrier for rural applicants, as well as increase costs for any applicants who don’t live nearby. It is also important to remain as transparent as possible when conducting research supporting the use of the test, especially when studies are conducted internally with few or no third-party collaborators, to ensure that the research is objective and unbiased. These examples illustrate another quality of selection tools that needs to be considered: “political validity.” In order to be implemented, tools need to be considered appropriate and acceptable by all the various stakeholders involved.

Not all stakeholders are directly involved in selection, but they still have an impact on the admissions process, especially in the context of high-stakes admissions. For example, selection procedures for medical schools in the U.S. need to balance the perspectives not only of applicants and their respective medical programs, but also of current employees (e.g., physicians, nurses), medical associations (e.g., the American Medical Association), regulators (e.g., the Department of Health and Human Services), government (e.g., state and federal regulations), the public, and so on. Not only must selection committees keep these multiple perspectives in mind, but these perspectives are often in competition with one another, compounding the challenge for medical programs.

CASPer® is a high-stakes admission test that has been used by a variety of institutions around the world to assess non-cognitive, or non-academic, traits. As such, we’re always working to ensure not only that the test is effective (reliable and valid), but also that it’s politically valid (appropriate and acceptable) to programs and applicants alike.

Applicant feedback is requested after every test session to examine test-taker perceptions about CASPer® and to ensure that students had a smooth test-taking experience. That feedback has been overwhelmingly positive. Based on a cohort of 3,500 nursing school applicants in 2016, 76.4% of respondents “hoped that other programs adopt CASPer® for their future admission cycles,” and 95.2% of respondents rated CASPer® as being a “better assessment of personality than reference letters and testimonials.” Among post-graduates, 84.4% of general surgery residency applicants reported that taking CASPer® would have no bearing or would make them more likely to apply to the program.

We also constantly look for opportunities to involve our academic program partners and their respective communities in the CASPer® test process. At the earliest stages of entering a new market (academic or geographic), we involve those partners to help define the competencies that then guide the construction of the test. When generating test content, we involve programs in creating scripts or questions for scenarios. We also actively recruit staff members affiliated with academic institutions, as well as community members, to join our market-specific rater pools and help score responses.

Why is all of this important?

While our current markets (U.S., Canada, Australia, UK, and New Zealand) might seem very similar at a high level, important cultural differences exist that need to be reflected in each CASPer® test. Without involvement from our partners in each country, it would be difficult to deliver a high-quality test that best suits the unique qualities of each region. For example, our tests in Australia are not the same tests delivered in the U.S. or Canada. The rater pools are also completely distinct between countries, and even between distinct academic programs.

We take great care in how we approach developing and delivering CASPer® tests so that it continues to be an effective and acceptable assessment of the personal characteristics of applicants. But public opinion and government policies change over time, and political validity can change with them. That’s why maintaining political validity needs to be a part of all aspects of our operations, and why we need to continue to find ways to involve our academic program partners and the communities they serve.

Patients want doctors and nurses who are not only medically competent, but who are also empathetic listeners capable of providing personalized care. To become an effective medical professional in 2017, it’s no longer enough to simply be smart, and our admission tools need to start reflecting that.

Published: September 8, 2017
By: Christopher Zou, Ph.D.
Education Researcher at Altus Assessments