
Why the Response Format Matters for Situational Judgement Tests

There is an abundance of evidence demonstrating the utility of situational judgment tests (SJTs) in predicting future performance in healthcare and in promoting diversity in medical education. Currently, two SJT formats are in use: the traditional closed-response SJT and the more recently developed constructed-response (or open-response) SJT, a distinction similar to the one educators make between multiple-choice and short-answer questions. Take the following example:
“This morning, you found a fax in your inbox that seems to concern a colleague’s home business. She normally does not use the fax for business purposes.”
In the closed-response version, the scenario would typically then be followed by:
“From the following options, which would be the most effective response to this situation, and which would be the least effective?
1. Politely tell your co-worker that you will inform the manager the next time you catch her using office resources for private business.
2. Report the incident to the manager.
3. Put the fax in the manager’s mailbox without saying anything to anyone.
4. Put the fax in your co-worker’s mailbox without saying anything to anyone.
5. Give the fax to your co-worker and remind her that office equipment is not supposed to be used for personal use.”

From the Situational Judgement Test Sample Questions in the Public Service Commission’s Test (2015)

(Note: The most effective response is 5, and the least effective response is 4.)

In the constructed-response version, the scenario would instead be followed by:
“What would you do under these circumstances?”
Respondents would then be required to construct their own solution to the problem within a restricted amount of time. One or more independent raters would then evaluate the response against structured guidelines set by the test administrator.

In the medical education literature, both versions of the SJT have been shown to predict future medical performance, even long after students have completed the test. Researchers from Belgium found that a closed-response SJT administered during the admissions process predicted medical internship ratings and practitioner performance up to 9 years after the test was taken. Recent research from the U.K. has shown that scores on the closed-response SJT section of the UK Clinical Aptitude Test (the UKCAT SJT) predict supervisory ratings in tutorials during medical school. Similarly, research from Canada has shown that scores on a constructed-response SJT (i.e., CASPer®) predict future scores on the personal and professional characteristics subsections of the medical licensing exam taken 3 to 6 years after the admissions test.

As SJTs have become a popular tool in both personnel and student selection, researchers and practitioners alike are increasingly interested in which kinds of SJTs are particularly effective for selection. Comparing the two response formats, a recent review paper concluded that constructed-response SJTs tend to have higher predictive validity than closed-response SJTs. Constructed-response formats have also been found to produce smaller subgroup differences than closed-response formats because they impose a lower cognitive load on test-takers. Applicants also tend to prefer the constructed-response format, perceiving it as more job-relevant. Furthermore, it is challenging to balance test difficulty for closed-response SJTs.
Oftentimes, the response options provided in closed-response SJTs can cue test-takers to the correct answer. Surprisingly, one study even found that students were able to correctly answer a large proportion of closed-response SJT questions without seeing the actual item stem! As a result, closed-response SJTs tend to be too easy. As discussed in a previous blog post, this is problematic because a large proportion of scores cluster near the top of the scale, making it difficult for programs to distinguish their strongest applicants from the rest of the pool.

Despite the advantages of constructed-response SJTs, closed-response SJTs remain the more popular choice among researchers and practitioners. This is because constructed-response SJTs require the additional step of recruiting a large pool of reliable raters, along with a structured rater training program to ensure that every rater evaluates candidates to the same standard. These additional steps would greatly overburden admissions committees that are already constrained by resources.

At Altus Assessments, we alleviate this burden by recruiting and training our own pool of high-quality raters. We also have a number of quality assurance checks in place to ensure that our raters consistently provide accurate and thoughtful evaluations for every test session. Until recently, it was often not feasible for admissions programs to implement a constructed-response SJT. With CASPer®, programs can now administer an SJT that shows better results for quality and diversity without overburdening their admissions committees.

Regardless of response format, both closed-response and constructed-response SJTs can improve the selection of medical students and help increase diversity in the applicant pool.
They are a low-cost, accessible way to help reassure medical schools that their incoming students are not only intelligent, but also possess the personal character strengths necessary to become effective health care practitioners.

Published: October 5, 2017. By: Christopher Zou, Ph.D., Education Researcher at Altus Assessments