Holistic admissions are complicated. An applicant’s experiences, attributes, and academic successes are all weighed against an institution’s mission and goals, and a decision is made. In theory, this method builds a more diverse and unique student base.
CASPer is one piece of this holistic process. Instead of relying on just an interview, which can be costly for applicants and institutions alike, and unreliable, CASPer offers programs and their applicants a data-driven, objective platform for assessing people skills. This helps programs make more informed decisions and improve the quality of their students and graduates. It also gives applicants a unique platform to demonstrate their non-academic skills.
But what is CASPer, and how does it help admissions? How can an assessment built on open-ended questions be data-driven? How reliable are the results? Who’s marking the assessment and scoring the answers?
How does CASPer help me?
Applicants to professional programs need to be strong academically and professionally. Admissions officers have the challenge of deciding who fits these criteria. Unreliable, non-data-driven admissions tools make it almost impossible to know if an applicant will be successful. This is why the Altus Assessments founders spent twelve years developing CASPer.
CASPer helps partner institutions by providing an objective evaluation method that shows an applicant’s overall people skills. Most importantly, CASPer results can be replicated, demonstrating the reliability of this people skills assessment tool.
For admissions officers, CASPer offers a reliable tool for your holistic assessment approach. How you fit the data into your overall process is flexible, but you can trust the results.
For applicants, this is a true measure of people skills and professionalism. Acting as a complement to the interview process, CASPer gives you a platform to show how you would react to different situations and fully explain your reasons, focusing on the “why” of the answer, not just the “what”.
What is CASPer?
CASPer falls under the umbrella of situational judgement testing and is made up of twelve sections. More simply, it consists of eight video-based and four word-based scenarios drawn from diverse, everyday situations, each with three open-ended questions attached. You can find a full rundown of the format here.
Applicants watch a video, are given three situational judgement-based questions, and have five minutes to respond. The time limit is meant to encourage authenticity. The questions and situations are based on research that started in 2005. No answer is wrong, but every answer is measurable.
It’s just that simple. For applicants, there is nothing to worry about or study for, as the assessment is truly about the “why” of their answers. Their empathy, professionalism, judgement, communication, and other soft skills are given a chance to shine.
For admissions, there is a chance to validate, change, and improve your applicant rankings based on a true and reliable view of their people skills.
Can Open-Ended Questions be Reliable?
Yes, open-ended responses offer reliable data when properly administered and analyzed.
CASPer continuously reviews questions, raters, and processes to make sure the questions provide reliable results that can be replicated. CASPer scores well in both reliability and predictive validity.
CASPer also follows three priorities for development to ensure trustworthy results.
- Validation: using each set of results to maximize reliability and meaningfully improve the prediction of future performance.
- Access: ensuring financial background, geography, and other differences are not barriers (CASPer has been completed in over 170 different countries and has a low fee to increase accessibility).
- Data-driven: continuing research in collaboration with our partner institutions.
Who are the raters?
Each of the twelve sections on the CASPer assessment is marked by a separate rater. Raters see only their given section and are blind to an applicant’s other responses, name, and demographics. This ensures their ratings are genuine and unbiased.
But who are they? Well, our team of raters is almost as diverse as our test takers. They come from many different backgrounds and professions, and we take steps to continuously train and evaluate them, looking to increase efficiency and ensure our reliability scores stay high.
How do Partners use the Results?
Each partner uses the results in a way that best suits their admissions needs. Some prefer to set a minimum threshold, while others use a weighted formula with other cognitive applicant measures.
Either way, the results give insight into the people skills of applicants and, when considered, have a significant impact on applicant rankings. These results help to ensure a high quality of students at the institution and set students up for success before they enter the classroom.
How can I Prepare for the Test?
For applicants, the only preparation necessary is to be authentic. Consider CASPer as one piece of an institution’s overall view of your application. Your people skills are being tested to ensure that they align with an institution’s unique mission and values.
CASPer allows applicants to demonstrate that they are more than just book smart, in a way that is cost-effective and reliable for both applicants and partners.
To discover more about CASPer, take a look here.