End of Rotation™ Exam: Grading Case Studies


Grading Case Studies

There are as many ways to provide grades for End of Rotation™ exams and overall Supervised Clinical Practice Experiences (SCPEs) as there are programs that deliver the exams. Below are three examples, provided by very different programs, that may help your program develop a strategy that works for your faculty. These case studies walk you through their whole process, from data analysis to decision-making discussions, and from communicating with students to plans for follow-up analysis.


Grading Curve Method

A large accelerated program in a health professions college that has been using End of Rotation™ exams since 2016

Please describe the model your program uses to grade End of Rotation exams. Include how this fits into your broader SCPE evaluation.

We grade student End of Rotation exams by setting performance bands based on a calculation using the standard deviation and national average provided by PAEA for each exam:

A = +1.0 SD above the PAEA national average
A- = +0.5 SD above the PAEA national average
B+ = at the PAEA national average
B = -0.5 SD below the PAEA national average
B- = -1.0 SD below the PAEA national average
C+ = -1.5 SD below the PAEA national average
C = -2.0 SD below the PAEA national average
F = more than 2.0 SD below the PAEA national average

When we release the scores, students can determine immediately how they did.
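A minimal sketch of how these bands translate into a grading function, assuming a hypothetical exam whose national mean and standard deviation have been published by PAEA; the numbers below are illustrative placeholders, not actual PAEA statistics, and the cut points reflect one straightforward reading of the bands above:

```python
def letter_grade(score: float, national_mean: float, national_sd: float) -> str:
    """Map an End of Rotation exam score to a letter grade using
    performance bands defined in standard deviations (SD) from the
    PAEA national average, per the bands listed above."""
    z = (score - national_mean) / national_sd  # distance from the mean in SDs
    if z >= 1.0:
        return "A"
    if z >= 0.5:
        return "A-"
    if z >= 0.0:
        return "B+"
    if z >= -0.5:
        return "B"
    if z >= -1.0:
        return "B-"
    if z >= -1.5:
        return "C+"
    if z >= -2.0:
        return "C"
    return "F"  # more than 2 SD below the national average


# Illustrative numbers only -- not actual PAEA statistics.
print(letter_grade(1520, national_mean=1500, national_sd=100))  # "B+"
```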

The final SCPE course grade is determined by successfully completing multiple course elements, including preceptor evaluations of the student, passing the PAEA End of Rotation exam, completion of specialty-specific review questions from a question bank, submission of all assignments and paperwork, attendance at and participation in all SCPE activities, and professional demeanor. All students must pass the End of Rotation exam; a failing grade on the End of Rotation exam is a score more than two standard deviations below the national average. Students who do not pass the End of Rotation exam will be given a remediation assignment and a second opportunity to take the exam. If a student does not pass the second attempt, a final grade of “F” will be assigned, and the student will be required to repeat the SCPE in its entirety at a date and time determined by the program.

How long has your program utilized this method?

We’ve used this method since our students started taking the PAEA End of Rotation exams in 2016.

If you changed methods to account for scale scores, how did you adjust? What rationale did you use to justify the change, both within your program and with students?

Because we publish our grade bands as standard deviations rather than as specific score values, we didn’t have to change our method. Once we had the standard deviation and the national average, it was easy to convert the new scale scores to letter grades just as we had with the raw scores. Before the exams, we informed the students that this change would be occurring and that we could convert a scale score back to a raw score if they wanted that information. We explained that PAEA had made the change to scale scores so that the exams would be similar to other national standardized exams. There was no big change on their end, and their exams went smoothly. No students were concerned about the scores or asked for their raw score.

As part of our ongoing self-assessment, we have a system in place for reviewing trends in student performance across multiple assessment methods, one of which is the End of Rotation exams. We use these trends to help establish our program benchmark for End of Rotation exam performance. We also informally correlate End of Rotation exam performance with students’ performance on their didactic-year exams to check for consistency. Now that we have the ability to convert older scores, we plan to take some time to look across all of the years and do a long-term assessment.

(Editor’s Note: This program administered Version 6 of the End of Rotation exams on the first day of its release. If you have any questions about this process in particular, you may contact Nicole Dettmann at Massachusetts College of Pharmacy and Health Sciences – Manchester/Worcester.)

Non-Compensatory Pass/Fail Using Z Scores to Calculate Passing Bar

A mid-size program in an academic medical center that has used End of Rotation™ exams since 2014

Please describe the model your program uses to grade End of Rotation exams. Include how this fits into your broader SCPE evaluation.

Our clinical faculty sets a passing bar for each PAEA End of Rotation exam at the beginning of the clinical year by looking at the prior cohort’s scores on all seven End of Rotation exams and PANCE together. We convert the scores on each of those exams to Z-scores so we can compare the results on a single metric. We then evaluate these data points together on an Excel spreadsheet. The passing bar is usually very clear and has been stable, around 1.4 standard deviations below the mean, for the last several years. Based on cohort trends, our clinical faculty has been considering a higher bar and will likely move the passing bar this coming year closer to 1.0 standard deviation below the mean.
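A rough sketch of the Z-score conversion step, assuming the prior cohort’s scores are available as simple lists; the exam names and scores below are invented, and the program itself describes doing this work in an Excel spreadsheet:

```python
from statistics import mean, stdev

# Invented prior-cohort scores for two of the seven End of Rotation
# exams plus PANCE; real data would come from the program's records.
cohort_scores = {
    "Family Medicine": [412, 389, 405, 398, 420, 431, 377],
    "Internal Medicine": [402, 395, 418, 388, 410, 399, 405],
    "PANCE": [498, 430, 520, 455, 505, 540, 410],
}

# Convert each exam's scores to Z-scores so that exams with different
# means and spreads can be compared on a single metric.
z_scores = {}
for exam, scores in cohort_scores.items():
    m, sd = mean(scores), stdev(scores)
    z_scores[exam] = [round((s - m) / sd, 2) for s in scores]

for exam, zs in z_scores.items():
    print(exam, zs)
```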

We provide the passing scores for each exam to students at the beginning of the clinical year so they know what standard they need to meet. When PAEA publishes its national means and standard deviations ahead of each new version, we use that information to calculate and update the table of passing scores that corresponds to each unique End of Rotation exam form. The passing score is currently calculated using the unique means and standard deviations for each End of Rotation exam form (mean minus 1.4 x SD).
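As a companion sketch, the passing-score table itself is a direct calculation from the published statistics; the means and standard deviations below are placeholders for the values PAEA publishes for each exam form:

```python
# Placeholder national statistics (mean, SD) per End of Rotation exam form;
# substitute the values PAEA publishes ahead of each new version.
national_stats = {
    "Emergency Medicine": (1500, 100),
    "Family Medicine": (1510, 95),
    "Surgery": (1490, 105),
}

Z_CUTOFF = 1.4  # passing bar set 1.4 standard deviations below the mean

passing_scores = {
    exam: round(mean - Z_CUTOFF * sd)
    for exam, (mean, sd) in national_stats.items()
}
print(passing_scores)
# {'Emergency Medicine': 1360, 'Family Medicine': 1377, 'Surgery': 1343}
```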

We have a proactive remediation program for students who fall below the passing bar. Because we publish the passing bar at the beginning of each year, students know right away if they have failed an exam. If a student fails an End of Rotation exam, they meet with our clinical evaluation faculty to go over their results, including keyword feedback, and determine the most appropriate remediation plan. In addition to study, the plan typically includes using the End of Rotation exam’s keyword feedback to build two 60-question self-assessment quizzes in our online question bank service and achieving an agreed-upon score on them. Once the remediation is complete, the student must take the second form of the End of Rotation exam within 10 days.

How did your program decide on this method?

We did not want to set an arbitrary passing bar or use the mean as a passing bar. We chose the Z-score method (and the deliberative process that goes with it) as it tells us how far from the mean the score is (the number of standard deviations) and allows us to look at exams with different means on the same metric. We do a combined grade for the student’s SCPE assessment using a number of components (such as the preceptor evaluation and their End of Rotation exam). Students must pass every component of the course grade to pass the SCPE.

How long has your program utilized this method?

We’ve used this method from the very beginning (2014). The method doesn’t change, but the passing bar could be different year-to-year.

If you changed methods to account for scale scores, how did you adjust? What rationale did you use to justify the change, both within your program and with students?

We are using the scale score calculator so that we can stay with the previously set raw-score passing bar for the rest of this class, as there are only two SCPEs left. We’ll switch to scale scores when we implement the new scale-based passing bar at the start of the next cohort’s clinical year. We will use the exact same method to determine and publish the passing bar, but we will publish scale scores going forward. This will be easier for both faculty and students, since there is only one passing scale score for each exam, as opposed to a separate raw-score passing bar for each individual exam form. We also believe it will help students understand scale scores as they approach the PANCE.

(Editor’s Note: For any questions, please contact Jennie Coombs at University of Utah.)

Non-Compensatory Pass/Fail

A program located at an academic medical center that has been using End of Rotation™ exams since 2015

Please describe the model your program uses to grade End of Rotation exams. Include how this fits into your broader SCPE evaluation.

(Editor’s Note: In practice, the model this program uses is very similar to the previous pass/fail case. The differences are in the process and implementation plan.)

We grade End of Rotation exams using a pass/fail system. Our goal is to set a passing bar for each End of Rotation exam that is just high enough to identify students who have not yet picked up adequate knowledge from a core rotation to be on track to pass the PANCE. We validated this approach in part through a regression analysis using Version 1 (and later adding Version 2) scores to predict PANCE scores. We found that End of Rotation exam scores account for a substantial portion of the variance in PANCE scores for students. (That analysis, which included data from several PA programs, has been published in JPAE.)
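For illustration only, a regression of this general shape could be sketched as follows; the data is invented, and the published JPAE analysis should be consulted for the actual model, sample, and results:

```python
import numpy as np

# Invented example data: each student's mean End of Rotation exam score
# and PANCE score. A real analysis would use actual program data.
eor_mean = np.array([390.0, 402.0, 415.0, 378.0, 408.0, 395.0, 420.0, 385.0])
pance = np.array([430.0, 460.0, 510.0, 400.0, 480.0, 450.0, 530.0, 415.0])

# Simple least-squares fit of PANCE score on mean End of Rotation score.
slope, intercept = np.polyfit(eor_mean, pance, deg=1)
predicted = slope * eor_mean + intercept

# R^2: the proportion of variance in PANCE scores explained by the fit.
ss_res = np.sum((pance - predicted) ** 2)
ss_tot = np.sum((pance - pance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope={slope:.2f}, intercept={intercept:.1f}, R^2={r_squared:.2f}")
```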

Starting with Version 2 of the End of Rotation exams, we have chosen the passing scores for each exam by comparing the score distribution of our own students with the national score distribution. Our academic coordinator chooses a multiple of the national standard deviation (usually between 0.5 and 1.5) that is subtracted from the national mean to calculate the passing score for each End of Rotation exam version. This bar is calibrated so that only “outlier” scores will be identified as failing. (Note that our goal is to identify students whose scores are local outliers based on our previous years of graduated students, not necessarily national outliers — that’s why we consider both internal and national score distributions.)

Additionally, our passing scores for End of Rotation exams taken in the first half of the clinical year are intentionally set lower than our passing scores for exams taken in the second half of the clinical year, since most students’ scores on all core End of Rotation exams increase as they build their comprehensive fund of knowledge throughout the clinical phase of their education. (For example, we might set the first-half passing score for an End of Rotation exam at the national mean minus 1.5 SD, and the second-half passing score for the same exam at the national mean minus 1.0 SD, which we have found identifies the lowest performers.)
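A sketch of how such passing scores might be computed, assuming hypothetical national statistics and multipliers; the actual multiplier is a faculty judgment informed by comparing the program’s own score distribution with the national one:

```python
# Hypothetical national statistics for one End of Rotation exam version.
NATIONAL_MEAN = 1500
NATIONAL_SD = 100

# Multipliers chosen by the academic coordinator (typically 0.5-1.5),
# with a lower bar for the first half of the clinical year.
MULTIPLIERS = {"first_half": 1.5, "second_half": 1.0}

passing_scores = {
    phase: NATIONAL_MEAN - k * NATIONAL_SD
    for phase, k in MULTIPLIERS.items()
}
print(passing_scores)  # {'first_half': 1350.0, 'second_half': 1400.0}
```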

In order to pass each SCPE, our students must pass the End of Rotation exam for that subject area and must also meet a number of other specific requirements, such as completing logging requirements, online interactive cases (for certain rotations), and evaluations. All SCPEs are graded pass/fail. Preceptor evaluations of student performance provide us with valuable information, but they are not used by our program as part of the pass/fail decision because that approach introduces a degree of bias and subjectivity that we consider unacceptable for a high-stakes assessment.

How did your program decide on this method?

This method is an extension of the method our program has used for over 20 years. Before the PAEA End of Rotation exams with their national score distributions were available, we used student data from the most recent five to eight years of scores on internally developed rotation exams to set the passing scores. We continue to use our internal score distribution data to set passing scores for elective rotation exams.

If you changed methods to account for scale scores, how did you adjust? What rationale did you use to justify the change, both within your program and with students?

We used the same general method to set specific passing scores on the scale-score metric. The distributions of the scale scores are different from those of the raw scores, so we took a practical “trial and error” approach to calibrating the passing scores. Specifically, we used the online scale score conversion tool to check that the standard-deviation-based cutoff we set on the new scale corresponds as closely as possible to the appropriate “raw” failing scores on Versions 2 through 4 of the End of Rotation exams, which had been taken by students who have since graduated and taken the PANCE. In other words, we identified passing bars on the new scale that, in retrospect, would have failed the “correct” number of students on each exam.
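One way to express that retrospective calibration in code, assuming graduates’ raw scores have already been run through PAEA’s conversion tool and recorded as scale scores; the scores, the passing convention (pass means at or above the bar), and the helper function below are hypothetical:

```python
# Hypothetical scale scores for graduated students on one exam version,
# obtained by converting their raw scores with the conversion tool.
scale_scores = [1610, 1455, 1380, 1525, 1340, 1490, 1570, 1410]

# Scale scores of the students who fell below the old raw-score passing
# bar; in practice this comes from the program's historical records.
previously_failing = {1380, 1340}


def calibrate_scale_bar(scores, failing_scores):
    """Pick a scale-score passing bar that would have failed exactly the
    students who failed under the old raw-score bar (pass = at or above
    the bar). Returns the midpoint of the usable range."""
    lowest_pass = min(s for s in scores if s not in failing_scores)
    highest_fail = max(failing_scores)
    assert highest_fail < lowest_pass, "old and new outcomes are inconsistent"
    # Any bar above highest_fail and at or below lowest_pass reproduces
    # the old pass/fail outcome; split the difference.
    return (highest_fail + lowest_pass) / 2


print(calibrate_scale_bar(scale_scores, previously_failing))  # 1395.0
```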

Communicating with students about the change:

The new scale score passing bars were added to the table of passing scores that we had already published for our current clinical phase students. The table has the raw score passing bars for Version 5, and we added the scale score passing bars for Version 6. We notified students by email that the PAEA End of Rotation exams were moving to scale scores, but that we were keeping the same general approach for identifying any given score as “pass” or “fail.” We consciously tried not to make a big deal out of the change, as it really shouldn’t affect students’ lives very much. The methodology is the same, but the numbers are different, just as they would be for moving between different versions in any other year.

Future plan:

We will re-evaluate how the scale score passing bars are functioning in three or four months, at which point we may decide to adjust them. When PANCE scores are available for this cohort of students, we will have a better sense of whether the pass/fail point is set appropriately in light of our goal of only “catching” students who are at risk.

If the system is working well, we will be able to keep the same scale score passing bars across multiple years of exams and multiple student cohorts (since all the adjustments for difficulty differences and differently-shaped score distributions are done behind the scenes by the psychometricians who calculate the scale scores).

More Questions? Exam Support is available 8:00 a.m. – 8:00 p.m. ET, Monday – Friday.

866-749-7601 | exams@PAEAonline.org

 
