PACKRAT®: Scoring and Interpretation

PACKRAT is administered more than 20,000 times each year. National comparative data are available for each version of PACKRAT once 300 students have taken it, and those data are updated weekly as more students take the exam. Data are available for students more than 10 months from graduation, students fewer than 10 months from graduation, and all students who took each version.

PACKRAT is designed as a self-assessment tool for students. The exam may be given proctored or unproctored, as either an open-book or a closed-book examination, and programs can set their own time limit for the exam. For these reasons, it is recommended that PACKRAT be used as a self-assessment tool; PAEA does not recommend, and does not intend to recommend, a passing grade or interpretation method for the PACKRAT.

Interpreting Cohort-Level Data

Detailed program reports are designed to help programs evaluate trends in knowledge, strengths, and deficits across entire classes of students and to be used with other data points to inform program-level curricular decision-making. PACKRAT data can be analyzed from cohort to cohort over time and, from an assessment standpoint, provide valuable information for program faculty and administration to reflect upon.

Interpret with Caution

Data on specific content and task areas (subscales) should be interpreted with caution because these subscales represent a small number of questions with varying levels of difficulty. However, the data have potential value when a content or task area is assessed across multiple exams. For example, pulmonology scores on PACKRAT, End of Rotation exams, program summative evaluations, and PANCE can be compared. If the triangulated scores appear lower than expected (or, in the case of PACKRAT and PANCE, lower than the national mean), program faculty may want to review the curriculum on pulmonary diseases.
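
The sketch below shows one way a program might triangulate a single content area across these assessments. It is a minimal illustration only; the data structure, score fields, and all numbers are hypothetical placeholders, not actual program or national results.

    # Rough sketch: triangulating pulmonology performance across assessments.
    # All values are hypothetical placeholders, not actual results.
    pulmonology = [
        # (assessment, cohort average % correct, national mean % where available)
        ("PACKRAT",              62.0, 68.0),
        ("End of Rotation",      66.0, None),
        ("Summative evaluation", 70.0, None),
        ("PANCE",                64.0, 71.0),
    ]

    # Flag the assessments on which the cohort falls below the national mean
    below_national = [
        name for name, cohort, national in pulmonology
        if national is not None and cohort < national
    ]

    if below_national:
        print("Cohort below national mean on:", ", ".join(below_national))
        print("Consider reviewing the pulmonary curriculum.")

In practice, the comparison targets would come from the program’s own expectations and the published national comparative data rather than the placeholder values shown here.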

Make Data-Driven Decisions

Another valuable use of PACKRAT data is to analyze scores from cohort to cohort. With careful interpretation, programs can use PACKRAT data to set benchmarks at the individual student level, comparing a student’s score to the average of previous cohorts’ scores. This can help individual students develop a study plan focused on their areas of weakness. Reports can be used as program assessment tools as well: at the program level, faculty can see whether students’ scores are steadily increasing or decreasing and use this information as one part of the curricular evaluation process. However, interpreting PACKRAT data over time is not the same as interpreting End of Rotation and PANCE data, because those exams are reported on standardized score scales. PACKRAT results should instead be compared over time relative to each version’s national mean and standard deviation.
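
As a minimal sketch of that comparison, a program could standardize each cohort average against its version’s national mean and standard deviation (a z-score) before looking at trends. The version labels and all numbers below are illustrative placeholders, not actual PACKRAT statistics.

    # Minimal sketch: standardizing cohort averages against each PACKRAT
    # version's national mean and standard deviation before comparing them.
    # All numbers are illustrative placeholders, not actual PACKRAT data.

    national_stats = {
        "Version A": {"mean": 140.0, "sd": 20.0},
        "Version B": {"mean": 136.0, "sd": 21.0},
    }

    cohort_means = {  # cohort average raw scores (out of 225)
        "Version A": 144.0,
        "Version B": 139.0,
    }

    def z_score(raw, mean, sd):
        """Standard deviations above (+) or below (-) the national mean."""
        return (raw - mean) / sd

    for version, raw in cohort_means.items():
        stats = national_stats[version]
        z = z_score(raw, stats["mean"], stats["sd"])
        print(f"{version}: cohort mean {raw:.1f} -> z = {z:+.2f}")

    # With these placeholder numbers:
    # Version A: cohort mean 144.0 -> z = +0.20
    # Version B: cohort mean 139.0 -> z = +0.14

On this standardized basis, similar z-values across versions suggest comparable relative performance even when the raw cohort means differ.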

Student Performance Data

The student score report allows students and programs to see where individual students fall compared to other PA students taking the same standardized exam nationwide.

  • Scoring breakdown. Students are provided a report that indicates their individual overall score (out of 225), a breakdown of individual performance by content area and task area, and national comparative data for overall scores and for each category.
  • Keyword feedback. The keyword feedback identifies the concepts missed. Students can use it to spot trends in knowledge deficits and as a tool to create a focused study plan, drawing on the content categories, task areas, and diagnosis keywords to develop individualized learning objectives that focus their studying.
    Note: If multiple questions share the same diagnosis and task area coding, the keyword appears only once, so the number of keywords may not exactly match the number of missed questions.

Important Considerations

A range of circumstantial factors may influence student performance on PACKRAT exams, leading to differences in performance within or among programs. Some of these factors apply to any examination, while others may apply more specifically to PACKRAT and comparisons to national data. Consider:

  • Student motivation. PACKRAT is a self-assessment exam and is not to be used for grading purposes. If an exam carries no weight in a student’s overall grade, that can affect the student’s effort and exam performance.
  • Timing of the administration. PACKRAT may be administered multiple times and at any time, but it is intended to mark important transition points for PA students, most commonly the end of the didactic phase and the end of the clinical phase of training. PACKRAT scores reflect the number correct out of 225 and the percent of questions answered correctly relative to first-time PACKRAT takers, so students and programs should use caution when comparing second-time test takers to the national averages.
  • Testing administration. PACKRAT may be given proctored or unproctored, closed book or open book. With more than 20,000 administrations a year, these differences should not affect the national data by the end of the year, but they should be considered when reviewing student or cohort performance, particularly in the first few months, when administration numbers are still low.
  • Comparative data are version-specific. National comparative data are reported by specific PACKRAT version. Although each version is built to the same blueprint specifications and targets using the same process year over year, one version may be slightly harder or easier for a variety of reasons. As a result, a 130 on one version and a 130 on the next version mean something slightly different.

National Historical Data

Because PACKRAT is written in its entirety each year, rather than drawn from a bank of items as the End of Rotation and End of Curriculum exams are, interpretive statistics can vary from year to year. A table of historical PACKRAT national statistics can be found here.

Contact Us

Do you suspect someone has engaged in inappropriate behavior on a PAEA Assessment exam? Report it to EthicsPoint.

https://paeaonline.ethicspoint.com
844-920-1190

Questions? Exam Support is available 8:00 a.m. – 8:00 p.m. ET, Monday – Friday.