Title: Medical Education
Session Type: Abstract Submissions (ACR)
There is debate about whether an objective structured clinical examination (OSCE) should be part of competency testing in musculoskeletal ultrasound (MSUS), and the reliability and validity of this approach have not been established. We aimed to determine the reliability and validity of an OSCE for MSUS.
A 9-station OSCE was administered to 35 rheumatology fellows after an 8-month training program in MSUS, and to 3 expert faculty members as a control group. Participants were unaware of which joints were abnormal (n=5: wrist, ankle, elbow, finger, toe) and which were normal (n=4: wrist, ankle, knee, shoulder). Expert faculty in MSUS (n=9) served as both proctors and assessors, grading image quality at the OSCE stations using a predefined checklist and a global rating (0-5 scale, where 2 is barely passing). At each station, a proctor observed and graded the study as it was performed. Each resulting ultrasound image was later graded by two assessors blinded to who had performed the study; the same assessors graded the normal and abnormal wrist and ankle stations. Inter-rater reliability for assessors and proctors was estimated using the intraclass correlation coefficient (ICC), and the borderline group method was used to set the overall passing score. A summative 76-item multiple-choice question (MCQ) test assessed the knowledge fellows need to interpret ultrasound images. Correlation between MCQ and OSCE performance (concurrent validity) was assessed with the Pearson correlation coefficient. Construct validity was assessed by comparing the fellows' OSCE results with those of the faculty (gold standard).
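The two headline analyses above can be sketched in a few lines of SciPy. This is an illustrative sketch only, not the authors' code: the scores below are made-up placeholders, and the sample sizes are shrunk for readability.

```python
# Hedged sketch of the abstract's statistical comparisons (not the study's
# actual analysis code or data). Uses scipy.stats for both tests.
from scipy.stats import pearsonr, ranksums

# Hypothetical MCQ scores (%) and OSCE global ratings (0-5) for 10 fellows
mcq = [55, 60, 62, 65, 68, 70, 72, 75, 80, 85]
osce = [2.8, 3.0, 2.9, 3.1, 3.2, 3.1, 3.3, 3.4, 3.5, 3.6]

# Concurrent validity: Pearson correlation between MCQ and OSCE performance
r, p = pearsonr(mcq, osce)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Wilcoxon rank-sum test: MCQ scores of fellows who failed vs. passed the OSCE
failed = [55, 60, 62]
passed = [65, 68, 70, 72, 75, 80, 85]
stat, p2 = ranksums(failed, passed)
print(f"Rank-sum statistic = {stat:.2f}, p = {p2:.3f}")
```

The inter-rater reliability analysis (ICC) would typically be computed from a raters-by-subjects rating table with a two-way model, e.g. via `pingouin.intraclass_corr`; it is omitted here for brevity.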
Inter-rater reliability was good (ICC=0.7) between the assessors, but was poor (ICC=0.3) between the assessors and the proctors. Reliability of the assessor scores was good in the normal wrist and ankle stations (ICC=0.7), and moderate in the abnormal wrist and ankle stations (ICC=0.4).
MCQ grades correlated strongly with OSCE grades from both the assessors (r=0.52; p<0.01) and the proctors (r=0.58; p<0.01). The average MCQ score of the 5 fellows who failed the OSCE was lower than that of the 30 who passed (60% vs. 71%; p=0.04, Wilcoxon rank-sum test).
Fellows in the bottom quartile of the MCQ scored 3.07 on the OSCE, significantly worse than both the top-quartile fellows (3.32) and the faculty (3.29) (p<0.01, Wilcoxon signed-rank test). Scores also significantly discriminated bottom-quartile fellows from faculty in the normal wrist and ankle stations (3.38 vs. 3.78, p<0.01), but not in the abnormal stations (3.37 vs. 3.49, p=0.08).
The MSUS OSCE is a reliable and valid method for evaluating MSUS skill when graded by blinded assessors. Proctor grading is less reliable and adds to the cost of a practical MSUS examination. Normal-joint stations are more reliable than abnormal-joint stations and better discriminate poorly performing fellows from faculty, likely because assessors are less certain of the optimal appearance of abnormal joints and therefore have more difficulty scoring the resulting images accurately.
E. Y. Kissin,
P. C. Grayson,
A. C. Cannella,
A. M. Evangelisto,
J. R. Goyal,
R. Al Haj,
J. B. Higgs,
D. G. Malone,
M. J. Nishio,
G. S. Kaeley
2012 ACR/ARHP Annual Meeting
ACR Meeting Abstracts - https://acrabstracts.org/abstract/musculoskeletal-ultrasound-objective-structured-clinical-examination-an-assessment-of-the-test/