TY - JOUR
T1 - Reliability of participant classification in sport and exercise science
T2 - application of McKay et al.’s (2022) framework
AU - Wilkins, Luke
AU - Broadbent, David
AU - Bruce, Lyndell
AU - Champion, Luke
AU - Kittel, Aden
AU - MacMahon, Clare
AU - Pickering, Todd
AU - Steel, Kylie A.
AU - Wirtz, Svenja
PY - 2025
Y1 - 2025
N2 - Accurately classifying samples within sports and exercise science (SES) research has significant implications for how findings are interpreted and applied. Key to this are clear and sufficiently detailed “Participants” sections in manuscripts, and frameworks that provide structure for the classification process. The primary aim of this study was to evaluate the inter- and intra-rater reliability of sample classifications made by four experienced academics who applied McKay et al.’s (2022) Participant Classification Framework (PCF) to 130 SES manuscripts. Weighted Cohen’s kappa analyses found inter-rater reliabilities ranging from 0.34 (fair agreement) to 0.74 (substantial), and intra-rater reliabilities ranging from 0.54 (moderate) to 0.90 (almost perfect), evidencing strong internal reliability and reproducible PCF classifications. “Tier 0” papers had the highest inter-rater agreement, whilst “Tier 5” papers and those with multiple classifications had the lowest. Studies that failed to report sample size and sport type were more frequently classified as “unclear”, whilst ambiguous sex distribution also proved problematic. The findings suggest that current participant reporting standards in the field are insufficient to support consistent application of the PCF. To facilitate the future utility of the PCF and improve the clarity and comparability of SES research, we propose nine “Key Criteria for Classifying SES Research Samples”.
AB - Accurately classifying samples within sports and exercise science (SES) research has significant implications for how findings are interpreted and applied. Key to this are clear and sufficiently detailed “Participants” sections in manuscripts, and frameworks that provide structure for the classification process. The primary aim of this study was to evaluate the inter- and intra-rater reliability of sample classifications made by four experienced academics who applied McKay et al.’s (2022) Participant Classification Framework (PCF) to 130 SES manuscripts. Weighted Cohen’s kappa analyses found inter-rater reliabilities ranging from 0.34 (fair agreement) to 0.74 (substantial), and intra-rater reliabilities ranging from 0.54 (moderate) to 0.90 (almost perfect), evidencing strong internal reliability and reproducible PCF classifications. “Tier 0” papers had the highest inter-rater agreement, whilst “Tier 5” papers and those with multiple classifications had the lowest. Studies that failed to report sample size and sport type were more frequently classified as “unclear”, whilst ambiguous sex distribution also proved problematic. The findings suggest that current participant reporting standards in the field are insufficient to support consistent application of the PCF. To facilitate the future utility of the PCF and improve the clarity and comparability of SES research, we propose nine “Key Criteria for Classifying SES Research Samples”.
KW - key criteria for classifying SES research samples
KW - participant classification
KW - reliability analysis
KW - sample reporting
UR - https://www.scopus.com/pages/publications/105018775953
UR - https://go.openathens.net/redirector/westernsydney.edu.au?url=https://doi.org/10.1080/02640414.2025.2567783
U2 - 10.1080/02640414.2025.2567783
DO - 10.1080/02640414.2025.2567783
M3 - Article
AN - SCOPUS:105018775953
SN - 0264-0414
VL - 43
SP - 2914
EP - 2926
JO - Journal of Sports Sciences
JF - Journal of Sports Sciences
IS - 23
ER -