Intertester reliability: reporting on assessment methods in interdisciplinary units

David Newlyn, Liesel Spencer

Research output: Contribution to journal › Article

Abstract

This paper reports on a study of 'intertester reliability'. One of the aims of the study is to identify a marking method of optimum reliability so as to improve consistency between multiple markers. A brief survey of the literature suggests that criterion-referenced assessment implemented by means of a marking rubric is a superior form of managing formal assessment. The matters under investigation are confined to a comparison of the (intertester) reliability of marking results obtained using three different marking methods, a finding as to which method produces the greatest range of results, and a comparison of the time taken by markers using those methods. The results could, however, have secondary utility in considering the reliability of marking rubrics as a means of implementing criterion-referenced assessment, particularly in large groups with multiple markers. The findings of this paper show no statistically significant (p < 0.05) difference in the reliability of results, the range of results, or the time taken across the three methods employed.
Original language: English
Number of pages: 19
Journal: Journal of the Australasian Law Teachers Association: JALTA
Publication status: Published - 2008

Keywords

  • criterion-referenced tests
  • education, higher
  • educational tests and measurements
  • examinations
  • grading and marking (students)
  • scoring
