Abstract
This paper reports on a study of 'intertester reliability'. One of the aims of the study is to identify a marking method of optimum reliability, so as to improve consistency between multiple markers. A brief survey of the literature suggests that criterion-referenced assessment, implemented by means of a marking rubric, is a superior way of managing formal assessment. The matters under investigation are confined to a comparison of the (intertester) reliability of marking results obtained using three different marking methods, a finding as to which method produces the greatest range of results, and a comparison of the time taken by markers using those methods. The results could, however, have secondary utility in considering the reliability of marking rubrics as a means of implementing criterion-referenced assessment, particularly in large groups with multiple markers. The findings of this paper show that there is no statistically significant (p < 0.05) difference in the reliability of results, the range of results, or the time taken under any of the three methods.
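For readers wanting a concrete picture of the kind of comparison the abstract describes, the sketch below illustrates one possible way to quantify intertester reliability and the range of results for several marking methods. It is not taken from the paper, which does not publish its procedure or data: the scores are invented placeholders, and the statistical choices (mean pairwise correlation among markers as a reliability proxy, a Kruskal–Wallis test on per-script mark ranges) are assumptions made purely for illustration.

```python
# Illustrative sketch only; data and statistical choices are assumptions,
# not the study's actual method or results.
from itertools import combinations

import numpy as np
from scipy import stats


def mean_pairwise_correlation(scores: np.ndarray) -> float:
    """Reliability proxy: average Pearson correlation between every pair of markers.

    scores: array of shape (n_markers, n_scripts), marks awarded by each marker.
    """
    pairs = combinations(range(scores.shape[0]), 2)
    return float(np.mean([stats.pearsonr(scores[i], scores[j])[0] for i, j in pairs]))


rng = np.random.default_rng(0)
# Hypothetical marks: 4 markers x 20 scripts for each of three marking methods.
methods = {name: rng.normal(65, 10, size=(4, 20)) for name in ("A", "B", "C")}

# Reliability proxy per method.
reliability = {name: mean_pairwise_correlation(s) for name, s in methods.items()}
print("Per-method intertester reliability (mean pairwise r):", reliability)

# Range of results per script (highest minus lowest mark awarded), per method.
ranges = {name: s.max(axis=0) - s.min(axis=0) for name, s in methods.items()}

# Do the per-script ranges differ significantly between methods? (alpha = 0.05)
h_stat, p_value = stats.kruskal(*ranges.values())
print(f"Kruskal-Wallis on mark ranges: H={h_stat:.2f}, p={p_value:.3f}")
```

With placeholder data of this kind, a p-value above 0.05 would correspond to the paper's finding of no statistically significant difference between methods; the actual study's data, measures, and tests may differ.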
| Original language | English |
|---|---|
| Number of pages | 19 |
| Journal | Journal of the Australasian Law Teachers Association: JALTA |
| Publication status | Published - 2008 |
Keywords
- criterion-referenced tests
- education, higher
- educational tests and measurements
- examinations
- grading and marking (students)
- scoring