The inter-rater reliability of the Oral Language Proficiency Scale

Date of Award




Degree Name

Doctor of Philosophy (Ph.D.)

First Committee Member

John H. Croghan - Committee Chair


This dissertation studied the inter-rater reliability of the Oral Language Proficiency Scale used by Dade County Public Schools to assess the language proficiency of students in the English for Speakers of Other Languages (ESOL) program. Twenty-two students' pre-taped responses to one interviewer were used for the study; Cohen's kappa for rater agreement, Fleiss' intraclass correlation coefficient, and Kendall's tau coefficient were computed for the statistical analysis of the data. Sixteen raters were selected using a stratified random sampling plan, and six of the sixteen participated in the study. The raters assigned scores (levels 1-5) to each response using a frame of reference (Description of ESOL Levels). The average of the assigned scores was used to determine each student's level according to a chart on the scoring sheet. The inter-rater reliability measure obtained was $r = .63$, which is considered low. Consequently, the study's hypothesis, that the inter-rater reliability for placement, level assignment, and exiting the ESOL program would exceed a correlation coefficient of .84, was rejected. Discussion of the reasons for the low correlation coefficient focused on the raters. Recommendations for improving rater skills and a format for further research were suggested.
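To illustrate the first of the agreement statistics named above, the following is a minimal sketch of Cohen's kappa for two raters assigning nominal levels to the same set of responses. The ratings shown are hypothetical examples, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal categories)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category proportions.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    # Kappa corrects observed agreement for agreement expected by chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical level assignments (1-5) for ten taped responses.
a = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
b = [1, 2, 3, 3, 3, 2, 4, 5, 5, 5]
print(round(cohens_kappa(a, b), 3))  # → 0.615
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why it is preferred over raw percent agreement when rating categories are unevenly used.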


Education, Language and Literature; Education, Tests and Measurements

Link to Full Text