Quality Measurements of Error Annotation - Ensuring Validity Through Reliability
The European English Messenger, Vol. 24.1, pp. 36-42.
9 Pages Posted: 28 Mar 2015
Date Written: 2015
Major obstacles to achieving high levels of reliability (and, by extension, validity) in the error annotation of learner corpora include the difficulty of defining errors in general, the lack of an error taxonomy sufficiently applicable to corpus annotation, the inadequacy of any fixed linguistic norm as a background for tagging, and the absence of well-defined measures of annotation quality. The paper first examines the theoretical issues behind the definition of an error. It then expands the discussion by focusing on a more practically applicable account of errors aimed at error annotation, and goes on to propose a more robust error taxonomy that could help address the issues of interpretability inherent in linguistic categorization and ensure greater consistency. Finally, the paper suggests an alternative definition of an error applicable to corpus annotation, based on inter-annotator agreement and intended to serve as the primary indicator of validity.
Keywords: reliability, validity, consistency, error analysis, error taxonomy, learner corpora, scope, substance, dependency annotation, inter-annotator agreement, intra-annotator agreement