Quality Measurements of Error Annotation - Ensuring Validity Through Reliability

The European English Messenger, Vol. 24.1, pp. 36-42.

9 pages. Posted: 28 Mar 2015


Nikola Dobric

Alpen-Adria-University Klagenfurt - Institut für Anglistik und Amerikanistik

Date Written: 2015

Abstract

Major obstacles to achieving high levels of reliability (and, by extension, validity) in the error annotation of learner corpora range from the difficulty of defining errors in general, through the lack of an error taxonomy sufficiently applicable to corpus annotation and the insufficiency of any set linguistic norm as a background for tagging, to the lack of well-defined measurements of annotation quality. The paper first examines the theoretical issues behind the definition of an error. It then expands the discussion by focusing on a more practically applicable account of errors aimed at error annotation. It goes on to offer a more robust error taxonomy that could help address the issues of interpretability inherent in linguistic categorization and ensure greater consistency. Finally, the paper suggests an alternative definition of an error applicable to corpus annotation, based on inter-annotator agreement and intended to serve as the primary indicator of validity.
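The proposed redefinition rests on inter-annotator agreement as the measurable core of annotation quality. As a purely illustrative sketch (the paper itself does not prescribe a specific metric or tool), the following Python function computes Cohen's kappa, one common chance-corrected agreement measure, for two hypothetical annotators tagging the same tokens with error categories:

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators over the same items."""
        assert len(labels_a) == len(labels_b) and labels_a, "need paired labels"
        n = len(labels_a)
        categories = set(labels_a) | set(labels_b)
        # Observed agreement: proportion of items both annotators tagged identically.
        p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected agreement: chance overlap given each annotator's label distribution.
        p_expected = sum(
            (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
        )
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical example: two annotators assigning error categories to five tokens.
    annotator_1 = ["grammar", "lexis", "grammar", "none", "lexis"]
    annotator_2 = ["grammar", "grammar", "grammar", "none", "lexis"]
    print(cohens_kappa(annotator_1, annotator_2))  # ≈ 0.69, substantial agreement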

Keywords: reliability, validity, consistency, error analysis, error taxonomy, learner corpora, scope, substance, dependency annotation, inter-annotator agreement, intra-annotator agreement

Suggested Citation

Dobric, Nikola, Quality Measurements of Error Annotation - Ensuring Validity Through Reliability (2015). The European English Messenger, Vol. 24.1, pp. 36-42. Available at SSRN: https://ssrn.com/abstract=2585416

Nikola Dobric (Contact Author)

Alpen-Adria-University Klagenfurt - Institut für Anglistik und Amerikanistik ( email )

Universitätsstrasse 65-67
Klagenfurt, Carinthia 9020
Austria

HOME PAGE: http://www.uni-klu.ac.at/iaa/inhalt/2512.htm
