Measuring Expert Performance at Manually Classifying Domain Entities Under Upper Ontology Classes

19 Pages. Posted: 12 Sep 2018. First Look: Accepted.

Robert Stevens

University of Manchester - School of Computer Science

Phillip Lord

University of Newcastle - School of Computing

James Malone

Cambridge Science Park - FactBio

Nicolas Matentzoglu

University of Manchester - School of Computer Science

Abstract

Background. Classifying entities in domain ontologies under upper ontology classes is a recommended task in ontology engineering, intended to facilitate semantic interoperability and modelling consistency. Integrating upper ontologies this way is difficult and, despite emerging automated methods, remains a largely manual task.

Problem. Little is known about how well experts perform at upper ontology integration. To develop methodological and tool support, we first need to understand how well experts do this task. We designed a study to measure the performance of human experts at manually classifying classes in a general knowledge domain ontology with entities in the Basic Formal Ontology (BFO), an upper ontology used widely in the biomedical domain.

Method. We recruited 8 BFO experts and asked them to classify 46 commonly known entities from the domain of travel with BFO entities. The tasks were delivered as part of a web survey.

Results. We find that, even for a well-understood general knowledge domain such as travel, the results of the manual classification tasks are highly inconsistent: the mean agreement of the participants with the classification decisions of an expert panel was only 51%, and the inter-rater agreement using Fleiss' Kappa was merely moderate (0.52). We further follow up on the conjecture that the degree of classification consistency is correlated with the frequency with which the respective BFO classes are used in practice, and find that this is only true to a moderate degree (0.52, Pearson).

Conclusions. We conclude that manually classifying domain entities under upper ontology classes is indeed very difficult to do correctly. Given the importance of the task and the high degree of inconsistent classifications we encountered, we further conclude that it is necessary to improve the methodological framework surrounding the manual integration of domain and upper ontologies.
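The paper's raw response data is not reproduced here, but the inter-rater agreement statistic it reports, Fleiss' Kappa, can be sketched in plain Python. The labels and ratings below are hypothetical; the setup mirrors the study's shape (several raters each assigning one category per item):

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, each rated by the same
    number of raters; every item is a list of category labels."""
    n = len(ratings[0])                       # raters per item
    N = len(ratings)                          # number of items
    cats = sorted({c for item in ratings for c in item})
    counts = [Counter(item) for item in ratings]
    # Observed agreement: mean pairwise agreement per item
    P_i = [(sum(c[cat] ** 2 for cat in cats) - n) / (n * (n - 1))
           for c in counts]
    P_bar = sum(P_i) / N
    # Expected agreement from the marginal category proportions
    p_j = [sum(c[cat] for c in counts) / (N * n) for cat in cats]
    P_e = sum(p ** 2 for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: two raters, perfect agreement on two items
print(fleiss_kappa([["continuant", "continuant"],
                    ["occurrent", "occurrent"]]))   # → 1.0
```

A kappa of 0.52, as reported, falls in the commonly used "moderate" band (0.41–0.60) of the Landis and Koch scale.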

Keywords: OWL, ontologies, upper ontologies, ontology engineering, empirical study

Suggested Citation

Stevens, Robert and Lord, Phillip and Malone, James and Matentzoglu, Nicolas, Measuring Expert Performance at Manually Classifying Domain Entities Under Upper Ontology Classes (September 12, 2018). Journal of Web Semantics First Look. Available at SSRN: https://ssrn.com/abstract=3248494 or http://dx.doi.org/10.2139/ssrn.3248494

Robert Stevens (Contact Author)

University of Manchester - School of Computer Science ( email )

Kilburn Building, Oxford Road
Manchester, M13 9PL
United Kingdom

Phillip Lord

University of Newcastle - School of Computing ( email )

5 Barrack Road
Newcastle, NE1 7RU
United Kingdom

James Malone

Cambridge Science Park - FactBio ( email )

Cambridge, CB4 0EY
United Kingdom

Nicolas Matentzoglu

University of Manchester - School of Computer Science ( email )

Oxford Road
Manchester, M13 9PL
United Kingdom

Paper statistics

Abstract Views: 118
Downloads: 5
