Ontology Revision Based on Pre-Trained Language Models: Algorithms and Experiments

41 Pages · Posted: 19 May 2025

Qiu Ji

Nanjing University of Posts and Telecommunications

Guilin Qi

Southeast University - School of Computer Science and Engineering

Xiaoping Zhang

China Academy of Chinese Medical Sciences

Yuxin Ye

Jilin University (JLU)

Jiaye Li

affiliation not provided to SSRN

Keyu Wang

University of Tübingen

Yang Sheng

Nanjing University of Posts and Telecommunications

Abstract

Ontology revision aims to seamlessly incorporate a new ontology into an existing one and plays a crucial role in ontology evolution, ontology maintenance, and ontology alignment. As in single-ontology repair, resolving logical incoherence is essential in ontology revision, since incoherence can lead to inconsistency, which in turn yields meaningless reasoning results. Various ontology revision approaches have been proposed to tackle this issue, focusing on defining revision operators and designing axiom ranking strategies. However, most existing approaches overlook axiom semantics, which provides critical information for distinguishing axioms. Meanwhile, pre-trained language models (PLMs) have demonstrated strong capabilities in encoding semantic information and have been widely applied in natural language processing and ontology-related tasks. In this paper, we explore how PLMs can be leveraged for ontology revision. We first define four scoring functions that rank axioms with a PLM, incorporating various kinds of ontology-related information. We then propose an ontology revision algorithm that resolves unsatisfiable concepts holistically and, to further improve efficiency, an adapted algorithm that processes unsatisfiable concepts in groups. Experiments on 19 ontology pairs show that the adapted revision algorithm significantly improves efficiency compared with existing methods, reducing processing time by up to 90% on some ontology pairs; among the adapted algorithms, reliableOnt_cos_adp is the most efficient and removes the fewest axioms. We also discuss the experimental results in detail and offer guidelines for choosing among the revision algorithms based on their semantics and performance.
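The four scoring functions themselves are defined in the paper; purely to illustrate the general idea, the sketch below ranks candidate axioms by the cosine similarity between sentence-level verbalizations of the axioms and those of a reference ("reliable") ontology, then resolves each minimal conflict set by removing its lowest-scored axiom. The encoder choice (all-MiniLM-L6-v2), the toy verbalizations, and the removal loop are illustrative assumptions, not the authors' implementation of reliableOnt_cos_adp.

    # Illustrative sketch only: PLM-based axiom scoring plus a simple
    # conflict-resolution loop. Not the paper's actual algorithm.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

    # Toy axiom verbalizations (in practice, generated from OWL axioms).
    reliable_axioms = [
        "Every professor is a faculty member.",
        "Every faculty member is a person.",
    ]
    new_axioms = [
        "Every professor is a person.",
        "No professor is a person.",  # the axiom causing unsatisfiability
    ]

    def score_axioms(axioms, reference):
        """Score each axiom by its best cosine similarity to the reference set."""
        ax_emb = model.encode(axioms, convert_to_tensor=True)
        ref_emb = model.encode(reference, convert_to_tensor=True)
        sims = util.cos_sim(ax_emb, ref_emb)  # shape: |axioms| x |reference|
        return {a: float(row.max()) for a, row in zip(axioms, sims)}

    def revise(conflicts, scores):
        """For each minimal conflict set, drop the lowest-scored axiom."""
        removed = set()
        for conflict in conflicts:
            remaining = [a for a in conflict if a not in removed]
            if remaining:  # conflict not yet resolved by an earlier removal
                removed.add(min(remaining, key=scores.get))
        return removed

    scores = score_axioms(new_axioms, reliable_axioms)
    conflicts = [set(new_axioms)]  # toy: one minimal incoherence-causing set
    print("Removed:", revise(conflicts, scores))

In this toy run, the contradictory axiom receives the lower similarity to the reliable ontology and is removed; the grouped variant described in the abstract would apply such a loop to batches of unsatisfiable concepts rather than one concept at a time.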

Keywords: Ontology revision, Inconsistency handling, Ontology matching, Pre-trained language models, Knowledge reasoning

Suggested Citation

Ji, Qiu and Qi, Guilin and Zhang, Xiaoping and Ye, Yuxin and Li, Jiaye and Wang, Keyu and Sheng, Yang, Ontology Revision Based on Pre-Trained Language Models: Algorithms and Experiments. Available at SSRN: https://ssrn.com/abstract=5260764 or http://dx.doi.org/10.2139/ssrn.5260764

Qiu Ji

Nanjing University of Posts and Telecommunications ( email )

China

Guilin Qi

Southeast University - School of Computer Science and Engineering ( email )

Sipailou 2#
Nanjing, Jiangsu Province 210096
China

Xiaoping Zhang (Contact Author)

China Academy of Chinese Medical Sciences ( email )

Dongcheng District
Beijing 100700
China

Yuxin Ye

Jilin University (JLU) ( email )

China

Jiaye Li

affiliation not provided to SSRN ( email )

Keyu Wang

University of Tübingen ( email )

Wilhelmstr. 19
72074 Tübingen
Germany

Yang Sheng

Nanjing University of Posts and Telecommunications ( email )

China
