Ontology Revision Based on Pre-Trained Language Models: Algorithms and Experiments
41 Pages | Posted: 19 May 2025
Abstract
Ontology revision aims to seamlessly incorporate a new ontology into an existing one and plays a crucial role in ontology evolution, ontology maintenance, and ontology alignment. As in single-ontology repair, addressing logical incoherence in ontology revision is essential, because incoherence can lead to inconsistency, which in turn yields meaningless reasoning results. To tackle this issue, various ontology revision approaches have been proposed, focusing on defining revision operators and designing axiom ranking strategies. However, most existing approaches overlook axiom semantics, which provides critical information for distinguishing axioms. Meanwhile, pre-trained language models (PLMs) have demonstrated strong capabilities in encoding semantic information and have been widely applied in natural language processing and ontology-related tasks. In this paper, we explore how PLMs can be leveraged for ontology revision. We first define four scoring functions that rank axioms based on a PLM, incorporating various kinds of ontology-related information. We then propose an ontology revision algorithm that resolves unsatisfiable concepts holistically. To further enhance efficiency, we introduce an adapted revision algorithm that processes unsatisfiable concepts in groups. We conduct experiments on 19 ontology pairs, comparing our algorithms with existing methods. The results show that the adapted revision algorithm significantly improves efficiency compared with existing methods, reducing processing time by up to 90% for certain ontology pairs. Among the adapted algorithms, reliableOnt_cos_adp is the most efficient and removes the fewest axioms. Additionally, we provide a detailed discussion of the experimental results and offer guidelines for selecting the most suitable revision algorithm based on its semantics and performance.
Keywords: Ontology revision, Inconsistency handling, Ontology matching, Pre-trained language models, Knowledge reasoning