Learning multiple concepts in description logic through three perspectives
Abstract
An ontology formalises a set of related, interdependent concepts in a domain, encapsulated as a terminology. Manually defining such terminologies is a complex, time-consuming and error-prone task, so there is great interest in strategies for learning them automatically. However, most existing approaches induce one concept definition at a time, disregarding the dependencies that may exist among the concepts, and may consequently produce terminologies that are difficult to interpret. Systems capable of learning all the concepts within a single task, while respecting their dependencies, are therefore essential for obtaining concise and readable ontologies. In this paper, we tackle this issue by presenting three terminology learning strategies that aim at finding dependencies among concepts before, during or after the concepts have been defined. Experimental results show the advantages of taking the dependencies among concepts into account to achieve readable and concise terminologies, compared with a system that learns a single concept at a time. Moreover, the three strategies are compared and analysed to highlight the strengths and weaknesses of each.
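To illustrate the kind of dependency at stake, consider a family-domain terminology in standard description logic notation (a generic textbook-style example, not drawn from the reported experiments). The definition of Grandparent naturally reuses Parent, so a dependency-aware learner can induce

    Parent ≡ Person ⊓ ∃hasChild.Person
    Grandparent ≡ Person ⊓ ∃hasChild.Parent

whereas learning each concept in isolation yields the semantically equivalent but longer and harder-to-read expansion

    Grandparent ≡ Person ⊓ ∃hasChild.(Person ⊓ ∃hasChild.Person).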