mOeX bibliography sorted by areas (2020-10-31)
Artificial intelligence/Intelligence artificielle
Jérôme Euzenat, François Schwarzentruber (éds), Actes Conférence Nationale d'Intelligence Artificielle et Rencontres Jeunes Chercheurs en Intelligence Artificielle (CNIA+RJCIA), Nancy (FR), 133p., 2018
Armen Inants, Jérôme Euzenat, So, what exactly is a qualitative calculus?, Artificial intelligence 289:103385, 2020
The paradigm of algebraic constraint-based reasoning, embodied in the notion of a qualitative calculus, is studied within two alternative frameworks. One framework defines a qualitative calculus as "a non-associative relation algebra (NA) with a qualitative representation", the other as "an algebra generated by jointly exhaustive and pairwise disjoint (JEPD) relations". These frameworks provide complementary perspectives: the first is intensional (axiom-based), whereas the second is extensional (based on semantic structures). However, each definition admits calculi that lie beyond the scope of the other. Thus, a qualitatively representable NA may be incomplete or non-atomic, whereas an algebra generated by JEPD relations may have a non-involutive converse and no identity element. This divergence of definitions creates confusion around the notion of a qualitative calculus and makes the "what" question posed by Ligozat and Renz relevant once again. Here we define the relation-type qualitative calculus, unifying the intensional and extensional approaches. By introducing the notions of weak identity, inference completeness and Q-homomorphism, we give equivalent definitions of qualitative calculi both intensionally and extensionally. We show that "algebras generated by JEPD relations" and "qualitatively representable NAs" are embedded into the class of relation-type qualitative algebras.
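As an illustrative sketch (not taken from the paper), the point algebra over temporal points is a standard example of a qualitative calculus generated by JEPD base relations; its composition and converse operations can be encoded directly from tables on the base relations:

```python
# Illustrative sketch: the point algebra with base relations {<, =, >}.
# Disjunctive relations are modelled as frozensets of base relations;
# composition and converse are lifted from the tables below.

COMP = {  # composition table on base relations
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}
CONV = {'<': '>', '=': '=', '>': '<'}

def compose(r, s):
    """Composition of two disjunctive relations, lifted from the base table."""
    return frozenset().union(*(COMP[(a, b)] for a in r for b in s))

def converse(r):
    return frozenset(CONV[a] for a in r)

# JEPD: any two points stand in exactly one of <, =, >, so the universal
# relation is the disjunction of all three base relations.
full = frozenset({'<', '=', '>'})
```

In this calculus the converse is involutive and `=` acts as identity, which is exactly what, per the abstract, an algebra generated by JEPD relations is not guaranteed to have in general.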
Cultural knowledge evolution/Évolution culturelle de la connaissance
Jérôme Euzenat, Replicator-interactor in experimental cultural knowledge evolution, in: Proc. 2nd JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Graz (AT), 2019
Cultural evolution may be studied at a `macro' level, inspired from population dynamics, or at a `micro' level, inspired from genetics. The replicator-interactor model generalises the genotype-phenotype distinction of genetic evolution. Here, we consider how it can be applied to cultural knowledge evolution experiments. In particular, we consider knowledge as replicator and the behaviour it induces as interactor. We show that this requires addressing problems concerning transmission. We discuss the introduction of horizontal transmission within the replicator-interactor model and/or differential reproduction within cultural evolution experiments.
Data interlinking/Liage de données
Marie-Christine Rousset, Manuel Atencia, Jérôme David, Fabrice Jouanot, Olivier Palombi, Federico Ulliana, Datalog revisited for reasoning in linked data, in: Giovambattista Ianni, Domenico Lembo, Leopoldo Bertossi, Wolfgang Faber, Birte Glimm, Georg Gottlob, Steffen Staab (eds), Proc. 13th International summer school on reasoning web (RW), Lecture notes in computer science 10370, 2017, pp121-166
Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties on those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, as well as domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modelling Linked Data applications and for developing inference algorithms. In particular, we show how it allows modelling the problem of data linkage in Linked Data as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
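As a hedged illustration of the kind of rule discussed in the survey, the sketch below applies a Datalog-style transitivity rule to a set of RDF-like triples by naive forward chaining; the property name and data are invented for the example:

```python
# Minimal sketch of Datalog-style forward chaining over RDF-like triples.
# The rule encodes property transitivity: (x p y), (y p z) -> (x p z).
# The `partOf` data below is made up for illustration.

def transitive_closure(triples, prop):
    """Saturate `triples` with the transitivity rule for `prop`."""
    facts = set(triples)
    while True:
        derived = {(x, prop, z)
                   for (x, p, y) in facts if p == prop
                   for (y2, p2, z) in facts if p2 == prop and y2 == y}
        if derived <= facts:          # fixpoint reached
            return facts
        facts |= derived

triples = {('finger', 'partOf', 'hand'),
           ('hand', 'partOf', 'arm'),
           ('arm', 'partOf', 'body')}
closed = transitive_closure(triples, 'partOf')
# the closure now also contains ('finger', 'partOf', 'body')
```

Real deductive triplestores evaluate such rules far more efficiently (e.g. semi-naive evaluation), but the fixpoint behaviour is the same.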
Evaluation/Évaluation
Jérôme David, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, Evaluation of query transformations without data, in: Proc. WWW workshop on Reasoning on Data (RoD), Lyon (FR), pp1599-1602, 2018
Query transformations are ubiquitous in semantic web query processing. In any situation in which transformations are not proved correct by construction, the quality of these transformations has to be evaluated. Usual evaluation measures are either overly syntactic and not very informative (the result being only: correct or incorrect) or dependent on the evaluation sources. Moreover, the two approaches do not necessarily yield the same result. We suggest that grounding the evaluation on query containment allows for a data-independent evaluation that is more informative than the usual syntactic evaluation. In addition, such evaluation modalities may take into account ontologies, alignments or different query languages whenever they are relevant to query evaluation.
Multi-agent systems/Systèmes multi-agents
Jérôme Euzenat, Interaction-based ontology alignment repair with expansion and relaxation, in: Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), pp185-191, 2017
Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassifying objects from one ontology to the other. These alignments may not be perfectly correct, yet agents have to proceed. They can take advantage of their experience in order to evolve alignments: upon communication failure, they will adapt the alignments to avoid reproducing the same mistake. Such repair experiments had been performed in the framework of networks of ontologies related by alignments. They revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies. Here we repeat these experiments and, using new measures, show that previous results were underestimated. We introduce new adaptation operators that improve those previously considered. We also allow agents to go beyond the initial operators in two ways: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies the following properties: (1) agents still converge to a state in which no mistake occurs; (2) they achieve results far closer to the correct alignments than previously found; (3) they again reach 100% precision and coherent alignments.
Jérôme Euzenat, Crafting ontology alignments from scratch through agent communication, in: Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Nice (FR), (Bo An, Ana Bazzan, João Leite, Serena Villata, Leendert van der Torre (eds), Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Lecture notes in computer science 10621, 2017), pp245-262, 2017
Agents may use different ontologies for representing knowledge and take advantage of alignments between ontologies in order to communicate. Such alignments may be provided by dedicated algorithms, but their accuracy is far from satisfying. We have already explored operators allowing agents to repair such alignments while using them for communicating. The question remained whether agents are able to craft alignments from scratch in the same way. Here we explore the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them, as they have nothing to repair. Hence, we introduce the capability for agents to risk adding new correspondences when no existing one is useful. We compare and discuss the results provided by this modality and show that, due to this generative capability, agents reach better results than without it in terms of the accuracy of their alignments. When starting with empty alignments, alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication.
Jérôme Euzenat, Knowledge diversity under socio-environmental pressure, in: Michael Rovatsos (ed), Investigating diversity in AI: the ESSENCE project, 2013-2017, Deliverable, ESSENCE, 62p., 2017, pp28-30
Experimental cultural evolution has been convincingly applied to the evolution of natural language and we aim at applying it to knowledge. Indeed, knowledge can be thought of as a shared artefact among a population influenced through communication with others. It can be seen as resulting from contradictory forces: internal consistency, i.e., pressure exerted by logical constraints, against environmental and social pressure, i.e., the pressure exerted by the world and the society agents live in. However, adapting to environmental and social pressure may lead agents to adopt the same knowledge. From an ecological perspective, this is not particularly appealing: species can resist changes in their environment because of the diversity of the solutions that they can offer. This problem may be approached by involving diversity as an internal constraint resisting external pressure towards uniformity.
Jérôme Euzenat, De la langue à la connaissance: approche expérimentale de l'évolution culturelle, Bulletin de l'AFIA 100:9-12, 2018
Line van den Berg, Epistemic alignment repair, in: Proc. 31st ESSLLI student session, Riga (LV), 2019
Ontology alignments enable interoperability between heterogeneous information resources. The Alignment Repair Game (ARG) specifically provides a way for agents to simultaneously communicate and improve the alignment when a communication failure occurs. This is achieved by applying adaptation operators that provide a revision strategy for agents to resolve failures with minimum information loss. In this paper, we explore how closely these operators resemble logical dynamics. We develop a variant of Dynamic Epistemic Logic called DEOL to capture the dynamics of ARG, modeling ontologies as knowledge and alignments as belief with respect to the plausibility relation. The dynamics of ARG are then achieved through announcements and conservative upgrades. With the representation of ARG in DEOL, we formally establish the limitations and the redundancy of the adaptation operators: for a complete logical reasoner, replace, addjoin and refine are redundant for one or both agents in the game, and add would be replaced by addjoin in all cases.
Jérôme Euzenat, A map without a legend: the semantic web and knowledge evolution, Semantic web journal 11(1):63-68, 2020
The current state of the semantic web is focused on data. This is worthwhile progress in web content processing and interoperability. However, it contributes only marginally to knowledge improvement and evolution. Understanding the world, and interpreting data, requires knowledge. Not knowledge cast in stone forever, but knowledge that can seamlessly evolve; not knowledge from one single authority, but diverse knowledge sources which stimulate confrontation and robustness; not consistent knowledge at web scale, but local theories that can be combined. We discuss two different ways in which semantic web technologies can greatly contribute to the advancement of knowledge: semantic eScience and cultural knowledge evolution.
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Agent ontology alignment repair through dynamic epistemic logic, in: Bo An, Neil Yorke-Smith, Amal El Fallah Seghrouchni, Gita Sukthankar (eds), Proc. 19th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Auckland (NZ), pp1422-1430, 2020
Ontology alignments enable agents to communicate while preserving heterogeneity in their information. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. In the Alignment Repair Game (ARG) this evolution is achieved via adaptation operators. ARG was evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant is still an open question. In this paper, we introduce a formal framework based on Dynamic Epistemic Logic that allows us to answer this question. This framework allows us (1) to express the ontologies and alignments used, (2) to model the ARG adaptation operators through announcements and conservative upgrades and (3) to formally establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Unawareness in multi-agent systems with partial valuations, in: Proc. 10th AAMAS workshop on Logical Aspects of Multi-Agent Systems (LAMAS), Auckland (NZ), 2020
Public signature awareness is satisfied if agents are aware of the vocabulary, i.e. the propositions, used by other agents to think and talk about the world. However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, from the environment or learned through agent communication. This is therefore not realistic for open multi-agent systems. We propose a novel way to model awareness with partial valuations that drops public signature awareness and can model agent signature unawareness, and we give a first view on defining the dynamics of raising and forgetting awareness in this framework.
Line van den Berg, Malvin Gattinger, Dealing with unreliable agents in dynamic gossip, in: Proc. 3rd International workshop on dynamic logic (DaLi), (Manuel Martins, Igor Sedlár (eds), Proc. 3rd International workshop on dynamic logic (DaLi), Lecture notes in computer science 12569, 2020), 2020
Gossip describes the spread of information throughout a network of agents. It investigates how agents, each starting with a unique secret, can efficiently make peer-to-peer calls so that ultimately everyone knows all secrets. In Dynamic Gossip, agents share phone numbers in addition to secrets, which allows the network to grow at run-time. Most gossip protocols assume that all agents are reliable, but this is not given for many practical applications. We drop this assumption and study Dynamic Gossip with unreliable agents. The aim is then for agents to learn all secrets of the reliable agents and to identify the unreliable agents. We show that with unreliable agents classic results on Dynamic Gossip no longer hold. Specifically, the Learn New Secrets protocol is no longer characterised by the same class of graphs, so-called sun graphs. In addition, we show that unreliable agents that do not initiate communication are harder to identify than agents that do. This has paradoxical consequences for measures against unreliability, for example to combat the spread of fake news in social networks.
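The Learn New Secrets (LNS) protocol mentioned above can be illustrated with a small simulation; this is a simplified sketch of dynamic gossip (in a call, both parties exchange all numbers and secrets they know), not the authors' formal model:

```python
import random

def lns_gossip(numbers, seed=0):
    """Simulate the Learn New Secrets protocol on a dynamic gossip graph.

    `numbers[a]` is the set of agents whose phone number agent a knows
    initially; every agent knows its own secret and number. Returns the
    final (possibly partial, if the run gets stuck) secret distribution.
    """
    rng = random.Random(seed)
    agents = list(numbers)
    phone = {a: set(ns) | {a} for a, ns in numbers.items()}
    secret = {a: {a} for a in agents}
    while any(secret[a] != set(agents) for a in agents):
        # LNS: a may call b iff a knows b's number but not yet b's secret
        calls = [(a, b) for a in agents for b in phone[a]
                 if b != a and b not in secret[a]]
        if not calls:
            return secret          # stuck: no permitted call remains
        a, b = rng.choice(calls)
        phone[a] = phone[b] = phone[a] | phone[b]      # share numbers
        secret[a] = secret[b] = secret[a] | secret[b]  # share secrets
    return secret
```

On a complete initial graph every run succeeds (any agent missing a secret can always place a call); on sparser graphs LNS may get stuck depending on the call order, which is the kind of behaviour the graph-theoretic characterisations classify.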
Ontology matching/Alignement d'ontologies
Manel Achichi, Michelle Cheatham, Zlatan Dragisic, Jérôme Euzenat, Daniel Faria, Alfio Ferrara, Giorgos Flouris, Irini Fundulaki, Ian Harrow, Valentina Ivanova, Ernesto Jiménez-Ruiz, Kristian Kolthoff, Elena Kuss, Patrick Lambrix, Henrik Leopold, Huanyu Li, Christian Meilicke, Majid Mohammadi, Stefano Montanelli, Catia Pesquita, Tzanina Saveta, Pavel Shvaiko, Andrea Splendiani, Heiner Stuckenschmidt, Élodie Thiéblin, Konstantin Todorov, Cássia Trojahn dos Santos, Ondřej Zamazal, Results of the Ontology Alignment Evaluation Initiative 2017, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), pp61-113, 2017
Ontology matching consists of finding correspondences between semantically related entities of different ontologies. The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity (from simple thesauri to expressive OWL ontologies) and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2017 campaign offered 9 tracks with 23 test cases, and was attended by 21 participants. This paper is an overall presentation of that campaign.
Semantic web/Web sémantique
Michelle Cheatham, Isabel Cruz, Jérôme Euzenat, Catia Pesquita (eds), Special issue on ontology and linked data matching, Semantic web journal (special issue) 8(2):183-251, 2017
Michelle Cheatham, Isabel Cruz, Jérôme Euzenat, Catia Pesquita, Special issue on ontology and linked data matching, Semantic web journal 8(2):183-184, 2017
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), 225p., 2017
Jomar da Silva, Fernanda Araujo Baião, Kate Revoredo, Jérôme Euzenat, Semantic interactive ontology matching: synergistic combination of techniques to improve the set of candidate correspondences, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), pp13-24, 2017
Ontology Matching is the task of finding a set of entity correspondences between a pair of ontologies, i.e. an alignment. It has been receiving a lot of attention due to its broad applications. Many techniques have been proposed, including ones applying interactive strategies. An interactive ontology matching strategy uses expert knowledge to improve the quality of the final alignment. When these strategies rely on expert feedback to validate correspondences, it is important to establish criteria for selecting the set of correspondences to be shown to the expert. A poor definition of this set can prevent the algorithm from finding the right alignment or can delay convergence. In this work we present techniques which, when used simultaneously, improve the set of candidate correspondences. These techniques are incorporated in an interactive ontology matching approach, called ALINSyn. Experiments successfully show the potential of our proposal.
Jérémy Vizzini, Data interlinking with relational concept analysis, Master's thesis, Université Grenoble Alpes, Grenoble (FR), 2017
Vast amounts of RDF data are made available on the web by various institutions providing overlapping information. To be fully exploited, different representations of the same object across various data sets have to be identified. This is what is called data interlinking. One novel way to generate such links is to use link keys. Link keys generalise database keys by applying them across two data sets. The structure of RDF makes this problem much more complex than for relational databases, for several reasons: an instance can have multiple values for a given attribute, and values of properties are not necessarily datatypes but may be instances of the graph. A first method has been designed to extract and select link keys from two classes of objects; it deals with multiple values but not with object values. Moreover, the extraction step has been rephrased in formal concept analysis (FCA), making it possible to generate link keys across relational tables. Our aim is to extend this work so that it can deal with multiple values. We then show how to use it to deal with object values when the data set is cycle-free. This encoding does not necessarily generate the optimal link keys. Hence, we use relational concept analysis (RCA), an extension of FCA taking relations between concepts into account. We show that a new expression of this problem is able to extract the optimal link keys even in the presence of circularities. Moreover, the elaborated process does not require information about the alignments of the ontologies to find out for which pairs of classes to extract link keys. We implemented these methods and evaluated them by reproducing the experiments made in previous studies. This shows that the method extracts the expected results, as well as the (also expected) scalability issues.
Kemo Adrian, Jérôme Euzenat, Dagmar Gromann (eds), Proc. 1st JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Bozen-Bolzano (IT), 42p., 2018
Jérôme David, Jérôme Euzenat, Jérémy Vizzini, Linkky: Extraction de clés de liage par une adaptation de l'analyse relationnelle de concepts, in: Actes 29e journées francophones d'Ingénierie des connaissances (IC), Nancy (FR), pp271-274, 2018
Pieter Pauwels, María Poveda Villalón, Alvaro Sicilia, Jérôme Euzenat, Semantic technologies and interoperability in the built environment, Semantic web journal 9(6):731-734, 2018
The built environment consists of many physical assets with which we interact on a daily basis. In order to improve not only our built environment, but also our interaction with that environment, we would benefit greatly from semantic representations of this environment. This includes not only buildings, but also large infrastructure (bridges, tunnels, waterways, underground systems) and geospatial data. With this special issue, an insight is given into the current state of the art in semantic technologies and interoperability in the built environment. This editorial not only summarizes the content of the Special Issue on Semantic Technologies and Interoperability in the Built Environment, it also provides a brief overview of the current state of the art in standardisation and community efforts.
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 13th ISWC workshop on ontology matching (OM), Monterey (CA US), 227p., 2018
Alvaro Sicilia, Pieter Pauwels, Leandro Madrazo, María Poveda Villalón, Jérôme Euzenat (eds), Special Issue on Semantic Technologies and Interoperability in the Built Environment, Semantic web journal (special issue) 9(6):729-855, 2018
Jomar da Silva, Kate Revoredo, Fernanda Araujo Baião, Jérôme Euzenat, Interactive ontology matching: using expert feedback to select attribute mappings, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 13th ISWC workshop on ontology matching (OM), Monterey (CA US), pp25-36, 2018
Interactive Ontology Matching considers the participation of domain experts during the matching process of two ontologies. An important step of this process is the selection of mappings to submit to the expert. These mappings can be between concepts, attributes or relationships of the ontologies. Existing approaches define the set of mapping suggestions only at the beginning of the process, before expert involvement. In previous work, we proposed an approach to refine the set of mapping suggestions after each expert feedback, benefiting from the expert feedback to form a set of mapping suggestions of better quality. In that approach, only concept mappings were considered during the refinement. In this paper, we present a new approach to evaluate the benefit of also considering attribute mappings during the interactive phase of the process. The approach was evaluated using the OAEI conference data set and showed an increase in recall without sacrificing precision. The approach was compared with the state of the art, showing that it generates alignments of state-of-the-art quality.
Nacira Abbas, Jérôme David, Amedeo Napoli, Linkex: A tool for link key discovery based on pattern structures, in: Proc. ICFCA workshop on Applications and tools of formal concept analysis, Frankfurt (DE), pp33-38, 2019
Links constitute the core of the Linked Data philosophy. With the high growth of data published on the web, many frameworks have been proposed to deal with the link discovery problem, particularly for identity links. Finding such links between different RDF data sets is a critical task. In this position paper, we focus on link keys, which consist of sets of pairs of properties identifying the same entities across heterogeneous datasets. We also propose to formalize the problem of link key discovery using Pattern Structures (PS), the generalization of Formal Concept Analysis dealing with non-binary datasets. After providing the proper definitions of link keys and setting the problem in terms of PS, we show that the intents of the pattern concepts correspond to link keys and their extents to the sets of identity links generated by their intents. Finally, we discuss an implementation of this framework and show the applicability and scalability of the proposed method.
Kemo Adrian, Jérôme Euzenat, Dagmar Gromann, Ernesto Jiménez-Ruiz, Marco Schorlemmer, Valentina Tamma (eds), Proc. 2nd JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Graz (AT), 48p., 2019
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, A guided walk into link key candidate extraction with relational concept analysis, in: Claudia d'Amato, Lalana Kagal (eds), Proc. on journal track of the International semantic web conference, Auckland (NZ), 2019
Data interlinking is an important task for linked data interoperability. One of the possible techniques for finding links is the use of link keys, which generalise relational keys to pairs of RDF models. We show how link key candidates may be directly extracted from RDF data sets by encoding the extraction problem in relational concept analysis. This method deals with non-functional properties and circularly dependent link key expressions. As such, it generalises those presented for non-dependent link keys and link keys over the relational model. The proposed method is able to return link key candidates involving several classes at once.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Several link keys are better than one, or extracting disjunctions of link key candidates, in: Proc. 10th ACM international conference on knowledge capture (K-Cap), Marina del Rey (CA US), pp61-68, 2019
Link keys express conditions under which instances of two classes of different RDF data sets may be considered as equal. As such, they can be used for data interlinking. There exist algorithms to extract link key candidates from RDF data sets and different measures have been defined to evaluate the quality of link key candidates individually. For certain data sets, however, it may be necessary to use more than one link key on a pair of classes to retrieve a more complete set of links. To this end, in this paper, we define disjunction of link keys, propose strategies to extract disjunctions of link key candidates from RDF data, and apply existing quality measures to evaluate them. We also report on experiments with these strategies.
Manuel Atencia, Jérôme Euzenat, Khadija Jradeh, Chan Le Duc, Tableau methods for reasoning with link keys, Deliverable 2.1, ELKER, 32p., 2019
Data interlinking is a critical task for widening and enhancing linked open data. One way to tackle data interlinking is to use link keys, which generalise keys to the case of two RDF datasets described using different ontologies. Link keys specify pairs of properties to compare for finding same-as links between instances of two classes of two different datasets. Hence, they can be used for finding links. Link keys can also be considered as logical axioms just like keys, ontologies and ontology alignments. We introduce the logic ALC+LK extending the description logic ALC with link keys. It may be used to reason and infer entailed link keys that may be more useful for a particular data interlinking task. We show that link key entailment can be reduced to consistency checking without introducing the negation of link keys. For deciding the consistency of an ALC+LK ontology, we introduce a new tableau-based algorithm. Contrary to the classical ones, the completion rules concerning link keys apply to pairs of individuals not directly related. We show that this algorithm is sound, complete and always terminates.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Candidate link key extraction with formal concept analysis, Deliverable 1.1, ELKER, 29p., October 2019
A link key extraction procedure using formal concept analysis is described. It is shown to extract all link key candidates.
Nacira Abbas, Jérôme David, Amedeo Napoli, Discovery of link keys in RDF data based on pattern structures: preliminary steps, in: Francisco José Valverde-Albacete, Martin Trnecka (eds), Proc. 15th International conference on Concept Lattices and their Applications (CLA), Tallinn (EE), pp235-246, 2020
In this paper, we are interested in the discovery of link keys between two different RDF datasets based on FCA and pattern structures. A link key identifies individuals which represent the same real-world entity. Two main strategies are used to automatically discover link keys, ignoring or not the classes to which the individuals belong. Indeed, a link key may be relevant for some pair of classes and not relevant for another. Discovering link keys for one pair of classes at a time may thus be computationally expensive if every pair has to be considered. To overcome such limitations, we introduce a specific and original pattern structure in which link keys can be discovered in one pass while specifying the pair of classes associated with each link key, focusing the discovery process and allowing more flexibility.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Liliana Ibanescu, Nathalie Pernelle, Fatiha Saïs, Élodie Thiéblin, Cássia Trojahn dos Santos, Discovering expressive rules for complex ontology matching and data interlinking, in: Pavel Shvaiko, Jérôme Euzenat, Oktie Hassanzadeh, Ernesto Jiménez-Ruiz, Cássia Trojahn dos Santos (eds), Proc. 14th ISWC workshop on ontology matching (OM), Auckland (NZ), pp199-200, 2020
Ontology matching and data interlinking are distinct tasks that aim at facilitating interoperability between different knowledge bases. Although the field has matured in recent years, most works still focus on generating simple correspondences between entities. These correspondences are however insufficient to fully cover the different types of heterogeneity between knowledge bases, and complex correspondences are therefore required. Compared to simple matching, few approaches for complex matching have been proposed, focusing on correspondence patterns or exploiting common instances between the ontologies. Similarly, unsupervised data interlinking approaches (which do not require labelled data samples) have recently been developed. One approach consists in discovering linking rules such as simple keys or conditional keys on unlabelled data. The results have shown that the more expressive the rules, the higher the recall. Even more expressive rules (referential expressions, graph keys, etc.) would be required; however, naive approaches to the discovery of these rules cannot be envisaged on large data sets. Existing approaches presuppose either that the data conform to the same ontology or that all possible pairs of properties be examined. Complementarily, link keys are sets of pairs of properties that identify the instances of two classes of two RDF datasets. Such link keys may be extracted directly, without the need for an alignment. We introduce here an approach that aims at evaluating the impact of complex correspondences on the task of data interlinking established from the application of keys.
Manuel Atencia
,
Jérôme David
,
Jérôme Euzenat
,
Amedeo Napoli
,
Jérémy Vizzini
,
Link key candidate extraction with relational concept analysis
,
Discrete applied mathematics
273:2-20, 2020
Linked data aims at publishing data expressed in RDF (Resource Description Framework) at the scale of the worldwide web. These datasets interoperate by publishing links which identify individuals across heterogeneous datasets. Such links may be found by using a generalisation of keys in databases, called link keys, which apply across datasets. They specify the pairs of properties to compare for linking individuals belonging to different classes of the datasets. Here, we show how to recast the proposed link key extraction techniques for RDF datasets in the framework of formal concept analysis. We define a formal context, where objects are pairs of resources and attributes are pairs of properties, and show that formal concepts correspond to link key candidates. We extend this characterisation to the full RDF model including non functional properties and interdependent link keys. We show how to use relational concept analysis for dealing with cyclic dependencies across classes and hence link keys. Finally, we discuss an implementation of this framework.
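As a rough illustration of the construction described in this abstract (toy data, functional-style value sets, and brute-force concept enumeration instead of the paper's relational concept analysis machinery), the following sketch builds the formal context whose objects are pairs of resources and whose attributes are pairs of properties, then enumerates its formal concepts as link key candidates:

```python
from itertools import combinations

# Hypothetical datasets: values are sets, so a pair of resources has
# attribute (p, q) when they share at least one value through p and q.
ds1 = {"a1": {"name": {"Ada"}, "mail": {"ada@x.org"}},
       "a2": {"name": {"Joe"}, "mail": {"joe@x.org"}}}
ds2 = {"b1": {"nom": {"Ada"}, "email": {"ada@x.org"}},
       "b2": {"nom": {"Joe"}, "email": {"zoe@x.org"}}}

props1, props2 = ["name", "mail"], ["nom", "email"]

def context():
    """Formal context: pair of resources -> set of shared property pairs."""
    ctx = {}
    for r1, v1 in ds1.items():
        for r2, v2 in ds2.items():
            ctx[(r1, r2)] = {(p, q) for p in props1 for q in props2
                             if v1[p] & v2[q]}  # common value through (p, q)
    return ctx

def link_key_candidates(ctx):
    """Enumerate formal concepts (extent, intent) with non-empty intent:
    each intent is a link key candidate, its extent the links it generates."""
    objects = list(ctx)
    concepts = set()
    for k in range(1, len(objects) + 1):
        for subset in combinations(objects, k):
            intent = set.intersection(*(ctx[o] for o in subset))
            if not intent:
                continue
            extent = frozenset(o for o in objects if intent <= ctx[o])
            concepts.add((extent, frozenset(intent)))
    return concepts

for extent, intent in sorted(link_key_candidates(context()),
                             key=lambda c: len(c[1])):
    print(sorted(intent), "->", sorted(extent))
```

On this toy context, the candidate {(name, nom)} links both resource pairs, while the larger candidate {(name, nom), (mail, email)} links only the pair that also shares an e-mail address; a real implementation would use an efficient FCA algorithm rather than enumerating all subsets.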
Jérôme Euzenat
,
Marie-Christine Rousset
,
Semantic web
,
in:
Pierre Marquis
,
Odile Papini
,
Henri Prade
(eds), A Guided tour of artificial intelligence research, Springer, Berlin (DE), 575p., 2020, pp181-207
The semantic web aims at making web content interpretable. It is no less than offering knowledge representation at web scale. The main ingredients used in this context are the representation of assertional knowledge through graphs, the definition of the vocabularies used in graphs through ontologies, and the connection of these representations through the web. Artificial intelligence techniques and, more specifically, knowledge representation techniques, are put to use and to the test by the semantic web. Indeed, they have to face typical problems of the web: scale, heterogeneity, incompleteness, and dynamics. This chapter provides a short presentation of the state of the semantic web and refers to other chapters concerning those techniques at work in the semantic web.
Pavel Shvaiko
,
Jérôme Euzenat
,
Ernesto Jiménez-Ruiz
,
Oktie Hassanzadeh
,
Cássia Trojahn dos Santos
(eds),
Proc. 14th ISWC workshop on ontology matching (OM)
,
Auckland (NZ), 210p., 2020
Pavel Shvaiko
,
Jérôme Euzenat
,
Ernesto Jiménez-Ruiz
,
Oktie Hassanzadeh
,
Cássia Trojahn dos Santos
(eds),
Proc. 15th ISWC workshop on ontology matching (OM)
,
253p., 2020
Jomar da Silva
,
Kate Revoredo
,
Fernanda Araujo Baião
,
Jérôme Euzenat
,
Alin: improving interactive ontology matching by interactively revising mapping suggestions
,
Knowledge engineering review
35:e1, 2020
Ontology matching aims at discovering mappings between the entities of two ontologies. It plays an important role in the integration of heterogeneous data sources that are described by ontologies. Interactive ontology matching involves domain experts in the matching process. In some approaches, the expert provides feedback about mappings between ontology entities, i.e., these approaches select mappings to present to the expert, who replies which of them should be accepted or rejected, thus taking advantage of the knowledge of domain experts towards finding an alignment. In this paper, we present Alin, an interactive ontology matching approach which uses expert feedback not only to approve or reject selected mappings, but also to dynamically improve the set of selected mappings, i.e., to interactively include mappings in it and exclude mappings from it. This additional use of expert answers aims at increasing the benefit brought by each answer. For this purpose, Alin uses four techniques. Two of them were used in previous versions of Alin to dynamically select concept and attribute mappings. Two new techniques are introduced in this paper: one to dynamically select relationship mappings and another to dynamically reject inconsistent selected mappings using anti-patterns. We compared Alin with state-of-the-art tools, showing that it generates alignments of comparable quality.
Manuel Atencia
,
Jérôme David
,
Jérôme Euzenat
,
On the relation between keys and link keys for data interlinking
,
Semantic web journal
, 2021
Both keys and their generalisation, link keys, may be used to perform data interlinking, i.e. finding identical resources in different RDF datasets. However, the precise relationship between keys and link keys has not been fully determined yet. A common formal framework encompassing both keys and link keys is necessary to ensure the correctness of data interlinking tools based on them, and to determine their scope and possible overlap. In this paper, we provide a semantics for keys and link keys within description logics. We determine under which conditions they can legitimately be used to generate links. We provide conditions under which link keys are logically equivalent to keys. In particular, we show that data interlinking with keys and ontology alignments can be reduced to data interlinking with link keys, but not the other way around.
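The reduction stated in the last sentence can be illustrated on toy data (all datasets and property names are hypothetical): given a key on each dataset and an alignment equating the two key properties, interlinking amounts to applying the corresponding link key directly.

```python
# Hypothetical datasets: isbn is a key for ds1, ean is a key for ds2,
# and an alignment states isbn = ean.
ds1 = {"x1": {"isbn": "978-0"}, "x2": {"isbn": "978-1"}}
ds2 = {"y1": {"ean": "978-0"},  "y2": {"ean": "978-2"}}

def is_key(ds, prop):
    """prop is a key for ds when no two resources share a value for it."""
    values = [v[prop] for v in ds.values()]
    return len(values) == len(set(values))

def links_from_link_key(ds1, ds2, p, q):
    """The link key {(p, q)} generates a link whenever values coincide."""
    return {(r1, r2) for r1, v1 in ds1.items()
                     for r2, v2 in ds2.items() if v1[p] == v2[q]}

# Key + alignment on both sides reduces to the link key {(isbn, ean)}:
assert is_key(ds1, "isbn") and is_key(ds2, "ean")
print(links_from_link_key(ds1, ds2, "isbn", "ean"))  # {('x1', 'y1')}
```

The converse reduction is not possible in general: a link key may identify resources across the two datasets even when neither property is a key within its own dataset.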
Semantic web queries/Interrogation du web sémantique
Melisachew Wudage Chekol
,
Jérôme Euzenat
,
Pierre Genevès
,
Nabil Layaïda
,
SPARQL Query containment under schema
,
Journal on data semantics
7(3):133-154, 2018
Query containment is defined as the problem of determining if the result of a query is included in the result of another query for any dataset. It has major applications in query optimization and knowledge base verification. The main objective of this work is to provide sound and complete procedures to determine containment of SPARQL queries under expressive description logic schema axioms. Beyond that, these procedures are experimentally evaluated. To date, testing query containment has been performed using different techniques: containment mapping, canonical databases, automata theory techniques and through a reduction to the validity problem in logic. In this work, we use the latter technique to test containment of SPARQL queries using an expressive modal logic called mu-calculus. For that purpose, we define an RDF graph encoding as a transition system which preserves its characteristics. In addition, queries and schema axioms are encoded as mu-calculus formulae. Thereby, query containment can be reduced to testing validity in the logic. We identify various fragments of SPARQL and description logic schema languages for which containment is decidable. Additionally, we provide theoretically and experimentally proven procedures to check containment of these decidable fragments. Finally, we propose a benchmark for containment solvers which is used to test and compare the current state-of-the-art containment solvers.
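The paper decides containment by reduction to mu-calculus validity; for intuition, the following sketch implements the classical containment-mapping test mentioned in the abstract, restricted to basic graph patterns without schema axioms. The query representation is invented for illustration, and the sketch assumes both queries use the same head variable names: q1 is contained in q2 iff some homomorphism maps q2's triple patterns into q1's, fixing constants and the projected variables.

```python
from itertools import product

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def contained_in(q1, q2):
    """q = (projected_vars, [triple patterns]); test q1 contained in q2
    by brute-force search for a containment mapping from q2 into q1."""
    (head1, body1), (head2, body2) = q1, q2
    vars2 = sorted({t for tp in body2 for t in tp if is_var(t)})
    targets = sorted({t for tp in body1 for t in tp})
    patterns1 = set(body1)
    for image in product(targets, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        subst = lambda t: h.get(t, t)  # constants map to themselves
        if all(tuple(map(subst, tp)) in patterns1 for tp in body2) \
           and all(h.get(v, v) == v for v in head2):  # heads preserved
            return True
    return False

# ?x a :Employee . ?x :worksFor ?y   is contained in   ?x a :Employee
q1 = (["?x"], [("?x", "a", ":Employee"), ("?x", ":worksFor", "?y")])
q2 = (["?x"], [("?x", "a", ":Employee")])
print(contained_in(q1, q2))  # True
print(contained_in(q2, q1))  # False
```

This brute-force test is already NP-hard for plain conjunctive queries; handling description logic schema axioms, as the paper does, requires the logical encoding rather than this direct search.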