MOSIG Master 2ND YEAR Research
YEAR 2022–2023

Knowledge graph evolution

Master topic / Sujet de master recherche

Knowledge graphs are very useful for various purposes, e.g. search, disambiguation, machine learning. But knowledge is not immutable and has to be updated continuously. The goal of this internship is to study how to make knowledge graphs evolve by using experimental cultural evolution techniques.

Knowledge graphs (or knowledge bases) represent facts on which computers can reason thanks to inference engines. Knowledge graphs are widespread on the web, represented with semantic web languages such as RDF and OWL. Examples of such graphs are the Google Knowledge Graph, DBpedia and Wikidata. Knowledge graphs mainly formalize human knowledge that is useful for various purposes, e.g. search, disambiguation, machine learning. They allow, for instance, improving the accuracy of predictions and also facilitate the interpretation of models [Bordes et al., 2011]. A lot of human effort has been dedicated to building knowledge graphs. But knowledge is not immutable and all this knowledge has to evolve.

Our ambition is to understand and develop general mechanisms by which a society evolves its knowledge. For that purpose, we adapted experimental cultural evolution to the evolution of the way agents represent knowledge [Euzenat, 2014, 2017; Anslow & Rovatsos, 2015; Chocron & Schorlemmer, 2016]. We showed that cultural repair is able to converge towards successful communication and improves the objective correctness of alignments.

The goal of this internship is to study this kind of evolution in the case of agents exchanging RDF data. More precisely, we will focus on the exchange of RDF instance descriptions between two agents. Each agent has its own ontology, data and alignments between its ontology and those of the other agents. Ontologies are private in the sense that they are not fully disclosed to others.

We will experiment with an interaction game that works as follows: a source agent provides an instance description to a target agent. The target agent translates this RDF description into its own representation using its alignments. It classifies this instance in its ontology, enriches the description (with everything it can deduce from its ontology) and then gives it back to the source agent. The source agent translates this description using its alignments and compares the result with the original description.
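
As an illustration, here is a minimal sketch of one round of this game in Python. All names and data structures (triples as plain tuples, alignments as entity-to-entity dictionaries, a toy subclass map standing in for the ontology) are assumptions made for the example; this is neither the Lazy lavender implementation nor a full RDF/OWL treatment.

  # Minimal sketch (hypothetical structures): triples are (subject, property, object)
  # tuples and an alignment maps the other agent's entities to the agent's own.
  from dataclasses import dataclass, field

  @dataclass
  class Agent:
      name: str
      subclass_of: dict = field(default_factory=dict)  # toy ontology: class -> superclass
      alignment: dict = field(default_factory=dict)    # other agent's entity -> own entity

      def translate(self, triples):
          """Translate a description with the alignment; unknown entities are kept as-is."""
          return {tuple(self.alignment.get(t, t) for t in triple) for triple in triples}

      def enrich(self, triples):
          """Add what the toy ontology allows to deduce: here, only supertypes."""
          enriched = set(triples)
          for s, p, o in triples:
              if p == "rdf:type" and o in self.subclass_of:
                  enriched.add((s, "rdf:type", self.subclass_of[o]))
          return enriched

  def play_game(source, target, description):
      """One round: the target translates, classifies/enriches and returns the
      description; the source translates it back and compares with the original."""
      returned = target.enrich(target.translate(description))
      back = source.translate(returned)
      return {"lost": set(description) - back, "gained": back - set(description)}

  # Example: the target knows that t:Chat is a subclass of t:Animal.
  source = Agent("A", alignment={"t:Chat": "s:Cat", "t:Animal": "s:Animal"})
  target = Agent("B", subclass_of={"t:Chat": "t:Animal"}, alignment={"s:Cat": "t:Chat"})
  print(play_game(source, target, {("ex:felix", "rdf:type", "s:Cat")}))
  # -> {'lost': set(), 'gained': {('ex:felix', 'rdf:type', 's:Animal')}}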

With this game, we expect that the source agent will be able to identify the information/knowledge that has been lost, incoherences/inconsistencies, and new information/knowledge. For instance, if it often happens during games that resource descriptions come back with an extra type, then we could induce that the original and extra types are in a subsumption relation. We could also obtain resource descriptions that are extended with property values. In addition to enriching the data, we could then induce that the types of the resource are part of the domain of the property. This will guide the adaptations that the agent will make to its knowledge graph.
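
As a hint of how such regularities could be detected, here is a sketch of a possible induction heuristic over game traces; the trace format, function name and support threshold are illustrative assumptions, not something prescribed by the proposal.

  # Hypothetical heuristic: if descriptions typed with class C regularly come back
  # with the extra type D, conjecture the subsumption C is-a D.
  from collections import Counter

  def candidate_subsumptions(game_traces, min_support=10):
      """game_traces: iterable of (original_types, returned_types) set pairs, one per
      exchanged resource. Returns (C, D) pairs supported at least min_support times."""
      support = Counter()
      for original_types, returned_types in game_traces:
          for c in original_types:
              for d in returned_types - original_types:  # only the extra types
                  support[(c, d)] += 1
      return [pair for pair, count in support.items() if count >= min_support]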

The work will consist of modelling this game, implementing it in our Lazy lavender platform, defining the notions of loss and gain of information/knowledge and of incoherence/inconsistency, and proposing adequate adaptation operators that react to these situations.
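
By way of illustration, an adaptation operator could look like the following sketch, which adds a sufficiently supported conjectured subsumption to the agent's toy ontology; the representation and the guard against direct cycles are assumptions for the example, and designing adequate operators is precisely part of the work.

  # Hypothetical adaptation operator on a toy ontology (class -> set of superclasses).
  def adopt_subsumption(ontology, c, d):
      """Record the conjecture c is-a d, unless it would create a direct cycle."""
      if c == d or c in ontology.get(d, set()):
          return False                     # d is already (directly) below c
      ontology.setdefault(c, set()).add(d)
      return True

  ontology = {}
  adopt_subsumption(ontology, "s:Cat", "s:Animal")
  print(ontology)  # {'s:Cat': {'s:Animal'}}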

This work is part of an ambitious program towards what we call cultural knowledge evolution. It is part of the MIAI Knowledge communication and evolution chair and as such may lead to a PhD thesis.

References:

[Anslow & Rovatsos, 2015] Michael Anslow, Michael Rovatsos, Aligning experientially grounded ontologies using language games, Proc. 4th international workshop on graph structure for knowledge representation, Buenos Aires (AR), pp15-31, 2015 [DOI:10.1007/978-3-319-28702-7_2]
[Bordes et al., 2011] Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio, Learning structured embeddings of knowledge bases. Proc. the 25th AAAI Conference on Artificial Intelligence (AAAI'11). AAAI Press 301-306, 2011.
[Chocron & Schorlemmer, 2016] Paula Chocron, Marco Schorlemmer, Attuning ontology alignments to semantically heterogeneous multi-agent interactions, Proc. 22nd European Conference on Artificial Intelligence, Der Haague (NL), pp871-879, 2016 [DOI:10.3233/978-1-61499-672-9-871]
[Euzenat & Shvaiko, 2013] Jérôme Euzenat, Pavel Shvaiko, Ontology matching, 2nd edition, Springer-Verlag, Heildelberg (DE), 2013
[Euzenat, 2014] Jérôme Euzenat, First experiments in cultural alignment repair (extended version), in: Proc. 3rd ESWC workshop on Debugging ontologies and ontology mappings (WoDOOM), Hersounisos (GR), LNCS 8798:115-130, 2014 https://exmo.inria.fr/files/publications/euzenat2014c.pdf
[Euzenat, 2017] Jérôme Euzenat, Communication-driven ontology alignment repair and expansion, in: Proc. 26th International joint conference on artificial intelligence (IJCAI), Melbourne (AU), pp185-191, 2017 https://moex.inria.fr/files/papers/euzenat2017a.pdf
[Mesoudi 2006] Alex Mesoudi, Andrew Whiten, Kevin Laland, Towards a unfied science of cultural evolution, Behavioral and brain sciences 29(4):329–383, 2006 http://alexmesoudi.com/s/Mesoudi_Whiten_Laland_BBS_2006.pdf
[Steels, 2012] Luc Steels (ed.), Experiments in cultural language evolution, John Benjamins, Amsterdam (NL), 2012

Reference number: Proposal n°3017

Master profile: M2R MOSIG, Artificial intelligence and the web profile.

Advisors: Jérôme Euzenat (Jerome:Euzenat#inria:fr) and Jérôme David (Jerome:David#inria:fr).

Team: The work will be carried out in the mOeX team, common to INRIA and Université Grenoble Alpes. mOeX is dedicated to studying knowledge evolution through adaptation. It gathers permanent researchers from the Exmo team, which has taken an active part over the past 15 years in the development of the semantic web and, more specifically, of ontology matching.

Laboratory: LIG.

Procedure: Contact us and provide a curriculum vitæ and possibly a motivation letter and references.