Ontologies are representations of the entities that can be found in the world. As the world, and our standpoint on it, change, ontologies cannot remain static. We aim to evolve ontologies while agents use them to communicate about the world they inhabit. This may be achieved by exchanging pieces of ontologies or by adapting those ontologies that hinder efficient communication.
These problems may be approached either theoretically or experimentally, within the framework of cultural evolution. Experimental cultural evolution provides a population of agents with interaction games that are played at random. In reaction to the outcome of such games, agents adapt their knowledge. Hypotheses can be tested by precisely crafting the rules that agents follow in games and observing the consequences.
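The loop described above can be sketched in a few lines, in the style of the naming games of [Steels, 2012]. All names, parameters and the adaptation rule below are illustrative assumptions, not the games used in the cited experiments:

```python
import random

# Minimal sketch of an experimental cultural evolution loop (hypothetical
# setting): agents pair up at random, play a game, and adapt on failure.

MEANINGS = ["m1", "m2", "m3"]
WORDS = ["a", "b", "c", "d"]

class Agent:
    def __init__(self):
        # Each agent starts with its own random word for every meaning.
        self.lexicon = {m: random.choice(WORDS) for m in MEANINGS}

def play_game(speaker, hearer):
    """One interaction game: success iff both agents use the same word."""
    meaning = random.choice(MEANINGS)
    word = speaker.lexicon[meaning]
    if hearer.lexicon[meaning] == word:
        return True
    # Adaptation in reaction to failure: the hearer adopts the speaker's word.
    hearer.lexicon[meaning] = word
    return False

def run(population_size=10, games=2000, seed=42):
    random.seed(seed)
    agents = [Agent() for _ in range(population_size)]
    successes = sum(play_game(*random.sample(agents, 2)) for _ in range(games))
    return successes / games

# Communicative success rises as words propagate through the population.
success_rate = run()
```

Crafting the rules of `play_game` and the adaptation operator, then observing quantities such as `success_rate`, is precisely how hypotheses are tested in this framework.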
Our ambition is to adapt the successful cultural language evolution approach [Steels, 2012] to the evolution of the way agents represent knowledge [Euzenat, 2014; Anslow & Rovatsos, 2015; Chocron & Schorlemmer, 2016]. We have applied this approach to ontology alignment repair, i.e., the improvement of incorrect alignments [Euzenat, 2014; 2017]. For that purpose, we performed a series of experiments in which agents react to mistakes in ontology alignments, which express relations between concepts of different ontologies [Euzenat & Shvaiko, 2013]. Agents only know their own ontologies and their alignments with others, and they act in a fully decentralised way. We showed that cultural repair converges towards successful communication by improving the objective correctness of alignments.
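Schematically, the repair principle can be illustrated as follows. This is a deliberately simplified, hypothetical setting (two agents labelling the same three underlying categories differently), not the actual operators of the cited experiments:

```python
import random

# Hypothetical sketch of alignment repair: an alignment translates the
# speaker's class names into the hearer's. On a failed game, the hearer
# repairs the faulty correspondence, so only correct ones survive.

random.seed(1)
CATEGORIES = [0, 1, 2]
speaker_label = {c: f"s{c}" for c in CATEGORIES}
hearer_label = {c: f"h{c}" for c in CATEGORIES}

# Start from a deliberately scrambled alignment: speaker label -> hearer label.
alignment = {"s0": "h1", "s1": "h2", "s2": "h0"}

def play_repair_game():
    """One game: success iff the alignment translates correctly."""
    obj = random.choice(CATEGORIES)      # object both agents perceive
    spoken = speaker_label[obj]          # speaker names the object's class
    expected = hearer_label[obj]         # hearer's own classification
    if alignment[spoken] == expected:
        return True
    alignment[spoken] = expected         # repair the incorrect correspondence
    return False

for _ in range(50):
    play_repair_game()

# After a few failures, every correspondence has been repaired.
repaired = all(alignment[f"s{c}"] == f"h{c}" for c in CATEGORIES)
```

Each failure here improves the objective correctness of the alignment, which is the property the cultural repair experiments measure at the scale of a whole population.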
The topic of this thesis proposal is to develop the acquisition of ontologies through this approach more systematically. This involves learning ontologies from the environment and from communication with other agents, as well as using them to communicate with others.
These experiments may be systematically developed along many dimensions. This involves designing knowledge games and the operators by which agents adapt their knowledge in response to the environment and to interaction with others. For instance, agents may arbitrate between maintaining alignments and adopting the ontologies of other agents. The environment in which agents live may change, so that they have to evolve their ontologies, or new agent populations may be introduced dynamically.
This should contribute to establishing whether agents are able to reach perfect understanding, whether they can describe their environment at the level of precision at which they perceive it, and whether they can develop identical or logically equivalent knowledge.
This work is part of an ambitious program towards what we call cultural knowledge evolution. Its results may be experimental or theoretical in nature, and it may provide practical contributions, e.g., new adaptation operators, or methodological ones, e.g., better experimental procedures.
[Anslow & Rovatsos, 2015] Michael Anslow, Michael Rovatsos, Aligning experientially grounded ontologies using language games, Proc. 4th international workshop on graph structures for knowledge representation, Buenos Aires (AR), pp. 15-31, 2015 [DOI:10.1007/978-3-319-28702-7_2]
[Chocron & Schorlemmer, 2016] Paula Chocron, Marco Schorlemmer, Attuning ontology alignments to semantically heterogeneous multi-agent interactions, Proc. 22nd European conference on artificial intelligence (ECAI), The Hague (NL), pp. 871-879, 2016 [DOI:10.3233/978-1-61499-672-9-871]
[Euzenat & Shvaiko, 2013] Jérôme Euzenat, Pavel Shvaiko, Ontology matching, 2nd edition, Springer-Verlag, Heidelberg (DE), 2013
[Euzenat, 2014] Jérôme Euzenat, First experiments in cultural alignment repair (extended version), in: Proc. 3rd ESWC workshop on Debugging ontologies and ontology mappings (WoDOOM), Hersonissos (GR), LNCS 8798:115-130, 2014 ftp://ftp.inrialpes.fr/pub/exmo/publications/euzenat2014c.pdf
[Euzenat, 2017] Jérôme Euzenat, Communication-driven ontology alignment repair and expansion, in: Proc. 26th International joint conference on artificial intelligence (IJCAI), Melbourne (AU), 2017 (to appear) ftp://ftp.inrialpes.fr/pub/moex/papers/euzenat2017a.pdf
[Steels, 2012] Luc Steels (ed.), Experiments in cultural language evolution, John Benjamins, Amsterdam (NL), 2012
Qualification: Master or equivalent in computer science.
Doctoral school: Doctoral school MSTII, Université Grenoble Alpes.
Advisor: Jérôme Euzenat (Jerome:Euzenat#inria:fr) and Manuel Atencia (Manuel:Atencia#inria:fr).
Group: The work will be carried out in the mOeX team, common to INRIA and LIG. mOeX is dedicated to studying knowledge evolution through adaptation. It gathers permanent researchers from the Exmo team, which has played an active part over the past 15 years in the development of the semantic web and, more specifically, of ontology matching.
Place of work: The position is located at INRIA Grenoble Rhône-Alpes in Montbonnot, a major computer science research lab, in a stimulating research environment.
Hiring date: Fourth quarter 2017 (October 1st in principle).
Duration: 36 months
Salary: From 1600 EUR/month (benefits included, net before income tax), i.e., 2000 EUR/month gross.
Contact: For further information, contact us.
Procedure: Visit INRIA's presentation (including FAQ and forms)
File: Provide a curriculum vitæ, a motivation letter and references. A Master's thesis report is very welcome; we will also ask for your Master's grades, so include them if you have them.