Constraining machine learning with society-induced symbolic knowledge

PhD position / Thesis topic

Agent knowledge can be learnt directly from the environment but, in a social context, it can also be learnt from interaction with other agents. Social learning is even supposed to be faster. These two types of learning constrain each other. Although knowledge from the environment naturally constrains social knowledge, how this happens in the opposite direction is less clear. The goal of this proposal is to study, in the context of cultural knowledge evolution, the mechanisms by which machine learning can be constrained by socially acquired symbolic knowledge.

Cultural knowledge evolution deals with the evolution of knowledge representation in a group of agents. For that purpose, cooperating agents adapt their knowledge to the situations they are confronted with and the feedback they receive from others. This framework has been considered in the context of evolving natural languages [Steels, 2012]; we have applied it to ontology alignment repair, i.e. the improvement of incorrect alignments [Euzenat, 2014; 2017]. We have shown that cultural repair converges towards successful communication by improving the objective quality of knowledge.

We want to consider how the adaptation of knowledge, resulting from agent communication, may be articulated with learnt knowledge. Machine learning induces knowledge from examples and has proved useful in a variety of tasks. Agent ontologies may be acquired from the environment in which agents evolve. However, the learnt knowledge is used for communicating with other agents, and this may lead to adaptations constraining that knowledge. This is particularly important when adaptation concerns social norms, e.g. enforcing non-discrimination.

If agents continuously learn from their environment, these two types of knowledge acquisition conflict: adaptations will be cancelled by the results of relearning. Adapting the learnt knowledge is thus not sufficient; it is necessary to adapt the learning process itself. The question to be addressed by this proposal is how adaptation may be taken into account to influence learning.

Various solutions are possible, depending on the situation: bias may be added to the process by generating new examples or modifying the training set; reward may be integrated within a reinforcement mechanism; backfeeding the learning process and adapting weights is a classical approach, but it is not always possible, in particular when knowledge is learnt from different stimuli; it may also be possible to control the features from which learning is performed.
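The first option, constraining a learner by modifying its training set, can be illustrated by a minimal sketch. All names, data and the toy 1-nearest-neighbour learner below are hypothetical and only stand in for the general idea: a symbolic rule acquired from other agents relabels violating examples and injects new ones, so that relearning from the data preserves the social adaptation.

```python
# Hypothetical sketch: constraining a learner by modifying its training set.
# rule(x) stands for symbolic knowledge socially acquired from other agents;
# it returns a mandated label, or None when the rule does not apply.

def one_nn(train, x):
    """1-nearest-neighbour prediction over (point, label) pairs."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def apply_constraint(train, rule):
    """Relabel examples violating the rule and inject fresh consistent ones,
    so that relearning cannot cancel the social adaptation."""
    fixed = [(x, rule(x) if rule(x) is not None else y) for x, y in train]
    injected = [(x, rule(x)) for x in range(10) if rule(x) is not None]
    return fixed + injected

# Knowledge learnt from the environment: small values -> "a", large -> "b".
data = [(1, "a"), (2, "a"), (6, "b"), (8, "b")]

# Socially acquired rule: values above 7 must be labelled "a".
rule = lambda x: "a" if x > 7 else None

constrained = apply_constraint(data, rule)
print(one_nn(data, 9))         # "b": learnt from the environment alone
print(one_nn(constrained, 9))  # "a": the social constraint now dominates
```

The same scheme transposes to statistical learners: the modified training set, rather than the learnt model, carries the constraint, so it survives any subsequent retraining.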

We are seeking answers both at the general level, independently of the type of learning and the form of feedback, and at more specific levels, for particular types of learning and knowledge representation. For that purpose, experiments will have to be set up, showing the impact of the proposed solutions on the quality, and in particular the fitness, of the generated knowledge.

This work is part of an ambitious programme towards what we call cultural knowledge evolution, partly funded by the MIAI Knowledge communication and evolution chair.

References:

[Euzenat, 2014] Jérôme Euzenat, First experiments in cultural alignment repair (extended version), in: Proc. 3rd ESWC workshop on Debugging ontologies and ontology mappings (WoDOOM), Hersounisos (GR), LNCS 8798:115-130, 2014 https://exmo.inria.fr/files/publications/euzenat2014c.pdf
[Euzenat, 2017] Jérôme Euzenat, Communication-driven ontology alignment repair and expansion, in: Proc. 26th International joint conference on artificial intelligence (IJCAI), Melbourne (AU), pp185-191, 2017 https://moex.inria.fr/files/papers/euzenat2017a.pdf
[Steels, 2012] Luc Steels (ed.), Experiments in cultural language evolution, John Benjamins, Amsterdam (NL), 2012

Qualification: Master's degree or equivalent in computer science.

Doctoral school: MSTII, Université Grenoble Alpes.

Advisor: Jérôme Euzenat (Jerome:Euzenat#inria:fr) and Nabil Layaïda (Nabil:Layaida#inria:fr).

Group: The work will be carried out in the mOeX and Tyrex teams, common to INRIA and LIG. mOeX is dedicated to studying knowledge evolution through adaptation. It gathers researchers who have taken an active part, over the past 15 years, in the development of the semantic web and more specifically ontology matching and data interlinking.

Place of work: The position is located at INRIA Grenoble Rhône-Alpes, Montbonnot, a major computer science research lab offering a stimulating research environment.

Hiring date: as soon as possible.

Duration: 36 months.

Salary: 1760€/month (gross, before social contributions and taxes).

Deadline: as soon as possible.

Contact: For further information and details of the application procedure, contact us.

File: Provide a curriculum vitæ, a motivation letter and references. Providing a Master's thesis report is a strong plus; since your Master's transcripts will be requested in any case, include them if you have them.