MIAI Knowledge communication and evolution chair

First workshop on Knowledge communication and evolution (IMAG building, February 19th).

Chair holder: Jérôme Euzenat (INRIA)

Participants: Manuel Atencia (UGA), Jérôme David (UGA), Yves Demazeau (CNRS), Jérôme Euzenat (INRIA), Pierre Genevès (CNRS), Nabil Layaïda (INRIA)

The MIAI Knowledge communication and evolution chair aims at understanding and developing mechanisms for seamlessly improving knowledge. It studies the evolution of knowledge in a society of people and AI systems by applying evolution theory to knowledge representation.

This chair studies knowledge evolution both experimentally, through multi-agent simulation, and theoretically, through logical modelling. For that purpose, it brings together participants with both backgrounds.

Knowledge evolution in an AI-enhanced society In a mixed society of people and AI-based systems, knowledge is acquired and developed by both humans and machines. The development of knowledge in a society is the result of a continuous evolution which may result from acquisition (learning), from direct communication (explicit explanation, book reading) or from interoperation (day-to-day cooperation, perceived behaviour). Knowledge depends on the society that uses it and has to evolve with the changing environment it represents, without disruption. The viability of societies involving both people and artificial intelligence systems relies on the ability of systems to adapt their knowledge, and thus their behaviour, in reaction to communicated knowledge.

Goals We aim at understanding and developing general mechanisms by which a society evolves its knowledge through continuous adaptation to its environment and to other societies. More specifically, we assess under which conditions local knowledge adaptation operators lead to global epistemic properties.

Example Let us take as an example a robot designed to both entertain and look after an elderly person. Such a robot would have to acquire knowledge about the person's physical environment. It would also have to interact with this person and her relatives. The robot will have to adapt its knowledge to the concerns of the person cared for, e.g. taking prescribed medicines, greeting relatives on their birthdays, and holding a conversation. Moreover, through time, both the material environment and the intellectual capabilities of the person will evolve. The robot will have to respond to this seamlessly, by adapting its knowledge and behaviour. However, this raises challenging questions. Should the robot adapt its knowledge to that of the helped person to achieve better communication, or should it preserve its own knowledge and challenge her when her capabilities decline? It will have to maintain knowledge diversity to achieve both goals!

Approach We adopt a radical approach, combining local adaptation techniques coming from the knowledge representation field —such as knowledge and belief revision [Euzenat 2015a] or agent negotiation and argumentation [Jiménez-Ruiz 2016a]— with evolutionary modelling [Steels 2012a]. We build on both evolutionary epistemology and experimental cultural evolution, which we adapt to knowledge representation. Evolutionary epistemology considers natural selection principles as a general-purpose evolution mechanism that can be applied to various fields, and to knowledge in particular [Plotkin 1993a]. Experimental cultural evolution [Steels 2012a] simulates the evolution of a culture, e.g. a language, and measures its properties (convergence, fitness, parsimony). Both approaches provide insights into how knowledge can evolve.
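The naming game is the canonical instance of such a cultural evolution experiment: agents repeatedly pair up as speaker and hearer; a successful interaction makes both keep only the used name, a failed one spreads the name to the hearer, and a shared convention emerges without any central control. The following is a minimal sketch only (the agent count, round count and naming scheme are illustrative, not taken from [Steels 2012a]):

```python
import random

def naming_game(n_agents=20, n_rounds=10000, seed=0):
    """Minimal naming game: agents converge on a single shared name for one object."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]        # each agent's candidate names
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)       # pick a speaker and a hearer
        if not vocab[s]:
            vocab[s].add(f"w{rng.randrange(10**6)}")  # speaker invents a name
        name = rng.choice(sorted(vocab[s]))
        if name in vocab[h]:                        # success: both keep only that name
            vocab[s], vocab[h] = {name}, {name}
        else:                                       # failure: hearer adopts the name
            vocab[h].add(name)
    return vocab

vocab = naming_game()
# Convergence can be measured as the fraction of agents sharing the dominant name.
```

Convergence (all agents ending up with one common name) is the kind of global epistemic property that such simulations measure against purely local interaction rules.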

In [Euzenat 2017a], we adapted experimental cultural evolution to knowledge representation. In particular, we consider ontology alignments —expressing relations across the concepts of two ontologies— as knowledge. For that purpose, we perform experiments in which agents play games using such alignments and react to mistakes in instance classification by applying adaptation operators to their knowledge. Agents only have access to their own knowledge (here, their ontology and its alignments with others') and act in a fully decentralised way. We showed that this cultural repair approach converges towards successful communication while improving the objective quality of knowledge (correctness and completeness). In particular, it performs better than direct logical approaches.
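These dynamics can be illustrated by a deliberately simplified sketch (a hypothetical two-agent setting with toy "ontologies" over integers; the actual games and operators of [Euzenat 2017a] are richer): one agent classifies an instance, uses a correspondence to predict the other agent's class, and the correspondence is deleted —the simplest adaptation operator— whenever the prediction fails, so that only correct correspondences survive:

```python
import random

def repair_game(alignment, a_classes, b_classes, rounds=2000, seed=1):
    """Toy alignment repair: delete a correspondence whenever it causes a mistake."""
    rng = random.Random(seed)
    alignment = set(alignment)
    for _ in range(rounds):
        x = rng.randrange(100)                             # a shared instance
        c = next(n for n, p in a_classes.items() if p(x))  # agent A classifies x
        used = [(m, d) for (m, d) in alignment if m == c]
        if not used:
            continue
        m, d = rng.choice(used)        # A predicts B's class via the alignment
        if not b_classes[d](x):        # B signals a classification mistake
            alignment.discard((m, d))  # adaptation operator: delete
    return alignment

# Toy "ontologies": classes are predicates over instances 0..99.
a_classes = {"lt50": lambda x: x < 50, "ge50": lambda x: x >= 50}
b_classes = {"lt60": lambda x: x < 60, "ge60": lambda x: x >= 60}
# One correct correspondence (lt50 is subsumed by lt60) and two incorrect ones.
start = {("lt50", "lt60"), ("ge50", "ge60"), ("lt50", "ge60")}
final = repair_game(start, a_classes, b_classes)
```

After enough games, the incorrect correspondences have been deleted and only the correct one remains, which is how local repairs improve the objective correctness of the shared knowledge.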

The initial focus on symbolic knowledge is justified by its prominent use in explicit communication. However, knowledge is also grounded in the perceived environment, so the whole approach has to be combined in a consistent way with machine learning. More precisely, (symbolic) knowledge can guide learning, and learning can generate new knowledge in an evolving and distributed setting. This requires specifically designed adaptation operators.

We have PhD proposals open in relation to the chair.

Program This chair is organised in four broad work packages (each bullet point may be a topic of study):

WP1: Variety and combination of representation and adaptation
We will broaden the scope of the approach by considering evolving several types of knowledge representation (symbolic or non-symbolic), games and operators.
WP2: Generalised evolution
We study the relationship with generalised evolution, in particular the replicator/interactor approach in which knowledge is the replicator and agent behaviour the interactor. This will challenge the status of adaptation operators: if they become knowledge themselves, they turn from instruments of selection into objects of longer-term selection.
WP3: Extension to non-symbolic knowledge
Adopting non-symbolic, or hybrid, knowledge representation requires defining adequate representations (not necessarily explicit) and adaptation operators.
WP4: Evolution facing disruptive events
We will study the robustness of the designed operators with respect to disruptive events (in the environment or in the society). This also involves the role of representation diversity in this context, and specifically whether and how knowledge diversity provides a benefit in such a context.
Finally, we plan to confront our results with models developed in the social sciences and the humanities, through partnerships to be further developed during the course of the project.

Impact The acceptability of AI by society has to be built through knowledge communication, e.g. for explaining machine decisions or for integrating human constraints, such as ethical ones. The important challenge for AI is to ensure that the knowledge actioned by artificial systems within a society is capable of seamless evolution driven by the feedback of its actors, complementing design-time enforcement. Our work may be directly tested within closed societies such as those formed around patients cared for by helper robots, their families and the medical team.

Relation with other chairs This chair is a consumer of techniques developed in machine learning, is complementary to explainable AI, and produces results to be exploited, in particular, by social robotics.

International collaboration Renowned international researchers working on relevant topics will take part in this work: Luciano Serafini (learning and reasoning, FBK, Trento), Terry Payne and Valentina Tamma (meaning negotiation, U. Liverpool), and Marco Schorlemmer (distributed knowledge coordination, IIIA-CSIC, Barcelona).