We consider how existing disciplines deal with the problem of knowledge evolution. This state of the art is divided into three broad parts: knowledge representation, cultural evolution, and multi-agent systems.
Knowledge representation has been studied in artificial intelligence for decades, leading to semantically well-defined formalisms. Shared representations have been promoted in the context of the semantic web as ontologies. We will consider ontologies and data expressed using the vocabulary of these ontologies, alignments between these ontologies, and links between their data. This provides a well-integrated framework, designed for sharing knowledge at a wide scale, for which both formal semantics and tools exist [Baader 2003a, Antoniou 2012a]. In particular, tools are available for reasoning with these ontologies [Antoniou 2012a], for matching them [Euzenat 2013c], and for revising ontologies and alignments [Euzenat 2015a, Meilicke 2011a]. Nonetheless, our results should apply to wider classes of representation formalisms.
Sharing knowledge leads to problems of heterogeneity: different individuals or organisations will develop and share different representations. To deal with these problems, we have developed ontology matching [Euzenat 2013c]. Ontology matching finds relations between ontology entities and expresses them as sets of correspondences, called alignments.
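A correspondence is commonly represented as a tuple relating an entity of each ontology through a relation, together with a confidence degree; an alignment is then a set of such correspondences. The following is a minimal illustrative sketch, assuming string identifiers for entities (all names and values below are hypothetical, not taken from an actual matching tool):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """A correspondence between two ontology entities."""
    entity1: str          # entity from the first ontology, e.g. an IRI
    entity2: str          # entity from the second ontology
    relation: str         # e.g. "=" (equivalence), "<" (subsumption)
    confidence: float = 1.0  # degree of trust in this correspondence

# An alignment is a set of correspondences (hypothetical example).
alignment = {
    Correspondence("onto1#Chair", "onto2#Seat", "<", 0.9),
    Correspondence("onto1#Person", "onto2#Human", "=", 1.0),
}

# Before use, an alignment may be filtered by a confidence threshold.
trusted = {c for c in alignment if c.confidence >= 0.95}
```

Representing alignments as plain sets makes the revision operations discussed below natural to express: repairing an alignment amounts to removing some of its correspondences.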
When different knowledge representations are confronted, they may be compatible or incompatible; in logical terms, merging them may be consistent or inconsistent. In the latter case, belief and knowledge revision [Alchourron 1985a] has been developed for adopting a consistent theory that minimises the changes required to recover consistency. It has recently been adapted to alignment repair [Meilicke 2011a] and to the revision of networks of ontologies [Euzenat 2015a]. However, revision only characterises the set of possible solutions: it does not take into account the situation in which representations are used when selecting the revision to apply. Hence, applying revision blindly may lead to consistent knowledge that is irrelevant to the context in which agents live. Through the introduction of adaptation operators, we will allow the revision to be selected according to the situation [Euzenat 2014c]. Taking advantage of the context has already been studied for ontology matching: interaction-situated semantic alignment [Atencia 2012a] considers ontology matching as framed by the interaction protocols that agents use to communicate. Agents induce alignments between the different ontologies that they use, depending on the success expectation of each correspondence with respect to the protocol. Failing dialogues lead them to revise their expectations and the associated correspondences. This approach has recently been studied from the angle of cultural evolution, providing encouraging results [Chocron 2016a].
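The distinction between revision, which characterises a set of acceptable solutions, and an adaptation operator, which picks one of them based on the situation, can be sketched as follows. This is a hypothetical illustration, not an actual operator from the cited work: candidate repairs (sets of correspondences whose removal restores consistency) are assumed to be given, and the situation is summarised by how often each correspondence has been used by the agents.

```python
def select_repair(candidate_repairs, usage_count):
    """Among the repairs characterised by revision, select one
    (hypothetical adaptation operator): prefer repairs that remove
    fewer correspondences, then those whose removed correspondences
    were least used in the agents' current situation."""
    def cost(repair):
        return (len(repair), sum(usage_count.get(c, 0) for c in repair))
    return min(candidate_repairs, key=cost)

# Hypothetical example: three candidate repairs over correspondences c1-c3.
repairs = [{"c1"}, {"c2"}, {"c1", "c3"}]
usage = {"c1": 5, "c2": 1}  # c1 is heavily used, c3 never
chosen = select_repair(repairs, usage)  # removes the least-used c2
```

Minimal-change revision alone could not distinguish {"c1"} from {"c2"}; it is the usage information, standing in for the agents' context, that breaks the tie.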
Other approaches, such as Bayesian networks, neural networks, or Markov decision processes, address this problem by introducing non-symbolic processing. They hardly suffer from inconsistency, because their numerical nature smooths out conflicts. However, this same numerical basis hinders the extraction of an explicit, shareable knowledge representation that may be communicated across agents.
The notion of cultural evolution applies an idealised version of the theory of evolution to culture. Culture, in this context, refers to a "patrimony of knowledge accumulating over generations" [Cavalli-Sforza 1981a]. It is somewhat related to the notion of meme [Dawkins 1976a], which follows the genetic evolution analogy more closely. Cultural evolution has been introduced in ethology [Dawkins 1976a, Hauser 1997a], population dynamics [Cavalli-Sforza 1981a] and anthropology [Richerson 2005a]. In such fields, culture may be bird song melodies, food regimes, tool designs, or psychological dispositions. Work in cultural evolution is usually based on the observation of long-term behaviours: it relies on the long-term observation of populations or on the study of archaeological artefacts. In its quantitative form, it is modelled as dynamic systems and compared to observations [Cavalli-Sforza 1981a, Richerson 2005a]. Computers have recently been used for exploring small-scale phenomena, e.g., the influence of population size on artefact complexity [Derex 2013a].
Cultural evolution experiments are performed through multi-agent simulation: a society of agents adapts its culture through a precisely defined protocol [Axelrod 1997a, Bryson 2014a]. Agents repeatedly and randomly perform a specific task, called a game, and their evolution is monitored. This protocol aims to discover experimentally the state that agents may reach and the properties of that state.
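The protocol above can be sketched as a generic simulation loop. This is an illustrative skeleton under simplifying assumptions (two agents per game, adaptation handled inside the game function); the names and the monitored property are ours, not from a specific cited framework:

```python
import random

def cultural_evolution(agents, play_game, steps, seed=0):
    """Generic experimental loop: at each step, two randomly chosen
    agents play a game; agents adapt internally upon failure.
    Returns the success history for monitoring."""
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        a, b = rng.sample(agents, 2)   # random pairing of agents
        history.append(play_game(a, b))
    return history

def success_rate(history, window=100):
    """Monitored property: success rate over a sliding window,
    expected to rise as agents converge to a shared state."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Toy game for illustration: agents hold a token; the game succeeds
# when tokens match, and the hearer copies the speaker's token on failure.
agents = [{"token": i} for i in range(5)]
def play(a, b):
    if a["token"] == b["token"]:
        return True
    b["token"] = a["token"]
    return False

history = cultural_evolution(agents, play, steps=500)
```

The state reached (here, whether all agents share one token) and the shape of the success curve are precisely what such experiments are designed to observe.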
Experimental cultural evolution has been successfully and convincingly applied to the evolution of natural language [Steels 2012a, Spranger 2016a]. Agents play language games and adjust their vocabulary and grammar as soon as they fail to communicate properly, i.e., when they misuse a term or do not behave in the expected way. This approach has shown its capacity to model various settings in a systematic framework and to provide convincing explanations of linguistic phenomena. Such experiments have shown how agents can agree on a colour coding system or on a grammatical case system.
This approach has not yet been applied directly to knowledge representation. Although language experiments involve modifying cognitive representations, e.g., of colour [Steels 2012a] or position [Spranger 2016a], their properties are measured through language. So far, the closest works have only considered the terminological aspects of ontologies, i.e., associations between terms and concepts [Steels 1998a, Reitter 2011a]. This is the goal of the well-known naming game, in which agents learn to associate terms with objects or concepts [Steels 1998a]. Experiments have focussed on the way agents agree on terms for naming concepts (chair is the same as seat) and not on the way concepts are organised (through subsumption or disjointness relations, for instance; e.g., what is the relation between a chair and a seat with four legs?). Only recently have we [Euzenat 2014c] and others [Chocron 2016a] started to deal with elaborate symbolic knowledge representation using a cultural evolution approach.
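To make the terminological focus of the naming game concrete, here is a minimal sketch of one of its simplest variants, under our own simplifying assumptions (agents keep unscored candidate-name lists and prune them on success; fuller versions score and damp names):

```python
import random

class NamingAgent:
    """Minimal naming-game agent: maintains candidate names per object."""
    def __init__(self):
        self.names = {}  # object -> list of candidate names

    def name_for(self, obj, rng):
        """Return a preferred name, inventing one if none is known."""
        if not self.names.get(obj):
            self.names[obj] = [f"w{rng.randrange(10**6)}"]
        return self.names[obj][0]

    def hears(self, obj, name):
        """Success if the name is known; otherwise adopt it as a candidate."""
        known = self.names.setdefault(obj, [])
        if name in known:
            known[:] = [name]   # success: discard competing candidates
            return True
        known.append(name)      # failure: remember the heard name
        return False

def naming_game(speaker, hearer, obj, rng):
    name = speaker.name_for(obj, rng)
    success = hearer.hears(obj, name)
    if success:
        speaker.names[obj] = [name]  # speaker also commits to the name
    return success
```

Note that only term-object associations evolve here; nothing in the agents' state captures how concepts relate to one another, which is precisely the gap mentioned above.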
BDI (Beliefs, Desires, Intentions) is the dominant paradigm in multi-agent systems. Agents attribute beliefs, desires and intentions to themselves and to other agents. On the theoretical side, agent knowledge is expressed in a modal logic that allows agents to reason about such beliefs, desires and intentions (and in particular to compute plans that allow them to fulfil their desires) [Wooldridge 2000a]. Focusing more on agents' knowledge, epistemic logics [Fagin 1995a] are very appealing for reasoning about what others know, and their dynamic version accounts for events [van Ditmarsch 2007a]. Hence, these logics may be useful for abstracting what occurs in the situation we consider; they are less useful for agent implementations, as they often adopt a global view of phenomena.
As mentioned previously, experimental cultural evolution directly uses multi-agent simulations. Such social simulations are also related to the artificial life field, which may include evolutionary simulations. For instance, Aevol simulates the evolution of bacteria colonies over hundreds of thousands of generations [Batut 2013a].
Cultural evolution suggests connections with various bio-inspired approaches, such as evolutionary computation [Eiben 2015a], including memetics [Dawkins 1976a], or biological models, such as evolutionary game theory [Maynard Smith 1982a]. An important difference is that cultural evolution focuses on horizontal rather than vertical, i.e., genetic, transmission, because the former leads to faster adaptation [Cavalli-Sforza 1981a]: agents directly manipulate their culture, through adaptation operators, instead of depending on random mutations. Hence, we will not attempt to combine these approaches in the mOeX project.
Moreover, our goal is to study knowledge evolution and which knowledge properties are satisfied by specific operators; thus, we can find only limited inspiration in these works.