Our work on cultural knowledge evolution was first applied to alignment evolution before being extended to ontology evolution. Experiments have revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies or even create new alignments and ontologies.
Alignments between ontologies may be established by agents holding such ontologies attempting to communicate and taking appropriate action when communication fails. We have tested this approach on alignment repair, i.e. the improvement of incorrect alignments. For that purpose, we performed a series of experiments in which agents react to mistakes in alignments. Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassify objects from one ontology into the other. Such alignments may be provided by dedicated algorithms [Da Silva 2020a], but their accuracy is far from satisfactory. Yet agents have to proceed. Agents only know their own ontologies and their alignments with others, and they act in a fully decentralised way. They can take advantage of their experience to evolve alignments: upon communication failure, they adapt the alignments so as to avoid repeating the same mistake.
Such repair experiments have been performed [Euzenat 2014c] and revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies.
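The interaction loop can be pictured with a minimal sketch. This is not the published implementation: the `Agent` class, its dictionary-based ontology, and the pair-based alignment representation are hypothetical simplifications, and only the simplest adaptation (deleting the faulty correspondence) is shown.

```python
class Agent:
    def __init__(self, ontology, alignment):
        self.ontology = ontology    # maps instance -> class (hypothetical encoding)
        self.alignment = alignment  # set of (own_class, other_class) correspondences

    def classify(self, instance):
        return self.ontology[instance]

    def translate(self, own_class):
        # targets of the correspondences whose source matches the agent's class
        return {tgt for src, tgt in self.alignment if src == own_class}

def repair_game(asker, answerer, instance):
    """One interaction: on communication failure, delete the faulty correspondence."""
    own = asker.classify(instance)
    for target in asker.translate(own):
        if target != answerer.classify(instance):   # failure detected
            asker.alignment.discard((own, target))  # 'delete' adaptation
```

Repeated over many random instances, such games progressively prune the incorrect correspondences from the network of alignments.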
We repeated these experiments and, using new measures, showed that the quality of previous results had been underestimated. We introduced new adaptation operators (refine, addjoin and refadd) that improve on those previously considered (delete, replace and add). We also allowed agents to go beyond the initial operators in two ways [Euzenat 2017a]: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies several desirable properties.
The results above show 100% precision for all adaptation operators, i.e. all correspondences in the resulting alignments are correct. However, the alignments still miss some correspondences: the operators do not achieve 100% recall. We had conjectured that this was due to a phenomenon called reverse shadowing [Euzenat 2017a], which prevents agents from finding specific correspondences.
We introduced a new adaptation modality, strengthening, to test this hypothesis. The strengthening modality replaces a successful correspondence by one of its subsumed correspondences covering the current instance. This modality differs from those developed so far because it leads agents to adapt their alignments when the game played has been a success (previously, adaptation always occurred upon failure). We defined three alternative versions of this modality depending on whether the agent chooses the most general, the most specific, or a random such correspondence.
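The three variants can be sketched as follows. The representation is hypothetical: correspondences are opaque values, `candidates` stands for the subsumed correspondences covering the current instance, and `specificity` is an assumed ranking function (higher means more specific) standing in for the subsumption order.

```python
import random

def strengthen(alignment, correspondence, candidates, variant="random",
               specificity=None, rng=random):
    """Replace a successful correspondence by one of its subsumed
    correspondences covering the current instance (strengthening sketch)."""
    if not candidates:
        return
    if variant == "most_general":
        chosen = min(candidates, key=specificity)
    elif variant == "most_specific":
        chosen = max(candidates, key=specificity)
    else:  # random variant
        chosen = rng.choice(list(candidates))
    alignment.discard(correspondence)
    alignment.add(chosen)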
We experimentally showed that it did not interfere with the other modalities as long as the add operator was used. This means that all properties of the previous adaptation operators are preserved. Moreover, as expected, recall was greatly increased, to the point that some operators achieve 99% F-measure. However, the agents still do not reach 100% recall.
The work on expansion suggests that, with the expansion modality, agents could develop alignments from scratch. We explored the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them as they have nothing to repair. Hence, we introduced the capability for agents to risk adding new correspondences when no existing one is useful [Euzenat 2017b]. We compared and discussed the results provided by this modality and showed that, thanks to this generative capability, agents reach better results than without it in terms of the accuracy of their alignments. When starting with empty alignments, the alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication. The evolution curves of both approaches (random and empty initial alignments), past a starting phase whose figures reflect these initial conditions, superimpose almost exactly. This confirms a posteriori the experiments with random initialisation.
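The generative step can be sketched minimally. The function name and the `candidate_targets` parameter are hypothetical: the idea is only that, when no existing correspondence covers the agent's class, the agent risks a fresh one, which later games will repair if it turns out wrong.

```python
import random

def generate_if_needed(alignment, own_class, candidate_targets, rng=random):
    """If no correspondence starts from own_class, risk adding a new one."""
    if any(src == own_class for src, _ in alignment):
        return None  # an existing correspondence is available; nothing to risk
    correspondence = (own_class, rng.choice(list(candidate_targets)))
    alignment.add(correspondence)
    return correspondence
```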
To adopt a population standpoint on experimental cultural evolution, we introduced the concept of population within the experiments. A population is characterised as a set of agents sharing the same ontology. These agents play the same alignment repair games as before, but with agents of other populations.
The notion of population makes it possible to experiment with different transmission mechanisms: social transmission, in which culture spreads among agents of the same population, and wide transmission, in which it spreads across populations. We implemented explicit social transmission through a synchronisation procedure: at a given interval, agents of the same population exchange their knowledge, i.e. their alignments. Each population builds a consensus and agents integrate this consensus into their local knowledge. The consensus, obtained by merging the alignments, may be computed by vote or by preserving the most specific or most general correspondences.
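The vote-based merge can be sketched in a few lines. This is an illustrative reading, not the published procedure: correspondences are kept when they appear in a strict majority of the agents' alignments.

```python
from collections import Counter

def consensus_by_vote(alignments):
    """Keep the correspondences present in a strict majority of alignments."""
    counts = Counter(c for alignment in alignments for c in alignment)
    majority = len(alignments) / 2
    return {c for c, n in counts.items() if n > majority}
```

The most-specific and most-general variants would instead resolve conflicting correspondences through the subsumption order rather than by counting.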
It was initially hypothesised that such knowledge transmission could help agents achieve faster convergence, but the results suggest otherwise: agents regularly replace their (repaired) local alignments by the (conservative) population consensus, which slows down convergence significantly.
We have proposed three different ways in which agents can be critical towards the population consensus: they adopt the consensus either based on a probability law, based on the distance between the consensus and their local alignment, or based on the memory that agents keep of discarded correspondences. We tested all three approaches and found that, with them, agents achieve faster convergence and produce alignments of comparable quality.
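The three adoption criteria can be sketched together. The parameter names (`p`, `max_distance`) and the symmetric difference used as alignment distance are assumptions for illustration, not the published definitions.

```python
import random

def adopt_consensus(local, consensus, discarded, mode="memory",
                    p=0.5, max_distance=2, rng=random):
    """Decide critically whether to adopt the population consensus."""
    if mode == "probability":
        adopt = rng.random() < p                   # adopt with probability p
    elif mode == "distance":
        adopt = len(local ^ consensus) <= max_distance  # adopt only if close enough
    else:  # "memory": filter out correspondences already discarded as faulty
        consensus = consensus - discarded
        adopt = True
    return set(consensus) if adopt else set(local)
```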
After alignment repair, we consider how agents may learn and evolve their ontologies.
So far our experiments in cultural knowledge evolution dealt with adapting alignments. However, agent knowledge is primarily represented in their ontologies, which may also be adapted. In order to study ontology evolution, we designed a two-stage experiment.
In this scenario, fundamental questions arise: Do agents achieve successful interaction (increasingly consensual decisions)? Can this process improve knowledge correctness? Do all agents end up with the same ontology? We showed that agents indeed reduce interaction failure, that most of the time they improve the accuracy of their knowledge about the environment, and that they do not necessarily opt for the same ontology [Bourahla 2021a].
Knowledge transmission between agents of the same generation, as described above, is considered horizontal transmission. Other work has shown that variation generated through vertical, or inter-generation, transmission allows agents to exceed the level reached by horizontal transmission alone [Acerbi 2006a]. Such results were obtained under drastic selection of the agents allowed to transmit their knowledge (less than 20%) or by introducing artificial noise during transmission.
In order to study the impact of such measures on the quality of transmitted knowledge, we combined the settings of these two previous works and relaxed their assumptions (no strong selection of teachers, no fully correct seed, no artificial noise). Under this setting, we confirmed that vertical transmission improves on horizontal transmission even without drastic selection and oriented learning. We also showed that horizontal transmission can compensate for the lack of parent selection if it is maintained for long enough [Bourahla 2022a].
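One vertical-transmission step can be sketched as follows, assuming a hypothetical `fitness` function ranking potential teachers; `selection_ratio=1.0` corresponds to the relaxed setting with no teacher selection, while a small ratio models the drastic selection of earlier work.

```python
import random

def next_generation(parents, fitness, selection_ratio=1.0, rng=random):
    """Each child copies the knowledge of a teacher drawn from the top
    selection_ratio fraction of parents (1.0 = no selection)."""
    ranked = sorted(parents, key=fitness, reverse=True)
    teachers = ranked[:max(1, int(len(ranked) * selection_ratio))]
    return [rng.choice(teachers) for _ in parents]
```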
The work on cultural knowledge evolution reported above concentrated on agents performing a single task. This is not a natural condition; we are therefore developing agents able to carry out several tasks and to adapt their knowledge with the same protocol. We introduced multi-task agents that interact over a limited set of tasks. Counter-intuitively, our experiments show that multi-task agents are not necessarily less accurate than specialised ones.
We limited agent memory size in order to prevent multi-task agents from learning all tasks in the long run. On the one hand, when agents have unlimited memory, generalist agents are always more accurate than specialist agents on all tasks. On the other hand, when agents have limited memory, results suggest that the goals of maximising task-specific accuracy and achieving consensus are mutually exclusive: agents can either specialise to the detriment of their interoperability, or learn to agree but fail to specialise.
Assessing knowledge diversity may be useful for many purposes. In particular, it is necessary to measure diversity in order to understand how it arises or is preserved. It is also necessary to control it in order to measure its effects. We have considered measuring knowledge diversity using two components: (a) a diversity measure taking advantage of (b) a knowledge difference measure [Bourahla 2021a]. We have proposed general principles for such components and compared various candidates. The most satisfying solutions are entropy-based measures [Bourahla 2022c]. We designed algorithms using these measures to generate populations of agents with controlled levels of knowledge diversity.
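As a simplified illustration of an entropy-based measure, the sketch below computes the Shannon entropy of the distribution of knowledge states in a population. The published measures are richer, as they also exploit a knowledge difference measure; here states are treated as opaque, hashable values.

```python
import math
from collections import Counter

def diversity(population):
    """Shannon entropy (in bits) of the distribution of knowledge states."""
    counts = Counter(population)
    n = len(population)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A homogeneous population scores 0; a population evenly split over k distinct states scores log2(k), the maximum for k states.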
The Alignment Repair Game (ARG) has been proposed for agents to simultaneously communicate and repair their alignments through adaptation operators when communication failures occur [Euzenat 2017a]. ARG was evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, the logical properties of such operators, i.e. whether they are formally correct, complete or redundant, could not be established by experiments. We introduced Dynamic Epistemic Ontology Logic (DEOL) to answer these questions. It allows us [van den Berg 2020a, 2021a] (1) to express the ontologies and alignments used, (2) to model the ARG adaptation operators through announcements and conservative upgrades and (3) to formally establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
These results raise interesting issues about how closely a multi-agent process should be modelled logically (one can either make the agents closer to the logic or design a logical model closer to the agents). They also open the perspective of modelling cultural (knowledge) evolution as a whole with dynamic epistemic logics [van den Berg 2021b].
In the DEOL modelling, agents are aware of the vocabulary that other agents may use (we call this public signature awareness). However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, whether perceived from the environment or learned through agent communication; this is therefore not realistic for open multi-agent systems. We have proposed a novel way to model awareness with partial valuations [van den Berg 2020b]. Partial Dynamic Epistemic Logic allows agents to use their own vocabularies to reason and talk about the world. These vocabularies may be extended through a new modality for raising awareness. We gave a first account of the dynamics of raising awareness in this framework. We also started investigating an associated forgetting operator [van den Berg 2020d].
Cultural evolution may be studied at a `macro' level, inspired by population dynamics, or at a `micro' level, inspired by genetics. The replicator-interactor model generalises the genotype-phenotype distinction of genetic evolution. We considered how it can be applied to cultural knowledge evolution experiments [Euzenat 2019a]. More specifically, we consider knowledge as the replicator and the behaviour it induces as the interactor. We showed that this requires addressing problems concerning transmission. We discussed the introduction of horizontal transmission within the replicator-interactor model and/or of differential reproduction within cultural evolution experiments.
We are designing experiments with human learners to test the hypothesis that the medium used for knowledge transmission has an effect on its acquisition and evolution. As a corollary, we want to assess the impact of physical presence, as a medium for knowledge transmission, on knowledge evolution. In other words, is the hybrid mode of teaching closer to the written modality of knowledge transmission or to the face-to-face classroom modality? For that purpose, we are investigating the possible use of the Class? game in the classroom.
Publications on cultural knowledge evolution