Distillation of knowledge and opinion in LLMs within an opinion dynamics framework
Abstract
Large language models (LLMs) have emerged as a group of powerful, versatile models that can assist humans in multiple tasks. To date, the focus has largely been on a single LLM, or a mixture of LLMs, conducting tasks. Yet, knowledge and biases can be distilled between LLMs when they interact, for example via fine-tuning on one another's outputs or through direct exchanges. In this work, we review current approaches to social interactions among LLMs and propose a framework for investigating these interactions using agent-based modelling. Specifically, we use opinion dynamics experiments and equilibrium analysis to study how knowledge and biases propagate. Such cascades through agent networks can produce LLM clusterings and group dynamics that diverge from the models' initial training objectives. This direction has applications in AI ethics and in real-world deployments of LLMs in multi-agent settings.
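To make the proposed framework concrete, the following is a minimal sketch of an agent-based opinion dynamics simulation with an equilibrium check. It assumes a DeGroot-style weighted-averaging update, which is one standard choice and is not confirmed by the abstract as the authors' specific model; the agent count, trust matrix, and convergence tolerance are illustrative.

```python
import numpy as np


def degroot_step(opinions, trust):
    """One synchronous update: each agent adopts a trust-weighted
    average of all agents' current opinions."""
    return trust @ opinions


def simulate(opinions, trust, tol=1e-8, max_steps=10_000):
    """Iterate the update until opinions stop changing (an equilibrium)
    or the step limit is reached."""
    for step in range(max_steps):
        updated = degroot_step(opinions, trust)
        if np.max(np.abs(updated - opinions)) < tol:
            return updated, step
        opinions = updated
    return opinions, max_steps


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5                                          # number of LLM agents (illustrative)
    trust = rng.random((n, n))
    trust /= trust.sum(axis=1, keepdims=True)      # row-stochastic trust matrix
    opinions = rng.random(n)                       # initial scalar opinions in [0, 1]
    final, steps = simulate(opinions, trust)
    print(f"equilibrium after {steps} steps: {final.round(4)}")
```

Because the trust matrix here is strictly positive and row-stochastic, the opinions converge to a consensus; introducing sparse or clustered trust structure is one way such a simulation could exhibit the divergent clusterings and group dynamics the abstract describes.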