How can we make the best possible use of large language models for a smarter and more inclusive society?
Large language models (LLMs) have developed rapidly in recent years and are becoming an integral part of our everyday lives through applications like ChatGPT. An article recently published in Nature Human Behaviour explains the opportunities and risks that arise from the use of LLMs for our ability to collectively deliberate, make decisions, and solve problems. Led by researchers from Copenhagen Business School and the Max Planck Institute for Human Development in Berlin, the interdisciplinary team of 28 scientists provides recommendations for researchers and policymakers to ensure LLMs are developed to complement rather than detract from human collective intelligence.
What do you do if you don't know a term like "LLM"? You probably quickly google it or ask your team. We use the knowledge of groups, known as collective intelligence, as a matter of course in everyday life. By combining individual skills and knowledge, our collective intelligence can achieve outcomes that exceed the capabilities of any individual alone, even experts. This collective intelligence drives the success of all kinds of groups, from small teams in the workplace to massive online communities like Wikipedia and even societies at large.
LLMs are artificial intelligence (AI) systems that analyze and generate text using large datasets and deep learning techniques. The new article explains how LLMs can enhance collective intelligence and discusses their potential impact on teams and society. "As large language models increasingly shape the information and decision-making landscape, it's crucial to strike a balance between harnessing their potential and safeguarding against risks. Our article details ways in which human collective intelligence can be enhanced by LLMs, and the various harms that are also possible," says Ralph Hertwig, co-author of the article and Director at the Max Planck Institute for Human Development, Berlin.
Among the potential benefits identified by the researchers is that LLMs can significantly increase accessibility in collective processes. They break down barriers through translation services and writing assistance, for example, allowing people from different backgrounds to participate equally in discussions. Furthermore, LLMs can accelerate idea generation or support opinion-forming processes by, for example, bringing helpful information into discussions, summarizing different opinions, and finding consensus.
Yet the use of LLMs also carries significant risks. For example, they could undermine people's motivation to contribute to collective knowledge commons like Wikipedia and Stack Overflow. If users increasingly rely on proprietary models, the openness and diversity of the knowledge landscape may be endangered. Another issue is the risk of false consensus and pluralistic ignorance, where there is a mistaken belief that the majority accepts a norm. "Since LLMs learn from information available online, there is a risk that minority viewpoints are underrepresented in LLM-generated responses. This can create a false sense of agreement and marginalize some perspectives," points out Jason Burton, lead author of the study and assistant professor at Copenhagen Business School and associate research scientist at the MPIB.
“The value of this article is that it demonstrates why we need to think proactively about how LLMs are changing the online information environment and, in turn, our collective intelligence—for better and worse,” summarizes co-author Joshua Becker, assistant professor at University College London. The authors call for greater transparency in creating LLMs, including disclosure of training data sources, and suggest that LLM developers should be subject to external audits and monitoring. This would allow for a better understanding of how LLMs are actually being developed and mitigate adverse developments.
In addition, the article offers compact information boxes on topics related to LLMs, including the role of collective intelligence in the training of LLMs. Here, the authors reflect on the role of humans in developing LLMs, including how to address goals such as diverse representation. Two information boxes with a focus on research outline how LLMs can be used to simulate human collective intelligence, and identify open research questions, like how to avoid homogenization of knowledge and how credit and accountability should be apportioned when collective outcomes are co-created with LLMs.
Key Points:
• LLMs are changing how people search for, use, and communicate information, which can affect the collective intelligence of teams and society at large.
• LLMs offer new opportunities for collective intelligence, such as support for deliberative, opinion-forming processes, but also pose risks, such as endangering the diversity of the information landscape.
• If LLMs are to support rather than undermine collective intelligence, the technical details of the models must be disclosed, and monitoring mechanisms must be implemented.
Participating institutes
Department of Digitalization, Copenhagen Business School, Frederiksberg, DK
Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, DE
Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, DE
Humboldt-Universität zu Berlin, Department of Psychology, Berlin, DE
Center for Cognitive and Decision Sciences, University of Basel, Basel, CH
Google DeepMind, London, UK
UCL School of Management, London, UK
Centre for Collective Intelligence Design, Nesta, London, UK
Bonn-Aachen International Center for Information Technology, University of Bonn, Bonn, DE
Lamarr Institute for Machine Learning and Artificial Intelligence, Bonn, DE
Collective Intelligence Project, San Francisco, CA, USA
Center for Information Technology Policy, Princeton University, Princeton, NJ, USA
Department of Computer Science, Princeton University, Princeton, NJ, USA
School of Sociology, University College Dublin, Dublin, IE
Geary Institute for Public Policy, University College Dublin, Dublin, IE
Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
Department of Psychological Sciences, Birkbeck, University of London, London, UK
Science of Intelligence Excellence Cluster, Technische Universität Berlin, Berlin, DE
School of Information and Communication, Insight SFI Research Centre for Data Analytics, University College Dublin, Dublin, IE
Oxford Internet Institute, Oxford University, Oxford, UK
Deliberative Democracy Lab, Stanford University, Stanford, CA, USA
Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA
Original publication:
Burton, J. W., Lopez-Lopez, E., Hechtlinger, S., Rahwan, Z., Aeschbach, S., Bakker, M. A., Becker, J. A., Berditchevskaia, A., Berger, J., Brinkmann, L., Flek, L., Herzog, S. M., Huang, S. S., Kapoor, S., Narayanan, A., Nussberger, A.-M., Yasseri, T., Nickl, P., Almaatouq, A., Hahn, U., Kurvers, R. H., Leavy, S., Rahwan, I., Siddarth, D., Siu, A., Woolley, A. W., Wulff, D. U., & Hertwig, R. (2024). How large language models can reshape collective intelligence. Nature Human Behaviour. Advance online publication. https://www.nature.com/articles/s41562-024-01959-9
Further information:
https://www.mpib-berlin.mpg.de/press-releases/llms-and-collective-intelligence