Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning

Angeliki Lazaridou, Anna Potapenko, Olivier Tieleman


Track: Language Grounding to Vision, Robotics and Beyond (Long Paper)

Session 13B: Jul 8 (13:00-14:00 GMT)
Session 14A: Jul 8 (17:00-18:00 GMT)
Abstract: We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with the end goal of teaching agents to communicate with humans in natural language. Our starting point is a language model that has been trained on generic, non-task-specific language data. We then place this model in a multi-agent self-play environment that generates task-specific rewards, which are used to adapt or modulate the model, turning it into a task-conditional language model. We introduce a new way of combining the two types of learning based on the idea of reranking language model samples, and show that this method outperforms others in communicating with humans in a visual referential communication task. Finally, we present a taxonomy of different types of language drift that can occur, alongside a set of measures to detect them.
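The reranking idea from the abstract can be illustrated with a minimal sketch: draw candidate utterances from the pretrained language model, then rank them by their LM log-probability plus a weighted task reward from self-play. All names, the toy reward, and the additive scoring formula below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of reward-based reranking of language-model samples.
# The scoring formula (log-prob + alpha * reward) is an assumption
# chosen for illustration; the paper's exact combination may differ.

def rerank(candidates, task_reward, alpha=1.0):
    """Rank LM samples by LM log-probability plus a weighted task reward.

    candidates: list of (utterance, lm_logprob) pairs
    task_reward: function mapping an utterance to a scalar reward
    alpha: weight trading off fluency against task success
    """
    scored = [(utt, lp + alpha * task_reward(utt)) for utt, lp in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy referential game: reward utterances mentioning the target color "red".
def toy_reward(utterance):
    return 1.0 if "red" in utterance else 0.0

samples = [
    ("a dog", -1.2),        # fluent but not discriminative
    ("a red dog", -1.8),    # fluent and discriminative
    ("red red red", -5.0),  # high reward but disfluent under the LM
]

best = rerank(samples, toy_reward, alpha=1.0)[0][0]
print(best)  # -> "a red dog"
```

The point of combining both scores is visible in the toy example: pure reward maximization would accept the degenerate "red red red" (a form of language drift), while the LM term keeps the selected message fluent.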

Similar Papers

Language to Network: Conditional Parameter Adaptation with Natural Language Descriptions
Tian Jin, Zhun Liu, Shengjia Yan, Alexandre Eichenberger, Louis-Philippe Morency
Mapping Natural Language Instructions to Mobile UI Action Sequences
Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, Jason Baldridge