Shaping Visual Representations with Language for Few-Shot Classification
Jesse Mu, Percy Liang, Noah Goodman
Language Grounding to Vision, Robotics and Beyond (Short Paper)
Session 9A: Jul 7 (17:00-18:00 GMT)
Session 10B: Jul 7 (21:00-22:00 GMT)
Abstract:
By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models. We use language to improve few-shot visual classification in the underexplored scenario where natural language task descriptions are available during training, but unavailable for novel tasks at test time. Existing models for this setting sample new descriptions at test time and use those to classify images. Instead, we propose language-shaped learning (LSL), an end-to-end model that regularizes visual representations to predict language. LSL is conceptually simpler, more data efficient, and outperforms baselines in two challenging few-shot domains.
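To make the training objective concrete, below is a minimal PyTorch-style sketch of the idea in the abstract: the visual encoder is trained jointly on few-shot classification and on decoding the task's natural-language description, and the language head is discarded at test time. The prototypical-network classifier, the GRU decoder, the weighting term `lam`, and all names and dimensions are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSL(nn.Module):
    """Sketch of language-shaped learning: visual features are
    regularized to predict the task description during training."""

    def __init__(self, vocab_size, feat_dim=64, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Image encoder: any backbone mapping an image to a feat_dim vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim),
        )
        # Language decoder, used only as a training-time regularizer.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)

    def language_loss(self, feats, tokens):
        # Condition the GRU on visual features and predict the next token
        # of the description at each step (teacher forcing).
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)
        hidden, _ = self.gru(self.embed(tokens[:, :-1]), h0)
        logits = self.vocab_proj(hidden)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
        )

    def forward(self, support, support_labels, query, query_labels,
                tokens, lam=5.0):
        s_feats = self.encoder(support)   # (n_support, feat_dim)
        q_feats = self.encoder(query)     # (n_query, feat_dim)
        # Prototypical-network classifier (an assumption): each class is
        # the mean of its support features; queries score by -distance.
        protos = torch.stack([
            s_feats[support_labels == c].mean(0)
            for c in support_labels.unique()
        ])
        cls_loss = F.cross_entropy(-torch.cdist(q_feats, protos), query_labels)
        # At test time only the classifier above is used; the language
        # head is dropped, so novel tasks need no descriptions.
        return cls_loss + lam * self.language_loss(s_feats, tokens)
```

Because the language loss is only an auxiliary term on the encoder, inference reduces to ordinary few-shot classification, which is why no descriptions are sampled or generated for novel tasks.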