MEWL: Few-shot multimodal word learning with referential uncertainty

Guangyuan Jiang*, Manjie Xu, Shiji Xin, Wei Liang, Yujia Peng, Chi Zhang*, Yixin Zhu*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

3 Citations (Scopus)

Abstract

Without explicit feedback, humans can rapidly learn the meaning of words. Children can acquire a new word after just a few passive exposures, a process known as fast mapping. This word learning capability is believed to be the most fundamental building block of multimodal understanding and reasoning. Despite recent advances in multimodal learning, a systematic and rigorous evaluation of human-like word learning in machines is still missing. To fill this gap, we introduce the MachinE Word Learning (MEWL) benchmark to assess how machines learn word meaning in grounded visual scenes. MEWL covers humans' core cognitive toolkits in word learning: cross-situational reasoning, bootstrapping, and pragmatic learning. Specifically, MEWL is a few-shot benchmark suite consisting of nine tasks that probe various word learning capabilities. These tasks are carefully designed to align with children's core abilities in word learning and to echo theories in the developmental literature. By evaluating multimodal and unimodal agents' performance with a comparative analysis of human performance, we observe a sharp divergence between human and machine word learning. We further discuss these differences and call for human-like few-shot word learning in machines.
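For readers unfamiliar with few-shot episodic evaluation of the kind the abstract describes, the sketch below illustrates one plausible episode format and accuracy scoring loop. The Panel and Episode structures and the agent.choose method are hypothetical illustrations under assumed names, not the actual MEWL data format or API.

from dataclasses import dataclass
from typing import List

# Hypothetical structures for a few-shot word-learning episode;
# field names and scoring are illustrative, not the MEWL release format.

@dataclass
class Panel:
    scene: str       # placeholder for a grounded visual scene (e.g., an image path)
    utterance: str   # novel word(s) paired with that scene

@dataclass
class Episode:
    context: List[Panel]   # few-shot examples shown to the learner
    query_scene: str       # held-out scene to be named
    candidates: List[str]  # candidate novel words
    answer_index: int      # index of the correct candidate

def evaluate(agent, episodes: List[Episode]) -> float:
    """Return accuracy of an agent mapping (context, query, candidates) to a choice."""
    correct = 0
    for ep in episodes:
        # agent.choose is an assumed interface returning an index into ep.candidates
        prediction = agent.choose(ep.context, ep.query_scene, ep.candidates)
        correct += int(prediction == ep.answer_index)
    return correct / len(episodes)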

Original language: English
Pages (from-to): 15144-15169
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 202
Publication status: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023
