How do multimodal LLMs really fare in classical vision few-shot challenges? A deep dive

By Qing Guo, Prashan Wanigasekara, Skyler Zheng, Jacob Zhiyuan Fang, Xinwei Deng, Chenyang Tao
2023
Recent advances in multimodal foundation models have demonstrated remarkable in-context learning capabilities across diverse vision-language tasks. However, the existing literature has mainly focused on few-shot learning tasks that mirror their NLP counterparts. It remains unclear whether these foundation models can also address classical vision challenges such as few-shot classification, which in some settings (e.g., 5-way 5-shot) necessitates sophisticated reasoning over several dozen images, a challenging task for learning systems. In this work, we take a deep dive to probe the potential and limitations of existing multimodal models on this problem. Our investigation reveals that, with careful calibration, these models can outperform dedicated vision models in complex, narratable scenes, yet they falter on more abstract visual inputs. We also investigate curriculum learning and find that it can mitigate this performance gap by smoothly bridging verbal and nonverbal reasoning in vision-language tasks.
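To make the episode structure concrete: a 5-way 5-shot episode supplies 25 labeled support images (5 classes, 5 examples each) plus one query, all of which must fit into a single in-context prompt. Below is a minimal sketch of how such an episode might be serialized into an interleaved image-text prompt for a multimodal LLM; the `<image:...>` token convention and the prompt wording are illustrative assumptions, not the interface used in the paper.

```python
import random

# Hypothetical sketch: serialize a 5-way 5-shot episode into an
# interleaved image-text prompt. The <image:...> placeholder convention
# is an assumption for illustration, not the paper's actual format.

def build_episode_prompt(support, query_image):
    """support: dict mapping class name -> list of 5 image handles."""
    lines = ["Classify the final image into one of the classes shown below."]
    examples = [(img, label) for label, imgs in support.items() for img in imgs]
    random.shuffle(examples)  # interleave classes to avoid ordering bias
    for img, label in examples:
        lines.append(f"<image:{img}> Label: {label}")
    lines.append(f"<image:{query_image}> Label:")  # model completes this line
    return "\n".join(lines)

# Example usage with placeholder image identifiers:
support = {c: [f"{c}_{i}.jpg" for i in range(5)]
           for c in ["dog", "cat", "bird", "fish", "horse"]}
print(build_episode_prompt(support, "mystery.jpg"))
```

The curriculum-learning finding suggests ordering episodes from concretely narratable scenes toward abstract visual inputs. A minimal sketch of such a schedule follows, assuming each episode carries a hypothetical `narratability` score; this annotation is not defined in the abstract and is used here only to illustrate the ordering idea.

```python
# Hypothetical curriculum schedule: present easily narratable episodes
# first, abstract visuals last. The `narratability` field is an assumed
# annotation, not from the paper.

def curriculum_order(episodes):
    return sorted(episodes, key=lambda ep: -ep["narratability"])

episodes = [
    {"id": 0, "narratability": 0.9},  # everyday scene, easy to verbalize
    {"id": 1, "narratability": 0.2},  # abstract pattern, hard to verbalize
    {"id": 2, "narratability": 0.6},
]
for ep in curriculum_order(episodes):
    print(ep["id"])  # trains on episode 0, then 2, then 1
```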