TY - JOUR
T1 - From Local Details to Global Context
T2 - 42nd International Conference on Machine Learning, ICML 2025
AU - Cai, Lincan
AU - Kang, Jingxuan
AU - Li, Shuang
AU - Ma, Wenxuan
AU - Xie, Binhui
AU - Qin, Zhida
AU - Liang, Jian
N1 - Publisher Copyright:
© 2025, ML Research Press. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, such as random cropping, in aligning with fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations inevitably introduces background artifacts and causes models to focus excessively on local details, compromising global semantic understanding. To address these issues, we propose an Attention-Based Selection (ABS) method from local details to global context, which applies attention-guided cropping to both raw images and the feature space and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique to effectively filter LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods. Our code is available at https://github.com/BIT-DA/ABS.
AB - Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, such as random cropping, in aligning with fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations inevitably introduces background artifacts and causes models to focus excessively on local details, compromising global semantic understanding. To address these issues, we propose an Attention-Based Selection (ABS) method from local details to global context, which applies attention-guided cropping to both raw images and the feature space and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique to effectively filter LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods. Our code is available at https://github.com/BIT-DA/ABS.
UR - https://www.scopus.com/pages/publications/105023563199
M3 - Conference article
AN - SCOPUS:105023563199
SN - 2640-3498
VL - 267
SP - 6229
EP - 6242
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 13 July 2025 through 19 July 2025
ER -