Notably, large-scale contrastive language-image pre-training (CLIP) models have been widely applied to downstream visual tasks owing to their robust generalization capabilities, but the potential of CLIP ...