**Multi-modal Large Language Model**, **CVPR 2024** VLM for remote sensing dialogue and analysis.
**Text-to-Image Model**, **ICLR 2024** Leveraging LLMs to generate complex scenes in a zero-shot setting.
**Vision-Language Model**, **AAAI 2024**, **Oral**, **Top 9.5%** Self-structural alignment of foundation models for zero-shot classification.
**Vision-Language Model**, **NeurIPS 2023** Test-time alignment of foundation models for zero-shot generalization.
**Vision-Language Model**, **ICCV 2023** Self-regularization for foundation vision-language models during fine-tuning.
**Vision-Language Model**, **ICCV 2023** Face anti-spoofing by adapting foundation vision-language models like CLIP.
**Visual-Spatial and Temporal Perception**, **ICCV 2023** An efficient video recognition network based on spatio-temporal focal modulation.
**Adversarial Machine Learning**, **MICCAI 2023** Frequency-domain adversarial training for robust medical image segmentation.
**Vision-Language Model**, **CVPR 2023** Adapting vision-language foundation models like CLIP for video recognition.
**Adversarial Machine Learning**, **CVPR 2023** Adversarial transfer of textually defined makeup for facial privacy.