Muzammal Naseer

I received my Ph.D. from the Australian National University, Australia, in 2020. My thesis was titled “Novel Concepts and Designs for Adversarial Attacks and Defenses.” I am an Assistant Professor in the Department of Computer Science, College of Computing and Mathematical Sciences, at Khalifa University.

I am interested in building Robust Intelligent Systems. My research focuses on robust visual-spatial and temporal perception, understanding and explaining AI behavior through adversarial machine learning, representation learning through self-learning (self-supervision, self-distillation, self-critique, self-reflection), and exploring the role of large language models (LLMs) in building robust AI systems for applications in security and the life sciences.

⚡ Top-Venue

  • One AAAI 2025 paper accepted.
  • One TPAMI 2025 paper accepted.
  • Four CVPR 2024 papers accepted.
  • One ICLR 2024 paper accepted.
  • One AAAI 2024 paper accepted (Oral, Top 9.0%).
  • One NeurIPS 2023 paper accepted.
  • Three ICCV 2023 papers accepted.
  • Three CVPR 2023 papers accepted.
  • One ICLR 2023 paper accepted.
  • One TPAMI 2022 paper accepted.
  • One CVPR 2022 paper accepted (Oral, Top 5.0%).
  • One ICLR 2022 paper accepted (Spotlight, Top 5.0%).
  • One NeurIPS 2021 paper accepted (Spotlight, Top 3.0%).
  • Two ICCV 2021 papers accepted.
  • One CVPR 2020 paper accepted (Oral, Top 5.7%).
  • One NeurIPS 2019 paper accepted.

🔥 Notable

  • One ACCV 2024 paper accepted (Oral, Top 5.6%, Best Student Paper Runner-up).
  • Four MICCAI 2024 papers accepted.
  • One MICCAI 2023 paper accepted (Early Accept, Top 14.0%).
  • One BMVC 2022 paper accepted (Oral, Top 9.5%).
  • One ACCV 2022 paper accepted (Oral, Top 14.6%).

CLIP2Protect: Protecting Facial Privacy Using Text-Guided Makeup via Adversarial Latent Search

**Adversarial Machine Learning**, **CVPR 2023**. Adversarial transfer of textually defined makeup for facial privacy.

Feb 27, 2023

Boosting Adversarial Transferability using Dynamic Cues

**Adversarial Machine Learning**, **ICLR 2023**. Learning temporal prompts within image models (ViTs) to fool video models.

Feb 1, 2023

Stylized Adversarial Defense

**TPAMI 2022**, **Impact Factor 24.31**. Target-aware adversarial training with style, content, and boundary losses.

Apr 1, 2022

Guidance Through Surrogate: Towards a Generic Diagnostic Attack

**TNNLS 2022**, **Impact Factor 14.25**. Match-and-deceive loss guidance for breaking adversarial defenses.

Mar 1, 2022

Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations

**Adversarial Machine Learning**, **BMVC 2022**, **Oral**, **Top 9.5%**. Self-supervised adversarial training for transferable attacks.

Feb 1, 2022

Self-Distilled Vision Transformer for Domain Generalization

**Self-Learning**, **ACCV 2022**, **Oral**, **Top 14.6%**. Self-regularization of Vision Transformers for domain generalization.

Feb 1, 2022

Self-supervised Video Transformer

**Self-Learning**, **CVPR 2022**, **Oral**, **Top 5.0%**. Self-supervised spatiotemporal view matching for video understanding.

Feb 1, 2022

On Improving Adversarial Transferability of Vision Transformers

**Adversarial Machine Learning**, **ICLR 2022**, **Spotlight**, **Top 5.0%**. Self-ensemble with token/feature refinement for adversarial transferability.

Feb 1, 2022

How to Train Vision Transformer on Small-scale Datasets?

**BMVC 2022**. Learning self-supervised inductive biases from small data for ViTs.

Feb 1, 2022

Intriguing Properties of Vision Transformers

**Visual-Spatial Perception**, **NeurIPS 2021**, **Spotlight**, **Top 3.0%**. Analysis of the behavioral patterns of self-attention within ViTs.

Sep 1, 2021