Abstract

Test-time adaptation (TTA) enhances zero-shot robustness under distribution shifts by leveraging unlabeled test data during inference. Despite notable advances, several challenges still limit its broader applicability. First, most methods rely on backpropagation or iterative optimization, which limits scalability and hinders real-time deployment. Second, they lack explicit modeling of class-conditional feature distributions. Such modeling is crucial for producing reliable decision boundaries and calibrated predictions, but it remains underexplored because neither source data nor supervision is available at test time. In this paper, we propose ADAPT, an Advanced Distribution-Aware and backPropagation-free Test-time adaptation method. We reframe TTA as a Gaussian probabilistic inference task by modeling class-conditional likelihoods with gradually updated class means and a shared covariance matrix. This enables closed-form, training-free inference. To correct potential likelihood bias, we introduce lightweight regularization guided by CLIP priors and a historical knowledge bank. ADAPT requires no source data, no gradient updates, and no full access to target data, supporting both online and transductive settings. Extensive experiments across diverse benchmarks demonstrate that our method achieves state-of-the-art performance under a wide range of distribution shifts with superior scalability and robustness.
Introduction

Test-Time Adaptation (TTA) enhances zero-shot robustness under distribution shifts by adapting to unlabeled test data during inference. However, existing methods face two key limitations:
High computational cost: Most methods rely on backpropagation or iterative optimization, limiting scalability and real-time deployment.
Lack of explicit class distribution modeling: Existing methods fail to model class-conditional feature distributions, leading to unstable decision boundaries.
Figure: (a) backpropagation-required online TTA vs. (b) GDA-based transductive TTA.
We propose ADAPT, a backpropagation-free and distribution-aware TTA framework that seamlessly supports both online and transductive adaptation settings.
Comparison with existing TTA methods. ADAPT achieves BP-free, distribution-aware adaptation and supports both online and transductive settings.
Method

We propose ADAPT, an Advanced Distribution-Aware and backPropagation-free Test-time adaptation method. Our key idea is to reframe TTA as a Gaussian probabilistic inference task by modeling class-conditional likelihoods.
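Concretely, this amounts to Bayes' rule with Gaussian class-conditional likelihoods that share one covariance matrix. In the notation below (the symbols $z$, $\mu_c$, $\Sigma$, $\pi_c$ are ours, not necessarily the paper's), $z$ is a test feature, $\mu_c$ a class mean, $\Sigma$ the shared covariance, and $\pi_c$ a class prior:

```latex
p(y = c \mid z)
  = \frac{\pi_c \, \mathcal{N}(z \mid \mu_c, \Sigma)}
         {\sum_{c'} \pi_{c'} \, \mathcal{N}(z \mid \mu_{c'}, \Sigma)},
\qquad
\mathcal{N}(z \mid \mu_c, \Sigma)
  \propto \exp\!\left(-\tfrac{1}{2}\,(z - \mu_c)^\top \Sigma^{-1} (z - \mu_c)\right).
```

Because $\Sigma$ is shared across classes, the quadratic term in $z$ cancels in the posterior, so prediction reduces to a linear rule over features and needs no iterative optimization.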
Gaussian Modeling: Estimate class-conditional feature distributions.
BP-free Adaptation: Training-free; works in both online & transductive modes.
Closed-form Update: One-pass; efficient; no iteration or fine-tuning required.
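With a shared covariance, the Gaussian posterior reduces to a linear classifier over features, which is what makes a closed-form, one-pass prediction possible. The sketch below illustrates this idea in NumPy; all function and argument names are our own illustrative assumptions, not the paper's released API:

```python
import numpy as np

def gaussian_tta_predict(feats, class_means, shared_cov, priors=None):
    """Closed-form class posteriors under a shared-covariance Gaussian model.

    feats:       (N, D) test features (e.g., image embeddings).
    class_means: (C, D) per-class mean estimates.
    shared_cov:  (D, D) covariance shared across all classes.
    priors:      (C,) class priors; uniform if None.
    """
    C = class_means.shape[0]
    priors = np.full(C, 1.0 / C) if priors is None else priors
    prec = np.linalg.inv(shared_cov)  # shared precision matrix
    # Shared covariance => linear discriminant per class:
    #   w_c = Sigma^{-1} mu_c,  b_c = -0.5 mu_c^T Sigma^{-1} mu_c + log pi_c
    W = class_means @ prec                                        # (C, D)
    b = -0.5 * np.sum(W * class_means, axis=1) + np.log(priors)   # (C,)
    logits = feats @ W.T + b                                      # (N, C)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs
```

Note that a single matrix inverse and two matrix products suffice for a whole batch, which is why no backpropagation or fine-tuning is needed at test time.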
Algorithm 1: Online TTA
Algorithm 2: Transductive TTA
Notably, both procedures are fully optimization-free, relying on one-pass closed-form updates for efficient adaptation. ADAPT supports both online (streaming, one sample at a time) and transductive (full test set available) settings.
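In the online setting, maintaining the class means in one pass amounts to an incremental average over the streaming features assigned to each class. A minimal sketch (the helper name and the assignment convention are our assumptions, not the paper's exact procedure):

```python
import numpy as np

def update_class_mean(mean, count, feat):
    """One-pass running-mean update for a single class.

    When a streaming sample is (pseudo-)assigned to a class, the class
    mean is refreshed in closed form: O(D) per sample, no gradients,
    no replay of past data.
    """
    count += 1
    mean = mean + (feat - mean) / count  # incremental mean update
    return mean, count
```

Each class keeps only its current mean and a counter, so memory stays constant regardless of stream length.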
Experimental Results

We evaluate ADAPT on three tasks: natural distribution shift, corruption robustness, and fine-grained categorization. ADAPT achieves state-of-the-art performance across all benchmarks in both online and transductive settings.
Visualization of Decision Boundaries: We visualize the decision boundaries of different TTA methods on ImageNet-A. CLIP, TPT, and TDA rely on feature similarity without explicit class distribution modeling, leading to irregular and unstable boundaries. In contrast, ADAPT builds a Gaussian-based model that explicitly estimates class distributions, yielding more compact clusters and smoother boundaries.
Visualization of decision boundaries on ImageNet-A. Colors indicate different classes.
BibTeX

@article{zhang2025backpropagation,
  title={Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment},
  author={Zhang, Youjia and Kim, Youngeun and Choi, Young-Geun and Kim, Hongyeob and Liu, Huiling and Hong, Sungeun},
  journal={arXiv preprint arXiv:2508.15568},
  year={2025}
}