PUBLICATIONS

SCAD: Super-Class-Aware Debiasing for Long-Tailed Semi-Supervised Learning

Sunguk Jang*, Jinwoo Jeon*, Byung-Jun Lee (*: equal contribution)


In long-tailed semi-supervised learning (LTSSL), pseudo-labeling often creates a vicious cycle of bias amplification, a problem that recent state-of-the-art methods attempt to mitigate using logit adjustment (LA). However, their adjustment schemes, inherited from LA, remain inherently hierarchy-agnostic, failing to account for the semantic relationships between classes. In this regard, we identify a critical yet overlooked problem of intra-super-class imbalance, where a toxic combination of high semantic similarity and severe local imbalance within each super-class hinders effective LTSSL. This problem causes the model to reinforce its errors, leading to representation overshadowing. To break this cycle, we propose Super-Class-Aware Debiasing (SCAD), a new framework that performs a dynamic, super-class-aware logit adjustment. SCAD leverages the latent semantic structure between classes to focus its corrective power on the most confusable groups, effectively resolving the local imbalances. Our extensive experiments validate that SCAD achieves new state-of-the-art performance, demonstrating the necessity of a super-class-aware approach for robust debiasing.
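To give a rough intuition for the idea of super-class-aware logit adjustment, the sketch below applies a standard LA-style correction, but with class priors computed *within* each super-class rather than globally, so the adjustment targets local (intra-super-class) imbalance. This is a simplified illustration only: the function name, the fixed super-class assignment, and the static temperature `tau` are assumptions for the example, whereas SCAD's actual adjustment is dynamic and derived from the latent semantic structure between classes.

```python
import numpy as np

def superclass_logit_adjustment(logits, class_counts, super_class_of, tau=1.0):
    """Illustrative sketch: debias logits using class frequency within
    each super-class (local prior), instead of the global class prior."""
    counts = np.asarray(class_counts, dtype=float)
    groups = np.asarray(super_class_of)
    priors = np.empty_like(counts)
    for g in np.unique(groups):
        mask = groups == g
        # Local prior: class frequency relative to its own super-class.
        priors[mask] = counts[mask] / counts[mask].sum()
    # Subtracting tau * log(prior) boosts locally rare classes.
    return np.asarray(logits, dtype=float) - tau * np.log(priors)

# Two super-classes {0,1} and {2,3}; each is locally imbalanced.
adjusted = superclass_logit_adjustment(
    logits=np.zeros(4),
    class_counts=[900, 100, 90, 10],
    super_class_of=[0, 0, 1, 1],
)
```

After adjustment, the locally rare classes (1 and 3) receive larger logits than their dominant super-class siblings (0 and 2), even though class 2 is globally rarer than class 1.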



View on arXiv