  1. Repdistiller: Knowledge Distillation Scaled by Re-parameterization for Crowd Counting

    Chapter and Conference Paper (No Access)

    Knowledge distillation (KD) is an important method to compress a large teacher model into a much smaller student model. However, the large capacity gap between the teacher and student models hinders the perfor...

    Tian Ni, Yuchen Cao, Xiaoyu Liang, Haoji Hu in Pattern Recognition and Computer Vision (2024)

  2. Meta-prototype Decoupled Training for Long-Tailed Learning

    Chapter and Conference Paper (No Access)

    Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. Supervised contrastive learning has turned out ...

    Siming Fu, Huanpeng Chu, Xiaoxuan He, Hualiang Wang in Computer Vision – ACCV 2022 (2023)