- Chapter and Conference Paper
  Repdistiller: Knowledge Distillation Scaled by Re-parameterization for Crowd Counting
  Knowledge distillation (KD) is an important method to compress a large teacher model into a much smaller student model. However, the large capacity gap between the teacher and student models hinders the perfor... (a generic distillation-loss sketch follows this list)
- Chapter and Conference Paper
  Meta-prototype Decoupled Training for Long-Tailed Learning
  Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. Supervised contrastive learning has turned out ... (a generic class-imbalance loss sketch follows this list)
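The first abstract describes knowledge distillation as compressing a large teacher model into a much smaller student. Below is a minimal, generic sketch of that idea in PyTorch (Hinton-style soft-label distillation); it is not the Repdistiller re-parameterization method, and the model sizes, temperature, and mixing weight are illustrative assumptions.

```python
# Generic knowledge-distillation loss sketch (NOT the Repdistiller method).
# Model sizes, temperature, and alpha below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.7):
    """Combine soft-label KL distillation with the usual hard-label CE."""
    # Soften both distributions with the temperature, then match them.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # standard gradient-scale correction
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce


# Tiny illustrative teacher/student pair (sizes are arbitrary assumptions).
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)  # teacher is frozen during distillation
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```

The temperature softens both output distributions so the student also learns the teacher's relative class similarities rather than only its top prediction.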
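The second abstract concerns head classes dominating training under severe class imbalance. As a generic illustration only, the sketch below applies logit-adjusted cross-entropy, a common long-tailed baseline; it is not the paper's meta-prototype decoupled training or its supervised contrastive component, and the class counts and tau value are assumed.

```python
# Generic logit-adjusted cross-entropy sketch for class-imbalanced training
# (NOT the meta-prototype decoupled training method from the abstract above).
# The class-count vector and tau below are illustrative assumptions.
import torch
import torch.nn.functional as F


def logit_adjusted_ce(logits, targets, class_counts, tau=1.0):
    """Shift each logit by tau * log(prior) so tail classes are not drowned out."""
    priors = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(priors + 1e-12)
    return F.cross_entropy(adjusted, targets)


# Imbalanced toy setup: class 0 is a "head" class, class 2 is a "tail" class.
class_counts = torch.tensor([1000, 100, 10])
logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
loss = logit_adjusted_ce(logits, targets, class_counts)
loss.backward()
```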