-
Chapter and Conference Paper
Repdistiller: Knowledge Distillation Scaled by Re-parameterization for Crowd Counting
Knowledge distillation (KD) is an important method to compress a large teacher model into a much smaller student model. However, the large capacity gap between the teacher and student models hinders the perfor...
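  For orientation, the classical KD objective (Hinton et al.) trains the student to match the teacher's temperature-softened output distribution; this is a minimal illustrative sketch of that generic loss, not the Repdistiller method itself:

  ```python
  import math

  def softmax(logits, T=1.0):
      # Temperature-scaled softmax; a higher T softens the distribution,
      # exposing the teacher's "dark knowledge" about non-target classes.
      exps = [math.exp(z / T) for z in logits]
      total = sum(exps)
      return [e / total for e in exps]

  def distillation_loss(student_logits, teacher_logits, T=4.0):
      # KL(teacher || student) on softened outputs, scaled by T^2 so
      # gradient magnitudes stay comparable across temperatures.
      p = softmax(teacher_logits, T)
      q = softmax(student_logits, T)
      return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
  ```

  The loss is zero when the student reproduces the teacher's logits exactly and grows with the divergence between the two softened distributions.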
-
Chapter and Conference Paper
Convolutional Neural Network Design for Single Image Super-Resolution
Single image super-resolution (SR) aims to recover a high-resolution (HR) image from a single low-resolution (LR) input, which has important applications in surveillance equipment, satellite imagery, mob...
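  As context for the SR task, the naive LR-to-HR baseline is plain interpolation; learned SR networks are trained to outperform it. A minimal nearest-neighbour upscaler (illustrative only, unrelated to the chapter's architecture):

  ```python
  def upscale_nearest(img, scale):
      # img: 2D list of pixel values (one channel).
      # Each source pixel is replicated into a scale x scale block,
      # i.e. each row is repeated `scale` times and widened `scale`-fold.
      return [[row[x // scale] for x in range(len(row) * scale)]
              for row in img for _ in range(scale)]
  ```

  Interpolation like this adds no new detail, which is precisely the gap that CNN-based SR methods try to close.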
-
Chapter and Conference Paper
Meta-prototype Decoupled Training for Long-Tailed Learning
Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. Supervised contrastive learning has turned out ...
-
Chapter and Conference Paper
Towards Calibrated Hyper-Sphere Representation via Distribution Overlap Coefficient for Long-Tailed Learning
Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. However, little attention has been given to how...
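  A common baseline for the class imbalance both long-tailed entries describe is to reweight the loss inversely to class frequency, so tail classes are not drowned out by head classes; this sketch shows that generic reweighting, not either chapter's method:

  ```python
  from collections import Counter

  def inverse_frequency_weights(labels):
      # Weight each class in proportion to 1/frequency, normalized so
      # a perfectly balanced dataset gives every class weight 1.0.
      counts = Counter(labels)
      total = len(labels)
      return {c: total / (len(counts) * n) for c, n in counts.items()}
  ```

  With 8 head-class and 2 tail-class samples, the tail class receives a weight four times larger, counteracting its dominance in the gradient.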
-
Chapter and Conference Paper
Estimate Unlabeled-Data-Distribution for Semi-supervised PU Learning
Traditional supervised classifiers use only labeled data (features/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, it is often the case that the labeled data...
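  To make the PU (positive-unlabeled) setting concrete: one classical approach (Elkan and Noto; illustrative here, not necessarily this chapter's method) trains a "non-traditional" classifier g to distinguish labeled from unlabeled examples, then corrects its score by c, the average of g over held-out labeled positives, to approximate P(y=1 | x):

  ```python
  def pu_correct(scores_labeled_pos, score_x):
      # scores_labeled_pos: g(x) on held-out labeled positives; their
      # mean estimates c = P(labeled | positive).
      # The corrected posterior is P(y=1 | x) ~= g(x) / c, clipped to [0, 1].
      c = sum(scores_labeled_pos) / len(scores_labeled_pos)
      return min(1.0, score_x / c)
  ```

  The correction works because under the standard "selected completely at random" assumption, g(x) = c * P(y=1 | x), so dividing by c recovers the true positive-class posterior.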