- **Article:** YOLO-MTG: a lightweight YOLO model for multi-target garbage detection
  With the wide adoption of deep learning technology in AI, intelligent garbage detection has become a hot research topic. However, existing datasets currently used for garbage detection rarely involve multi-catego...
- **Chapter and Conference Paper:** Repdistiller: Knowledge Distillation Scaled by Re-parameterization for Crowd Counting
  Knowledge distillation (KD) is an important method to compress a large teacher model into a much smaller student model. However, the large capacity gap between the teacher and student models hinders the perfor...
- **Article:** Dynamic connection pruning for densely connected convolutional neural networks
  Densely connected convolutional neural networks dominate in a variety of downstream tasks due to their extraordinary performance. However, such networks typically require excessive computing resources, which h...
- **Chapter and Conference Paper:** Convolutional Neural Network Design for Single Image Super-Resolution
  Single image super-resolution (SR) is designed to recover high-resolution (HR) images from a single low-resolution (LR) image, which has important applications in surveillance equipment, satellite imagery, mob...
- **Chapter and Conference Paper:** Meta-prototype Decoupled Training for Long-Tailed Learning
  Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. Supervised contrastive learning has turned out ...
- **Chapter and Conference Paper:** Towards Calibrated Hyper-Sphere Representation via Distribution Overlap Coefficient for Long-Tailed Learning
  Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. However, little attention has been given to how...
- **Article:** A unified framework for semi-supervised PU learning
  Traditional supervised classifiers use only labeled data (features/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, it is often the case that the labeled data...
- **Chapter and Conference Paper:** Estimate Unlabeled-Data-Distribution for Semi-supervised PU Learning
  Traditional supervised classifiers use only labeled data (features/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, it is often the case that the labeled data...