  1. No Access

    Chapter and Conference Paper

    Repdistiller: Knowledge Distillation Scaled by Re-parameterization for Crowd Counting

    Knowledge distillation (KD) is an important method to compress a large teacher model into a much smaller student model. However, the large capacity gap between the teacher and student models hinders the perfor...

    Tian Ni, Yuchen Cao, Xiaoyu Liang, Haoji Hu in Pattern Recognition and Computer Vision (2024)

  2. No Access

    Chapter and Conference Paper

    Estimate Unlabeled-Data-Distribution for Semi-supervised PU Learning

    Traditional supervised classifiers use only labeled data (features/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, it is often the case that the labeled data...

    Haoji Hu, Chaofeng Sha, Xiaoling Wang, Aoying Zhou in Web Technologies and Applications (2012)