  1. No Access

    Chapter and Conference Paper

    Estimate Unlabeled-Data-Distribution for Semi-supervised PU Learning

    Traditional supervised classifiers use only labeled data (features/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, it is often the case that the labeled data...

    Haoji Hu, Chaofeng Sha, Xiaoling Wang, Aoying Zhou in Web Technologies and Applications (2012)

  2. No Access

    Article

    A unified framework for semi-supervised PU learning

    Traditional supervised classifiers use only labeled data (features/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, it is often the case that the labeled data...

    Haoji Hu, Chaofeng Sha, Xiaoling Wang, Aoying Zhou in World Wide Web (2014)

  3. No Access

    Chapter and Conference Paper

    Towards Calibrated Hyper-Sphere Representation via Distribution Overlap Coefficient for Long-Tailed Learning

    Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. However, little attention has been given to how...

    Hualiang Wang, Siming Fu, Xiaoxuan He, Hangxiang Fang in Computer Vision – ECCV 2022 (2022)

  4. No Access

    Chapter and Conference Paper

    Convolutional Neural Network Design for Single Image Super-Resolution

    Single image super-resolution (SR) is designed to recover high-resolution (HR) images from a single low-resolution (LR) image, which has important applications in surveillance equipment, satellite imagery, mob...

    Xinyi Hu, Yuxin Zhang, Haoji Hu in Proceedings of International Conference on… (2023)

  5. No Access

    Chapter and Conference Paper

    Meta-prototype Decoupled Training for Long-Tailed Learning

    Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. Supervised contrastive learning has turned out ...

    Siming Fu, Huanpeng Chu, Xiaoxuan He, Hualiang Wang in Computer Vision – ACCV 2022 (2023)

  6. No Access

    Article

    Dynamic connection pruning for densely connected convolutional neural networks

    Densely connected convolutional neural networks dominate in a variety of downstream tasks due to their extraordinary performance. However, such networks typically require excessive computing resources, which h...

    Xinyi Hu, Hangxiang Fang, Ling Zhang, Xue Zhang, Howard H. Yang in Applied Intelligence (2023)

  7. No Access

    Chapter and Conference Paper

    Repdistiller: Knowledge Distillation Scaled by Re-parameterization for Crowd Counting

    Knowledge distillation (KD) is an important method to compress a large teacher model into a much smaller student model. However, the large capacity gap between the teacher and student models hinders the perfor...

    Tian Ni, Yuchen Cao, Xiaoyu Liang, Haoji Hu in Pattern Recognition and Computer Vision (2024)

  8. No Access

    Article

    YOLO-MTG: a lightweight YOLO model for multi-target garbage detection

    With wide adoption of deep learning technology in AI, intelligent garbage detection has become a hot research topic. However, existing datasets currently used for garbage detection rarely involves multi-catego...

    Zhongyi Xia, Houkui Zhou, Huimin Yu, Haoji Hu in Signal, Image and Video Processing (2024)