  1. No Access

    Article

    Robust and reliable estimation via recursive nonlinear dynamic data reconciliation based on cubature Kalman filter

    Since measurements of process variables are subject to measurement errors as well as process variability, data reconciliation is the procedure of optimally adjusting measured data so that the adjusted values ...

    Min Bian, Jianlin Wang, Weimin Liu, Kepeng Qiu in Cluster Computing (2017)

  2. No Access

    Article

    Multi-objective service composition model based on cost-effective optimization

    The widespread application of cloud computing results in the exuberant growth of services with the same functionality. Quality of service (QoS) is mostly applied to represent nonfunctional properties of servic...

    Ying Huo, Peng Qiu, Jiyou Zhai, Dajuan Fan, Huanfeng Peng in Applied Intelligence (2018)

  3. No Access

    Article

    Dynamic hypersphere SVDD without describing boundary for one-class classification

    Support vector data description (SVDD), an efficient one-class classification method, captures the spherically shaped boundary around the same class data and achieves classification for setting the boundary re...

    Jianlin Wang, Weimin Liu, Kepeng Qiu, Huan Xiong in Neural Computing and Applications (2019)

  4. No Access

    Article

    Implicit discourse relation detection using concatenated word embeddings and a gated relevance network

    Jinlan Fu, Qi Zhang, Jifan Chen, Minlong Peng in Science China Information Sciences (2019)

  5. No Access

    Article

    Chinese Word Segmentation via BiLSTM+Semi-CRF with Relay Node

    Semi-Markov conditional random fields (Semi-CRFs) have been successfully utilized in many segmentation problems, including Chinese word segmentation (CWS). The advantage of Semi-CRF lies in its inherent abilit...

    Nuo Qun, Hang Yan, Xi-Peng Qiu in Journal of Computer Science and Technology (2020)

  6. No Access

    Article

    Syntax-guided text generation via graph neural network

    Text generation is a fundamental and important task in natural language processing. Most of the existing models generate text in a sequential manner and have difficulty modeling complex dependency structures. ...

    Qipeng Guo, Xipeng Qiu, Xiangyang Xue, Zheng Zhang in Science China Information Sciences (2021)

  7. No Access

    Article

    Text information aggregation with centrality attention

    Many natural language processing problems need to encode the text sequence as a fixed-length vector, which usually involves an aggregation process of combining the representations of all the words, such as p...

    Jingjing Gong, Hang Yan, Yining Zheng, Qipeng Guo in Science China Information Sciences (2021)

  8. No Access

    Article

    Dual-axial self-attention network for text classification

    Text classification is an important task in natural language processing and numerous studies aim to improve the accuracy and efficiency of text classification models. In this study, we propose an effective and...

    Xiaochuan Zhang, Xipeng Qiu, Jianmin Pang, Fudong Liu in Science China Information Sciences (2021)

  9. Open Access

    Article

    Paradigm Shift in Natural Language Processing

    In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a b...

    Tian-Xiang Sun, Xiang-Yang Liu, Xi-Peng Qiu in Machine Intelligence Research (2022)

  10. No Access

    Article

    Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation

    Fine-tuning pre-trained language models like BERT has become an effective way in natural language processing (NLP) and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT ...

    Yi-Ge Xu, Xi-Peng Qiu, Li-Gao Zhou in Journal of Computer Science and Technology (2023)

  11. No Access

    Article

    \(\mathcal{Y}\)-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning

    With the current success of large-scale pre-trained models (PTMs), how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters. Previous wo...

    Yitao Liu, Chenxin An, Xipeng Qiu in Frontiers of Computer Science (2023)

  12. No Access

    Article

    ChatGPT: potential, prospects, and limitations

    Jie Zhou 周杰, Pei Ke 柯沛 in Frontiers of Information Technology & Elec… (2024)

  13. Open Access

    Article

    Multi-dimensional resource allocation strategy for LEO satellite communication uplinks based on deep reinforcement learning

    In the LEO satellite communication system, the resource utilization rate is very low due to the constrained resources on satellites and the non-uniform distribution of traffic. In addition, the rapid movement...

    Yu Hu, Feipeng Qiu, Fei Zheng, Jilong Zhao in Journal of Cloud Computing (2024)

  14. No Access

    Article

    CPT: a pre-trained unbalanced transformer for both Chinese language understanding and generation

    In this paper, we take advantage of previous pre-trained models (PTMs) and propose a novel Chinese pre-trained unbalanced transformer (CPT). Different from previous Chinese PTMs, CPT is designed to utilize...

    Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai in Science China Information Sciences (2024)

  15. No Access

    Article

    MOSS: An Open Conversational Large Language Model

    Conversational large language models (LLMs) such as ChatGPT and GPT-4 have recently exhibited remarkable capabilities across various domains, capturing widespread attention from the public. To facilitate this ...

    Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li in Machine Intelligence Research (2024)