Abstract
The application of linguistic knowledge from pre-trained language models has shown considerable promise in text classification. Nevertheless, effectively learning the distance between samples and candidate labels in supervised settings remains a practical challenge. In this study, we propose Parallel Networks with Pre-trained Models (ParaNet), a novel approach that learns the distance between input samples and labels within a shared embedding space. Specifically, ParaNet adopts a parallel architecture comprising two distinct Transformer encoders that extract sample features and label features separately. By fine-tuning the network parameters, ParaNet minimizes the distance between a sample and its correct label while maximizing the distance between the sample and labels that do not belong to it. To fully exploit label information, the model leverages the semantic knowledge of the pre-trained model by wrapping each label in a template. Experiments on eight benchmark text classification datasets show that ParaNet significantly improves classification performance, raising the average accuracy from 89.1% to 89.64%.
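To make the parallel-encoder design concrete, the sketch below follows the description in the abstract: two Transformer encoders embed input texts and templated labels into the same space, and a margin-based objective pulls each sample toward its own label while pushing it away from the others. This is a minimal sketch only; the backbone name (`bert-base-uncased`), the template wording, the label set, and the hinge-style loss are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed backbone, not specified in the abstract
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
sample_encoder = AutoModel.from_pretrained(MODEL_NAME)  # encodes input texts
label_encoder = AutoModel.from_pretrained(MODEL_NAME)   # encodes label templates

labels = ["sports", "business", "technology"]            # hypothetical label set
# Templates wrap each bare label so the pre-trained encoder sees a natural sentence.
label_texts = [f"This text is about {name}." for name in labels]

def encode(encoder, texts):
    """Return one [CLS] embedding per input string."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]       # (batch, hidden)

def paranet_loss(texts, target_ids, margin=0.5):
    """Hinge-style objective: the distance to the true label should be smaller
    than the distance to every other label by at least `margin`."""
    sample_vecs = F.normalize(encode(sample_encoder, texts), dim=-1)
    label_vecs = F.normalize(encode(label_encoder, label_texts), dim=-1)
    dists = 1.0 - sample_vecs @ label_vecs.T               # cosine distances, (batch, n_labels)
    pos = dists.gather(1, target_ids.unsqueeze(1))         # distance to the true label
    mask = F.one_hot(target_ids, num_classes=len(labels)).bool()
    neg = dists.masked_fill(mask, float("inf"))            # keep only wrong-label distances
    return F.relu(margin + pos - neg).masked_fill(mask, 0.0).mean()

def predict(texts):
    """Assign each text to the nearest label embedding in the shared space."""
    with torch.no_grad():
        sample_vecs = F.normalize(encode(sample_encoder, texts), dim=-1)
        label_vecs = F.normalize(encode(label_encoder, label_texts), dim=-1)
        return (sample_vecs @ label_vecs.T).argmax(dim=-1)

# Example: one fine-tuning step on a toy batch.
optimizer = torch.optim.AdamW(
    list(sample_encoder.parameters()) + list(label_encoder.parameters()), lr=2e-5)
loss = paranet_loss(["The match ended in a last-minute goal."], torch.tensor([0]))
loss.backward()
optimizer.step()
```

At prediction time, the nearest templated label in the shared space is returned, so the same distance that drives training also drives classification.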
Acknowledgement
This work was sponsored by the Natural Science Foundation of Shanghai (No. 22ZR1445000) and the Research Foundation of Shanghai Sanda University (No. 2020BSZX005, No. 2021BSZX006).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wu, Y., Guo, X., Wei, Y., Chen, X. (2023). ParaNet: Parallel Networks with Pre-trained Models for Text Classification. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science, vol. 14178. Springer, Cham. https://doi.org/10.1007/978-3-031-46671-7_9
Print ISBN: 978-3-031-46670-0
Online ISBN: 978-3-031-46671-7