Abstract
Automated writing evaluation systems employ automated essay scoring technologies to assign ratings to written work, helping students with self-assessment. They offer an efficient alternative to human grading, which typically consumes substantial time, and can therefore be highly useful for educational institutions such as schools and colleges. The proposed system rates essays on grammatical correctness and topic relevance using tools built into the auto-grading pipeline. The generated rating supports self-assessment: the system pinpoints the student's mistakes in a matter of seconds and also highlights the strengths of the writing, while improving accuracy over existing automated grading systems. The overall score is derived from multiple features: handcrafted features, a coherence score, a prompt-relevance score, and a semantic score. Together, these features give the student substantive feedback on the essay.
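The abstract describes combining several per-feature sub-scores (handcrafted, coherence, prompt relevance, semantic) into one overall grade. A minimal sketch of such a combination step is shown below; the weighted-average scheme, the feature names, and the weights are illustrative assumptions, not values taken from the paper.

```python
def combine_scores(sub_scores: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Weighted average of normalised sub-scores (each in [0, 1]),
    scaled to a 0-100 overall grade. Hypothetical combination rule."""
    total_weight = sum(weights.values())
    weighted = sum(weights[name] * sub_scores[name] for name in weights)
    return round(100.0 * weighted / total_weight, 1)

# Illustrative: equal weighting of the four feature families named in the abstract
weights = {"handcrafted": 0.25, "coherence": 0.25,
           "prompt_relevance": 0.25, "semantic": 0.25}
essay = {"handcrafted": 0.8, "coherence": 0.7,
         "prompt_relevance": 0.9, "semantic": 0.6}
print(combine_scores(essay, weights))  # -> 75.0
```

In practice such systems often learn the weights from graded training essays (e.g. via regression) rather than fixing them by hand; the equal weights here are only a placeholder.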
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Vanga, R.R., Bharath, M.S., Sindhu, C., Vadivu, G., Hsu, H.C. (2023). Grade It: A Quantitative Essay Grading System. In: Hassanien, A.E., Castillo, O., Anand, S., Jaiswal, A. (eds) International Conference on Innovative Computing and Communications. ICICC 2023. Lecture Notes in Networks and Systems, vol 537. Springer, Singapore. https://doi.org/10.1007/978-981-99-3010-4_28
DOI: https://doi.org/10.1007/978-981-99-3010-4_28
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-3009-8
Online ISBN: 978-981-99-3010-4