
Detecting mistakes in a domain model: a comparison of three approaches

  • Special Issue on Models/MoDeVVa’22 & SAM'22
  • Published in: Innovations in Systems and Software Engineering

Abstract

Domain models are a fundamental part of software engineering, so it is important for every software engineer to know the principles of domain modeling. Instructors play a vital role in teaching students these essential modeling principles. Instructors check models created by students for mistakes by comparing them with a correct solution, keeping in mind the possible variations. While this was once a manageable task, the rapid increase in the number of students wanting to become software engineers, and the resulting larger class sizes, have made it overwhelming. Hence, students may have to wait longer for feedback on their solutions, and the feedback may be more superficial due to time constraints. In this paper, we evaluate three approaches for a mistake detection system (MDS) that aims to automate the manual checking of student solutions and to save both students' and instructors' time: (i) the basic approach, (ii) the basic plus synonyms approach, and (iii) the basic plus synonyms plus variations approach. In all cases, MDS automatically indicates the exact location and type of each mistake to the student. At present, MDS accurately detects 83 of the 97 identified mistake types that may exist in a student solution. A prototype tool verifies the feasibility of the proposed MDS. When MDS considers synonyms and variations (i.e., multiple correct instructor solutions), it achieves a recall of 0.97 and a precision of 0.84 on real student solutions, an improvement of 0.20 in both recall and precision over the basic approach. The proposed MDS takes us one step closer to automating the existing manual approach, freeing up instructor time and helping students learn domain modeling more effectively.
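
To make the three approaches concrete, the sketch below is a minimal, hypothetical Python illustration, not the authors' implementation (which is available in the repository linked under Data availability). It assumes a hand-written synonym table and restricts itself to a single mistake type (a missing class); the names SYNONYMS, missing_classes, detect, and recall_precision, as well as the Hotel Booking classes, are invented for illustration. The idea is that the basic approach matches element names exactly against one instructor solution, the synonyms approach also accepts listed alternative names, and the variations approach compares the student model against every correct instructor solution and reports the mistakes of the best-matching one; recall and precision are then computed over the reported mistakes.

```python
# Hypothetical sketch of the three mistake-detection strategies described in
# the abstract; NOT the authors' implementation (see the linked repository).

# Illustrative synonym table (assumption): instructor class name -> accepted
# student spellings.
SYNONYMS = {
    "Booking": {"Booking", "Reservation"},
    "Guest": {"Guest", "Client", "Customer"},
}


def accepted_names(instructor_class: str, use_synonyms: bool) -> set[str]:
    """Names a student may legitimately use for an instructor class."""
    if use_synonyms:
        return SYNONYMS.get(instructor_class, {instructor_class})
    return {instructor_class}


def missing_classes(student: set[str], instructor: set[str],
                    use_synonyms: bool) -> set[str]:
    """One illustrative mistake type (missing class); the full system
    distinguishes 97 mistake types."""
    return {c for c in instructor
            if not accepted_names(c, use_synonyms) & student}


def detect(student: set[str], instructor_variants: list[set[str]],
           use_synonyms: bool) -> set[str]:
    """Compare the student model against every correct instructor solution
    (variation) and keep the comparison with the fewest mistakes, i.e. the
    variation the student most plausibly targeted."""
    return min((missing_classes(student, v, use_synonyms)
                for v in instructor_variants), key=len)


def recall_precision(reported: set[str], actual: set[str]) -> tuple[float, float]:
    """Recall = true mistakes found / all true mistakes;
    precision = true mistakes found / all reported mistakes."""
    tp = len(reported & actual)
    return (tp / len(actual) if actual else 1.0,
            tp / len(reported) if reported else 1.0)


# Made-up Hotel Booking example: v2 is a correct variation that also models Payment.
student = {"Hotel", "Room", "Reservation", "Client"}
v1 = {"Hotel", "Room", "Booking", "Guest"}
v2 = {"Hotel", "Room", "Booking", "Guest", "Payment"}

print(detect(student, [v2], use_synonyms=False))     # {'Booking', 'Guest', 'Payment'}: all false positives
print(detect(student, [v2], use_synonyms=True))      # {'Payment'}: one remaining false positive
print(detect(student, [v1, v2], use_synonyms=True))  # set(): student matches variation v1
```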


Data availability

The implementation is open-source and is available at https://github.com/YounesB-McGill/modeling-assistant.

Notes

  1. List of mistake types available at https://github.com/YounesB-McGill/modeling-assistant/blob/main/modelingassistant/corpus_descriptions/README.md.

  2. https://github.com/YounesB-McGill/modeling-assistant.

  3. List of mistake types available at https://github.com/YounesB-McGill/modeling-assistant/blob/main/modelingassistant/corpus_descriptions/README.md.


Funding

Not applicable.

Author information


Contributions

PS implemented the mistake detection algorithm, while YB and GM reviewed and suggested improvements to the implementation. PS is the main contributor to the performance analysis using the Hotel Booking domain, while YB is the main contributor to the performance analysis using the Smart Home domain. The analysis data and results were independently checked and confirmed by all the authors. All authors contributed equally to the remainder of the paper.

Corresponding author

Correspondence to Prabhsimran Singh.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Singh, P., Boubekeur, Y. & Mussbacher, G. Detecting mistakes in a domain model: a comparison of three approaches. Innovations Syst Softw Eng (2024). https://doi.org/10.1007/s11334-024-00566-1
