Abstract
Automatic testing is an important part of everyday development practice. Worldline, a major IT company, is creating more and more tests to ensure the correct behavior of its applications and to gain in efficiency and quality. But running all these tests may take hours. This is especially true for large systems involving, for example, the deployment of a web server or communication with a database. For this reason, tests are not launched as often as they should be and are mostly run at night. The company wishes to improve its development and testing process by giving developers rapid feedback after a change. An appealing solution is to reduce the number of tests to run by identifying only those exercising the changed code. Two main approaches are proposed in the literature: static and dynamic. The static approach builds a model of the source code and explores it to find links between changed methods and tests. The dynamic approach records method invocations during the execution of test scenarios. Before deploying a test case selection solution, Worldline partnered with us to investigate the situation in its projects and to evaluate these approaches on three industrial, closed-source cases, in order to understand the strengths and weaknesses of each solution. We propose a classification of the problems that may arise when trying to identify the tests that cover a method, give concrete examples of these problems, and list possible solutions. We also evaluate other issues, such as how the results are affected by the frequency of modification of methods or by considering groups of methods instead of single ones. We found that solutions must be combined to obtain better results and that the problems impact projects differently. Considering commits instead of individual methods tends to worsen the results, perhaps because of their large size.
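The static approach described in the abstract can be illustrated with a minimal sketch: build a call graph of the code, reverse it, and walk upward from a changed method until test methods are reached. All names and the call graph below are hypothetical, not taken from the paper's subject systems.

```python
from collections import defaultdict, deque

# Hypothetical static call graph: caller -> callees.
CALLS = {
    "TestInvoice.test_total": ["Invoice.total"],
    "TestInvoice.test_tax": ["Invoice.tax"],
    "Invoice.total": ["Invoice.tax", "Invoice.subtotal"],
}

def select_tests(changed_method, calls, is_test=lambda m: m.startswith("Test")):
    """Walk the reversed call graph from a changed method up to the
    tests that transitively exercise it (the core of static selection)."""
    callers = defaultdict(set)
    for caller, callees in calls.items():
        for callee in callees:
            callers[callee].add(caller)
    selected, seen, queue = set(), {changed_method}, deque([changed_method])
    while queue:
        method = queue.popleft()
        for caller in callers[method]:
            if caller in seen:
                continue
            seen.add(caller)
            if is_test(caller):
                selected.add(caller)
            else:
                queue.append(caller)
    return selected
```

For example, a change to `Invoice.tax` selects both `TestInvoice.test_tax` (a direct caller) and `TestInvoice.test_total` (which reaches it through `Invoice.total`). The problems the paper classifies are precisely the cases where such a graph is missing edges, e.g. calls made through a framework or reflection.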
Notes
We ignore a third category in this paper: “obsolete”.
The client does not call the framework; the framework calls the client (inversion of control).
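This inversion of control is why a static call graph of client code alone can miss the link between a test and the method it exercises: the call into the client happens inside the framework. A minimal sketch (the `Framework` class and handler below are illustrative, not from the paper):

```python
# Inversion of control: the client registers a handler; the framework,
# not the client, performs the call. A static call graph built from the
# client code alone therefore contains no edge into `on_event`.

class Framework:
    def __init__(self):
        self._handlers = []

    def register(self, handler):
        self._handlers.append(handler)

    def run(self, event):
        # The framework calls back into client code here.
        return [handler(event) for handler in self._handlers]

# Client code: nothing in the client ever calls `on_event` directly.
def on_event(event):
    return f"handled {event}"

fw = Framework()
fw.register(on_event)
```

A static analyzer that only sees the client sources finds `register(on_event)` but no invocation of `on_event`, so tests reaching it through `fw.run(...)` are not selected.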
Acknowledgments
This work was supported by Worldline and by the Ministry of Higher Education and Research, Nord-Pas de Calais Regional Council, CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015–2020.
Cite this article
Blondeau, V., Etien, A., Anquetil, N. et al. Test case selection in industry: an analysis of issues related to static approaches. Software Qual J 25, 1203–1237 (2017). https://doi.org/10.1007/s11219-016-9328-4