How games for computing education are evaluated? A systematic literature review
ARTICLE
Giani Petri and Christiane Gresse von Wangenheim, Graduate Program in Computer Science, Federal University of Santa Catarina, Brazil
Computers & Education, Volume 107, Number 1, ISSN 0360-1315. Publisher: Elsevier Ltd
Abstract
Educational games are assumed to be an effective and efficient instructional strategy for computing education. However, it is essential to evaluate such games systematically in order to obtain sound evidence of their impact. The objective of this article is therefore to present the state of the art on how games for computing education are evaluated. We performed a systematic literature review over a sample of 3617 articles, from which 112 relevant articles were identified, describing 117 studies on the evaluation of games for computing education. Based on these studies, we analyzed how evaluations are defined (the analysis factors evaluated, research designs, evaluation models/methods used, kinds of data collection instruments, etc.), how they have been executed (sample size and replications), and how they have been analyzed (data analysis methods used). As a result, we can confirm that most evaluations use a simple research design in which, typically, the game is used and subjective feedback is afterwards collected from the learners via questionnaires. The majority of the evaluations are run with small samples, without replication, using mostly qualitative methods for data analysis. We also observed that most studies do not use a well-defined evaluation model or method. This indicates a need for more rigorous evaluations, as well as methodological support, in order to assist game creators and instructors in improving such games and in making systematic decisions on when and how to include them within instructional units.
Citation
Petri, G. & Gresse von Wangenheim, C. (2017). How games for computing education are evaluated? A systematic literature review. Computers & Education, 107(1), 68-90. Elsevier Ltd. Retrieved February 5, 2023 from https://www.learntechlib.org/p/200455/.
This record was imported from Computers & Education on February 20, 2019. Computers & Education is a publication of Elsevier.
References
- Abt, C.C. (2002). Serious games. Lanham, MD: University Press of America.
- ACM/IEEE-CS (2013). Computer science curricula 2013: Curriculum guidelines for undergraduate degree programs in computer science.
- All, A., Castellar, E.P.N., & Looy, J.V. (2016). Assessing the effectiveness of digital game-based learning: Best practices. Computers & Education, 92–93, pp. 90-103.
- Basili, V.R., Caldiera, G., & Rombach, H.D. (1994). Goal, question metric paradigm. Encyclopedia of software engineering, pp. 528-532. New York, NY, USA: Wiley-Interscience.
- Battistella, P., & Gresse von Wangenheim, C. (2016). Games for teaching computing in higher education – A systematic review. IEEE Technology and Engineering Education, 9(1), pp. 8-30.
- Bednarik, R., Gerdt, P., Miraftabi, R., & Tukiainen, M. (2004). Development of the TUP model – evaluating educational software. Proc. of the 4th IEEE International Conference on Advanced Learning Technologies, pp. 699-701.
- Bloom, B.S. (1956). Taxonomy of educational objectives: The classification of educational goals: Handbook I, cognitive domain. New York; Toronto: Longmans, Green.
- Boyle, E.A., Connolly, T.M., & Hainey, T. (2011). The role of psychology in understanding the impact of computer games. Entertainment Computing, 2, pp. 69-74.
- Boyle, E.A., Hainey, T., Connolly, T.M., Gray, G., Earp, J., & Ott, M. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education, 94, pp. 178-192.
- Branch, R.M. (2010). Instructional Design: The ADDIE Approach.
- Brooke, J. (1996). SUS: A "quick and dirty" usability scale. Usability evaluation in industry London: Taylor and Francis.
- Calderón, A., & Ruiz, M. (2015). A systematic literature review on serious games evaluation: An application to software project management. Computers & Education, 87, pp. 396-422.
- Caulfield, C., Xia, J., Veal, D., & Maj, S.P. (2011). A systematic survey of games used for software engineering education. Modern Applied Science, 5(6), pp. 28-43.
- Connolly, T.M., Boyle, E.A., MacArthur, E., Hainey, T., & Boyle, J.M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59(2), pp. 661-686.
- Connolly, T.M., Stansfield, M.H., & Hainey, T. (2009). Towards the development of a games-based learning evaluation framework. Games-based learning advancement for multisensory human computer interfaces: Techniques and effective practices Hershey: Idea-Group Publishing.
- Davis, F.D., Bagozzi, R.P., & Warshaw, P.R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), pp. 982-1003.
- Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59(4), pp. 395-430.
- Fenton, N.E., & Pfleeger, S.L. (1998). Software metrics: A rigorous and practical approach. Boston, MA, USA: PWS Pub. Co.
- Freedman, D., Pisani, R., & Purves, R. (2007). Statistics. New York: W. W. Norton & Company.
- Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., & Jordt, H. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences of the United States of America, 111(23), pp. 8410-8415.
- Fu, F., Su, R., & Yu, S. (2009). EGameFlow: A scale to measure learners' enjoyment of e-learning games. Computers & Education, 52(1), pp. 101-112.
- Gagné, R.M., Briggs, L.J., & Wager, W.W. (1992). Principles of instructional design. Fort Worth, TX: Harcourt Brace Jovanovich College Publishers.
- Garris, R., Ahlers, R., & Driskell, J.E. (2002). Games, motivation, and learning: A research and practice model. Simulation Gaming, 33(4), pp. 441-467.
- Gibson, B., & Bell, T. (2013). Evaluation of games for teaching computer science. Proc. of the 8th workshop in primary and secondary computing education, pp. 51-60. New York, NY, USA: ACM.
- Gouws, L.A., Bradshaw, K., & Wentworth, P. (2013). Computational thinking in educational activities: An evaluation of the educational game light-bot. Proc. of the 18th ACM Conf. on innovation and technology in computer science education, pp. 10-15. New York, NY, USA: ACM.
- Graf, S., Viola, S.R., Leo, T., & Kinshuk (2007). In-depth analysis of the Felder-Silverman learning style dimensions. Journal of Research on Technology in Education, 40(1), pp. 79-93.
- Gresse von Wangenheim, C., Kochanski, D., & Savi, R. (2009). Systematic Review on evaluation of games for software engineering learning in Brazil. Fortaleza, Brazil: Software Engineering Education Forum.
- Gresse von Wangenheim, C., Savi, R., & Borgatto, A.F. (2013). SCRUMIA - An educational game for teaching SCRUM in computing courses. Journal of Systems and Software, 86(10), pp. 2675-2687.
- Gresse von Wangenheim, C., & Shull, F. (2009). To Game or Not to Game?. Software, IEEE, 26(2), pp. 92-94.
- Hainey, T., Connolly, T.M., Stansfield, M., & Boyle, E.A. (2011). Evaluation of a game to teach requirements collection and analysis in software engineering at tertiary education level. Computers & Education, 56(1), pp. 21-35.
- Hays, R.T. (2005). The effectiveness of instructional games: A literature review and discussion. Orlando, FL, USA: Naval Air Warfare Center Training System Division.
- Ibrahim, R., Yusoff, R.C.M., Omar, H.M., & Jaafar, A. (2011). Students perceptions of using educational games to learn introductory programming. Computer and Information Science, 4(1), pp. 205-216.
- International Organization for Standardization (ISO) (2011). ISO/IEC 25010: Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models.
- Kazimoglu, C., Kiernan, M., Bacon, L., & Mackinnon, L. (2012). A serious game for developing computational thinking and learning introductory computer programming. Procedia - Social and Behavioral Sciences, 47, pp. 1991-1999.
- Keller, J. (1987). Development and use of the ARCS model of motivational design. Journal of Instructional Development, 10(3), pp. 2-10.
- Kitchenham, B. (2010). Systematic literature reviews in software engineering – a tertiary study. Information and Software Technology, 52(8), pp. 792-805.
- Kitchenham, B., Pfleeger, S.L., & Fenton, N. (1995). Towards a framework for software measurement validation. IEEE Transactions on Software Engineering, 21(12), pp. 929-944.
- MUMMS (2015). Measuring the usability of multi-media systems.
- Parsons, P. (2011). Preparing computer science graduates for the 21st Century. Teaching Innovation Projects, 1(1).
- Petri, G., & Gresse von Wangenheim, C. (2016). How to evaluate educational games: A systematic literature review. Journal of Universal Computer Science, 22(7), pp. 992-1021.
- Ross, J.A. (2006). The reliability, validity, and utility of self-assessment. Practical Assessment, Research & Evaluation, 11(10), pp. 1-13.
- Ross, J.A., Rolheiser, C., & Hogaboam-Gray, A. (1998). Skills training versus action research InService: Impact on student attitudes to self-evaluation. Teaching and Teacher Education, 14(5), pp. 463-477.
- Sindre, G., Natvig, L., & Jahre, M. (2009). Experimental validation of the learning effect for a pedagogical game on computer fundamentals. IEEE Transactions on Education, 52(1), pp. 10-18.
- Sitzmann, T., Ely, K., Brown, K.G., & Bauer, K.N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure?. Academy of Management Learning & Education, 9(2), pp. 169-191.
- Takatalo, J., Häkkinen, J., Kaistinen, J., & Nyman, G. (2010). Presence, involvement, and flow in digital games. Evaluating user experience in Games: Concepts and methods, pp. 23-46.
- Topping, K. (2003). Self and peer assessment in school and University: Reliability, validity and utility. Optimising new modes of Assessment: In search of qualities and standards, 1, pp. 55-87. Dordrecht: Kluwer Academic Publishers.
- Tullis, T., & Albert, W. (2008). Measuring the user Experience: Collecting, analyzing, and presenting usability metrics.
- Wagner, R.W. (1970). Edgar Dale: Professional. Theory into practice, Vol. 9.
- Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., & Wesslén, A. (2012). Experimentation in software engineering. Berlin: Springer.
- Yin, R.K. (2009). Case study research: Design and methods. Beverly Hills: Sage Publications.