Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment

T. Doleck, McGill University, Canada; E. Poitras, University of Utah, United States; L. Naismith, University Health Network, Toronto Western Hospital, Canada; S. Lajoie, McGill University, Canada

EdMedia + Innovate Learning, Vancouver, BC, Canada. ISBN 978-1-939797-24-7. Publisher: Association for the Advancement of Computing in Education (AACE), Waynesville, NC.


One of the key approaches in designing adaptive learning systems is the use of algorithms that can process and discover interesting, interpretable, and meaningful knowledge from the data tracked and logged by learning systems. Text classification has been employed with much success in a wide variety of tasks, such as information extraction and summarization, text retrieval, and document classification. In this paper, we focus on discriminating between legitimate and dummy annotations in an online medical learning environment called MedU by infusing a text-classification-based approach into the process. Manually detecting dummy annotations in MedU can be quite time-consuming, especially when it involves big data. Employing an automatic text classification approach can mitigate this issue. Moreover, a system capable of detecting learner-submitted dummy annotations could be adapted to provide appropriate feedback to the learner.
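To illustrate the general idea of discriminating legitimate from dummy annotations with text classification, the following is a minimal bag-of-words Naive Bayes sketch in Python. It is an illustration only: the class names, training examples, and labels are hypothetical, and the paper's actual pipeline relied on dedicated text-classification tools (e.g., LightSIDE and LibShortText, per the references), not this code.

```python
# Toy illustration of classifying annotations as "dummy" vs. "legit".
# All data and names here are hypothetical, not MedU's actual pipeline.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesAnnotationFilter:
    """Multinomial Naive Bayes over bag-of-words features."""

    def __init__(self):
        self.word_counts = {"dummy": Counter(), "legit": Counter()}
        self.doc_counts = Counter()

    def train(self, annotation, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(annotation))

    def classify(self, annotation):
        total_docs = sum(self.doc_counts.values())
        vocab = len(set().union(*[set(c) for c in self.word_counts.values()]))
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            # log prior + log likelihood with add-one (Laplace) smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            n = sum(counts.values())
            for w in tokenize(annotation):
                score += math.log((counts[w] + 1) / (n + vocab))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical training examples
clf = NaiveBayesAnnotationFilter()
clf.train("asdf asdf test test", "dummy")
clf.train("xxxx zzzz test", "dummy")
clf.train("patient presents with fever and rash", "legit")
clf.train("differential diagnosis includes measles", "legit")

clf.classify("test asdf")                    # -> "dummy"
clf.classify("fever and rash in a patient")  # -> "legit"
```

In practice, such a classifier would be trained on annotations hand-labeled by instructors, and its predictions could trigger the kind of learner feedback the abstract mentions.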


Doleck, T., Poitras, E., Naismith, L. & Lajoie, S. (2016). Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment. In Proceedings of EdMedia 2016--World Conference on Educational Media and Technology (pp. 498-503). Vancouver, BC, Canada: Association for the Advancement of Computing in Education (AACE). Retrieved February 16, 2019 from .



  1. Azevedo, R. (2005). Computers as metacognitive tools for enhancing learning. Educational Psychologist, 40(4), 193–197.
  2. Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why Students Engage in "Gaming the System" Behavior in Interactive Learning Environments. Journal of Interactive Learning Research, 19(2), 185-224.
  3. Doleck, T., Basnet, R.B., Poitras, E.G., & Lajoie, S.P. (2015). Mining Learner-System Interaction Data: Implications for Modeling Learner Behaviors and Improving Overlay Models. Journal of Computers in Education, 2(4), 421-447.
  4. Fall, L., Berman, N., Smith, S., White, C., Woodhead, J., & Olson, A. (2005). Multi-institutional Development and Utilization of a Computer-Assisted Learning Program for the Pediatrics Clerkship: The CLIPP Project. Academic Medicine, 80(9), 847-855.
  5. Ferguson, R., Wei, Z., He, Y., & Buckingham, S. (2013). An evaluation of learning analytics to identify exploratory dialogue in online discussions. In Proceedings of the 3rd International Conference on Learning Analytics and Knowledge. New York, NY: ACM.
  6. Lajoie, S.P. (2005). Extending the scaffolding metaphor. Instructional Science, 33(5-6), 541-557.
  7. Lajoie, S.P., & Azevedo, R. (2006). Teaching and learning in technology-rich environments. In P.A. Alexander & P.H. Winne (Eds.), Handbook of educational psychology (pp. 803-821). Mahwah, NJ: Lawrence Erlbaum.
  8. Mayfield, E., & Rose, C.P. (2013). LightSIDE: Open Source Machine Learning for Text Accessible to Non-Experts. In M.D. Shermis & J. Burstein (Eds.), Handbook of Automated Essay Evaluation (pp. 124-135). Routledge.
  9. Miltsakaki, E., & Troutt, A. (2008). Real-time web text classification and analysis of reading difficulty. In J. Tetreault, J. Burstein, & R. De Felice (Eds.), EANL Proceedings of the 3rd Workshop on Innovative Use of NLP for Building Educational Applications (pp. 89-97). Morristown: Association for Computational Linguistics.
  10. Pea, R.D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. Journal of the Learning Sciences, 13(3), 423–451.
  11. Yahya, A.A., & Osman, A. (2011). Automatic Classification of Questions into Bloom's Cognitive Levels using Support Vector Machines. In the International Arab Conference on Information Technology, Naif Arab University for Security Science (NAUSS), Riyadh, Saudi Arabia.
  12. Yang, Y., & Liu, X. (1999). A re-examination of text categorization methods. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 42-49). Berkeley, CA.
  13. Yu, H.-F., Ho, C.-H., Juan, Y.-C., & Lin, C.-J. (2013). LibShortText: a library for short-text classification and analysis. Technical report. Retrieved from

These references have been extracted automatically and may contain errors.