Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment
Tenzin Doleck, McGill University, Canada; Eric Poitras, University of Utah, United States; Laura Naismith, University Health Network, Toronto Western Hospital, Canada; Susanne Lajoie, McGill University, Canada
EdMedia + Innovate Learning, Vancouver, BC, Canada. ISBN 978-1-939797-24-7. Publisher: Association for the Advancement of Computing in Education (AACE), Waynesville, NC
One of the key approaches in designing adaptive learning systems is the use of algorithms that can process and discover interesting, interpretable, and meaningful knowledge from the data tracked and logged by learning systems. Text classification has been employed with much success in a wide variety of tasks, such as information extraction and summarization, text retrieval, and document classification. In this paper, we focus on discriminating between legitimate and dummy annotations in an online medical learning environment called MedU by applying a text classification approach to the annotation data. Manually detecting dummy annotations in MedU can be quite time-consuming, especially at scale. Employing an automatic text classification approach can mitigate this issue. Moreover, a system capable of detecting learner-submitted dummy annotations could be adapted to provide appropriate feedback to the learner.
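To make the idea concrete, the kind of text classification described above can be sketched as a minimal bag-of-words multinomial Naive Bayes classifier. This is an illustrative sketch only, not the paper's actual method or feature set; the training annotations and the `NaiveBayesTextClassifier` class are hypothetical examples invented for this demonstration.

```python
import math
from collections import Counter


def tokenize(text):
    """Split a learner annotation into lowercase word tokens."""
    return text.lower().split()


class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing.

    A hypothetical sketch of classifying annotations as
    'legitimate' vs. 'dummy'; not the classifier used in the paper.
    """

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.total_words = {c: 0 for c in self.classes}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.total_words[label] += len(tokens)
            self.vocab.update(tokens)
        n = len(labels)
        self.priors = {c: labels.count(c) / n for c in self.classes}
        return self

    def predict(self, doc):
        tokens = tokenize(doc)
        vocab_size = len(self.vocab)
        best_class, best_logprob = None, float("-inf")
        for c in self.classes:
            # Sum log-probabilities to avoid floating-point underflow.
            logprob = math.log(self.priors[c])
            for t in tokens:
                logprob += math.log(
                    (self.word_counts[c][t] + 1)
                    / (self.total_words[c] + vocab_size)
                )
            if logprob > best_logprob:
                best_class, best_logprob = c, logprob
        return best_class


# Hypothetical training annotations (invented for illustration).
docs = [
    "patient presents with acute chest pain",
    "differential diagnosis includes pneumonia and pleurisy",
    "recommend chest x ray and ecg",
    "asdf asdf",
    "test",
    "aaa bbb test test",
]
labels = ["legitimate", "legitimate", "legitimate", "dummy", "dummy", "dummy"]

clf = NaiveBayesTextClassifier().fit(docs, labels)
print(clf.predict("patient presents with chest pain"))  # legitimate
print(clf.predict("asdf test"))                         # dummy
```

A classifier of this shape could run over logged annotations in batch, flagging probable dummy entries for review or for triggering feedback to the learner, rather than relying on manual inspection.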
Doleck, T., Poitras, E., Naismith, L. & Lajoie, S. (2016). Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment. In Proceedings of EdMedia 2016--World Conference on Educational Media and Technology (pp. 498-503). Vancouver, BC, Canada: Association for the Advancement of Computing in Education (AACE).
© 2016 Association for the Advancement of Computing in Education (AACE)