
Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment (Proceeding)

Doleck, T., McGill University, Canada; Poitras, E., University of Utah, United States; Naismith, L., University Health Network, Toronto Western Hospital, Canada; Lajoie, S., McGill University, Canada

EdMedia: World Conference on Educational Media and Technology, Vancouver, BC, Canada. ISBN 978-1-939797-24-7. Publisher: Association for the Advancement of Computing in Education (AACE), Waynesville, NC.

Abstract

One of the key approaches in designing adaptive learning systems is the use of algorithms that can process and discover interesting, interpretable, and meaningful knowledge from the data tracked and logged by learning systems. Text classification has been employed with much success in a wide variety of tasks such as information extraction and summarization, text retrieval, and document classification. In this paper, we focus on discriminating between legitimate and dummy annotations in an online medical learning environment called MedU by infusing a text-classification-based approach into the process. Manually detecting dummy annotations in MedU can be quite time-consuming, especially when large volumes of data are involved. Employing an automatic text-classification approach can mitigate this issue. Moreover, a system capable of detecting learner-submitted dummy annotations could be adapted to provide appropriate feedback to the learner.
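To illustrate the general idea of text classification for filtering dummy annotations, the following is a minimal sketch of a bag-of-words Naive Bayes classifier. This is not the authors' actual MedU pipeline; the classifier choice, tokenization, and training examples are all invented for illustration.

```python
# Illustrative sketch only: a bag-of-words Naive Bayes text classifier for
# separating legitimate annotations from dummy ones. The training data below
# is invented toy data, not drawn from MedU.
import math
from collections import Counter, defaultdict


def tokenize(text):
    # Simplest possible tokenizer: lowercase whitespace split.
    return text.lower().split()


class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-class token counts
        self.class_counts = Counter()            # documents per class
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, label in zip(docs, labels):
            self.class_counts[label] += 1
            for tok in tokenize(doc):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)

    def predict(self, doc):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.class_counts:
            # Log prior plus log likelihood with Laplace (add-one) smoothing.
            lp = math.log(self.class_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for tok in tokenize(doc):
                count = self.word_counts[label][tok]
                lp += math.log((count + 1) / (n + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best


# Invented toy examples: clinically meaningful text vs. filler text.
docs = [
    "patient presents with chest pain and dyspnea",
    "differential diagnosis includes pneumonia",
    "asdf asdf test test",
    "xxx xxx nothing nothing",
]
labels = ["legit", "legit", "dummy", "dummy"]

clf = NaiveBayes()
clf.fit(docs, labels)
print(clf.predict("chest pain differential diagnosis"))  # -> legit
print(clf.predict("asdf test xxx"))                      # -> dummy
```

In a real system one would replace the toy data with logged learner annotations labeled by reviewers, and likely use richer features (e.g., TF-IDF weights or character n-grams) rather than raw word counts.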

Citation

Doleck, T., Poitras, E., Naismith, L. & Lajoie, S. (2016). Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment. In Proceedings of EdMedia 2016--World Conference on Educational Media and Technology (pp. 498-503). Vancouver, BC, Canada: Association for the Advancement of Computing in Education (AACE). Retrieved September 26, 2017 from .