A Tool to Assess Fine-grained Knowledge from Correct and Incorrect Answers in Online Multiple-choice Tests: an Application to Student Modeling
PROCEEDING

P. Albacete, S. Silliman & P. Jordan, Learning Research and Development Center, University of Pittsburgh, United States

EdMedia + Innovate Learning, Washington, DC. ISBN 978-1-939797-29-2. Publisher: Association for the Advancement of Computing in Education (AACE), Waynesville, NC

Abstract

Intelligent tutoring systems (ITS), like human tutors, try to adapt to a student's knowledge level so that instruction is tailored to the student's needs. One aspect of this adaptation relies on understanding the student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on the knowledge the student lacks or understands poorly. One way of acquiring this initial student knowledge state is by having the student take a multiple-choice test. However, the overall results commonly provided by multiple-choice tests may not be at the level of granularity needed by the ITS. This paper presents a tool that allows the extraction of fine-grained knowledge from correct and incorrect answers given on multiple-choice tests. Although the tool was developed to be used by ITSs, we argue that it could also become a useful instrument for teachers in classroom evaluations.
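
As a purely illustrative sketch, not taken from the paper, the Python snippet below shows one way a multiple-choice answer key could be annotated with fine-grained knowledge components (KCs) so that both correct and incorrect choices update a per-component student model. All class and function names (AnswerOption, Question, update_student_model) are hypothetical, and the update rule is a simple stand-in for whatever student-modeling scheme an ITS actually uses.

    # Illustrative sketch only: annotating answer options with knowledge
    # components so that each chosen option, correct or incorrect, updates
    # a fine-grained student model.
    from __future__ import annotations
    from dataclasses import dataclass, field


    @dataclass
    class AnswerOption:
        text: str
        # KCs the student likely understands if they pick this option
        kcs_demonstrated: set[str] = field(default_factory=set)
        # KCs the student likely misunderstands if they pick this option
        kcs_missing: set[str] = field(default_factory=set)


    @dataclass
    class Question:
        stem: str
        options: dict[str, AnswerOption]  # keyed by option label, e.g. "A"


    def update_student_model(model: dict[str, float], question: Question,
                             chosen: str, step: float = 0.2) -> None:
        """Nudge per-KC mastery estimates up for demonstrated KCs and down
        for KCs the chosen (possibly incorrect) option reveals as missing."""
        option = question.options[chosen]
        for kc in option.kcs_demonstrated:
            model[kc] = min(1.0, model.get(kc, 0.5) + step)
        for kc in option.kcs_missing:
            model[kc] = max(0.0, model.get(kc, 0.5) - step)


    # Example: each distractor of a physics question signals a distinct gap.
    q = Question(
        stem="A ball is dropped from rest. What is its acceleration just "
             "after release?",
        options={
            "A": AnswerOption("9.8 m/s^2 downward",
                              kcs_demonstrated={"free-fall acceleration"}),
            "B": AnswerOption("0 m/s^2",
                              kcs_missing={"distinguish velocity from acceleration"}),
            "C": AnswerOption("9.8 m/s^2 upward",
                              kcs_missing={"direction of gravitational acceleration"}),
        },
    )

    student_model: dict[str, float] = {}
    update_student_model(student_model, q, chosen="B")
    print(student_model)  # -> {'distinguish velocity from acceleration': 0.3}

The point of the sketch is that a wrong answer is not just "incorrect": the particular distractor chosen can lower the mastery estimate of a specific knowledge component, which is the kind of fine-grained information the abstract says the tool extracts for the ITS.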

Citation

Albacete, P., Silliman, S. & Jordan, P. (2017). A Tool to Assess Fine-grained Knowledge from Correct and Incorrect Answers in Online Multiple-choice Tests: an Application to Student Modeling. In J. Johnston (Ed.), Proceedings of EdMedia 2017 (pp. 988-996). Washington, DC: Association for the Advancement of Computing in Education (AACE).
