Assessing creative problem-solving with automated text grading
ARTICLE

Computers & Education, Volume 51, Number 4. ISSN 0360-1315. Publisher: Elsevier Ltd.

Abstract

This work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational–statistical machine learning methods to grade students’ natural language responses automatically. Open-ended questions that elicit constructed responses help assess constructs such as creative problem-solving with validity, but the high cost of manually grading constructed responses can be an obstacle to using open-ended questions. In this study, automated grading schemes were developed and evaluated in the context of secondary Earth science education. Empirical evaluations showed that the automated grading schemes can reliably identify domain concepts embedded in students’ natural language responses, with satisfactory inter-coder agreement against human coding in two sub-tasks of the test (Cohen’s Kappa = .65–.72). When a single holistic score was computed for each student, machine-generated scores achieved high inter-rater reliability against human grading (Pearson’s r = .92). The reliable performance in automatic concept identification and numeric grading demonstrates the potential of automated grading to support the use of open-ended questions in science assessments and to enable new technologies for science learning.
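The abstract reports agreement between machine and human grading using two standard statistics: Cohen’s Kappa for categorical concept labels and Pearson’s r for numeric holistic scores. The sketch below shows how these are conventionally computed; the data are invented toy values for illustration, not the paper’s actual grading data.

```python
from collections import Counter
from math import sqrt

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two categorical label lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement by chance, from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def pearson_r(xs, ys):
    """Linear correlation between two numeric score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: machine vs. human concept labels and holistic scores.
machine_labels = ["A", "B", "A", "C", "B", "A"]
human_labels   = ["A", "B", "B", "C", "B", "A"]
machine_scores = [3.0, 4.5, 2.0, 5.0, 3.5]
human_scores   = [3.0, 4.0, 2.5, 5.0, 3.0]

print(round(cohens_kappa(machine_labels, human_labels), 2))  # → 0.74
print(round(pearson_r(machine_scores, human_scores), 2))     # → 0.94
```

In practice, library implementations such as those in scikit-learn or SciPy would typically be used rather than hand-rolled versions like these.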

Citation

Wang, H.C., Chang, C.Y. & Li, T.Y. (2008). Assessing creative problem-solving with automated text grading. Computers & Education, 51(4), 1450–1466. Elsevier Ltd. Retrieved October 22, 2019.

This record was imported from Computers & Education on February 1, 2019. Computers & Education is a publication of Elsevier.

Full text is available on ScienceDirect: http://dx.doi.org/10.1016/j.compedu.2008.01.006

Keywords