Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™
ARTICLE

Research & Practice in Assessment, Volume 8 (2013)

Abstract

Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for scoring and providing feedback on the essays students submit. EdX, the non-profit MOOC federation founded by MIT and Harvard, recently announced that it will use a machine-based Automated Essay Scoring (AES) application to assess written work in its MOOCs. Coursera, a MOOC startup that grew out of Stanford, is skeptical of AES applications and has instead committed to a form of human-based "calibrated peer review" to score and provide feedback on student writing. This essay reviews the relevant literature on AES and UCLA's Calibrated Peer Review™ (CPR) product at a high level, outlines the capabilities and limitations of each approach, and provides a table and framework for comparing these two forms of assessing student writing in MOOCs.

Citation

Balfour, S.P. (2013). Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™. Research & Practice in Assessment, 8, 40-48.

This record was imported from ERIC on November 3, 2015.

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.

