Comparability of Computer-Based and Paper-and-Pencil Testing in K-12 Reading Assessments: A Meta-Analysis of Testing Mode Effects
ARTICLE

S. Wang, H. Jiao, M.J. Young, T. Brooks & J. Olson

Educational and Psychological Measurement, Volume 68, Number 1, ISSN 0013-1644

Abstract

In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode of test delivery. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have expressed concern about the comparability of scores across the two administration modes. To address this concern, a meta-analysis was conducted to synthesize the administration mode effects of CBTs and paper-and-pencil tests on K-12 student reading assessments. Findings indicate that administration mode had no statistically significant effect on K-12 student reading achievement scores. Four moderator variables--study design, sample size, computer delivery algorithm, and computer practice--made statistically significant contributions to predicting effect size. Three moderator variables--grade level, type of test, and computer delivery method--did not affect the differences in reading scores between test modes. (Contains 4 tables.)
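
The abstract summarizes the method at a high level: each study's mode effect is expressed as an effect size, the effect sizes are pooled, and moderator variables are tested for their influence on effect size. As a rough illustration of that standard machinery (not the authors' actual procedure, which the abstract does not detail), the Python sketch below computes Hedges' g for a few hypothetical CBT-versus-paper comparisons and pools them under a random-effects model using the DerSimonian-Laird estimator. All study numbers are invented for illustration.

    import math

    # Hypothetical per-study summary data; none of these numbers come from
    # the article -- they exist only to make the sketch runnable.
    # (mean_cbt, sd_cbt, n_cbt, mean_paper, sd_paper, n_paper)
    studies = [
        (502.0, 38.0, 250, 500.5, 39.0, 260),
        (48.7,  9.6, 120,  49.1,  9.9, 118),
        (71.2, 12.4, 400,  70.8, 12.1, 395),
    ]

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference with small-sample correction."""
        # Pooled standard deviation across the two groups.
        sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp
        j = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' small-sample correction
        g = j * d
        # Approximate sampling variance of g.
        v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
        return g, v

    effects = [hedges_g(*s) for s in studies]
    g = [gi for gi, _ in effects]
    w = [1 / vi for _, vi in effects]      # fixed-effect (inverse-variance) weights
    sum_w = sum(w)

    # DerSimonian-Laird estimate of between-study variance tau^2.
    mean_fe = sum(wi * gi for wi, gi in zip(w, g)) / sum_w
    q = sum(wi * (gi - mean_fe) ** 2 for wi, gi in zip(w, g))
    c = sum_w - sum(wi**2 for wi in w) / sum_w
    tau2 = max(0.0, (q - (len(g) - 1)) / c)

    # Random-effects pooling: add tau^2 to each study's variance.
    w_re = [1 / (vi + tau2) for _, vi in effects]
    mean_re = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    z = mean_re / se_re

    print(f"pooled g = {mean_re:.3f}, 95% CI = "
          f"[{mean_re - 1.96 * se_re:.3f}, {mean_re + 1.96 * se_re:.3f}], "
          f"z = {z:.2f}")

A pooled g whose 95% confidence interval covers zero corresponds to the article's headline finding of no statistically significant mode effect. A moderator analysis would then regress the per-study effects on study features such as design or sample size (a weighted meta-regression), which this sketch omits.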

Citation

Wang, S., Jiao, H., Young, M. J., Brooks, T., & Olson, J. (2008). Comparability of Computer-Based and Paper-and-Pencil Testing in K-12 Reading Assessments: A Meta-Analysis of Testing Mode Effects. Educational and Psychological Measurement, 68(1), 5-24.

This record was imported from ERIC on April 18, 2013.

