Evaluating Comparability in Computerized Adaptive Testing: Issues, Criteria and an Example
ARTICLE

Journal of Educational Measurement, Volume 38, Number 1, ISSN 0022-0655

Abstract

Reviews the research literature on comparability issues in computerized adaptive testing (CAT) and synthesizes issues specific to comparability and test security. Develops a framework for evaluating comparability that contains three categories of criteria: (1) validity; (2) psychometric property/reliability; and (3) statistical assumption/test administration condition. Provides an illustrative example showing how simulations can be used to improve comparability in CAT development. (SLD)

Citation

Wang, T. & Kolen, M.J. (2001). Evaluating Comparability in Computerized Adaptive Testing: Issues, Criteria and an Example. Journal of Educational Measurement, 38(1), 19-49. Retrieved July 18, 2019 from .

This record was imported from ERIC on April 18, 2013.

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.