Search results for author:"Hong Jiao"
Total records matched: 9 Search took: 0.067 secs
Incorporating Person Covariates and Response Times as Collateral Information to Improve Person and Item Parameter Estimations
Annual Meeting of the National Council on Measurement in Education (NCME) 2011 (April 2011)
For decades, researchers and practitioners have made a great deal of effort to study a variety of methods to increase parameter accuracy, but only recently have researchers started focusing on improving parameter estimation by using a joint model that ...
Journal of Educational Measurement Vol. 50, No. 2 (2013) pp. 186–203
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from...
Effect of Person Cluster on Accuracy of Ability Estimation of Computerized Adaptive Testing in K-12 Education Assessment
Annual Meeting of the American Educational Research Association 2011 (Oct 05, 2011)
The ability estimation procedure is one of the most important components in a computerized adaptive testing (CAT) system. Currently, all CATs that provide K-12 student scores are based on item response theory (IRT) models, while such...
Construct Validity and Measurement Invariance of Computerized Adaptive Testing: Application to Measures of Academic Progress (MAP) Using Confirmatory Factor Analysis
Annual Meeting of the American Educational Research Association 2012 (April 2012)
The purposes of this study are twofold: first, to investigate the construct or factorial structure of a set of Reading and Mathematics computerized adaptive tests (CAT), "Measures of Academic Progress" (MAP), administered in different states at different...
Investigating Effect of Ignoring Hierarchical Data Structures on Accuracy of Vertical Scaling Using Mixed-Effects Rasch Model
Annual Meeting of the National Council on Measurement in Education (NCME) 2010 (2010)
The vertical scales of large-scale achievement tests created by using item response theory (IRT) models are mostly based on clustered (or correlated) educational data in which students are usually nested within certain groups or settings (classrooms or ...
Comparison between Dichotomous and Polytomous Scoring of Innovative Items in a Large-Scale Computerized Adaptive Test
Educational and Psychological Measurement Vol. 72, No. 3 (June 2012) pp. 493–509
This study explored the impact of partial credit scoring of one type of innovative item (multiple-response items) in the pretest and operational settings of a large-scale computerized adaptive licensure test. The impacts of partial credit...
Applied Psychological Measurement Vol. 36, No. 6 (September 2012) pp. 469–493
This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the...
Comparability of Computer-Based and Paper-and-Pencil Testing in K-12 Reading Assessments: A Meta-Analysis of Testing Mode Effects
Educational and Psychological Measurement Vol. 68, No. 1 (2008) pp. 5–24
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over...
Educational and Psychological Measurement Vol. 67, No. 2 (2007) pp. 219–238
This study conducted a meta-analysis of computer-based and paper-and-pencil administration mode effects on K-12 student mathematics tests. Both initial and final results based on fixed- and random-effects models are presented. The results based on...