Course evaluation surveys: In-class paper surveys versus voluntary online surveys
Carolyn Grim Fidelman, Boston College, United States
When surveys move from group administration to online administration, as in the case of course evaluations at many universities, survey nonresponse and ratings shift are perceived as threats to the reliability, validity, and invariance (non-bias) of parameter estimates for these measures. In the case of university course evaluations, this is particularly worrisome for voluntary online instruments, where the social desirability factors that formerly kept students in the group administration are no longer at work. Summative course evaluations, while low stakes for students, are important tools in the tenure and promotion process and are also purportedly used to improve university teaching and learning. In this study, 808 students from a random sample of 72 classes were invited to complete an online course evaluation with no incentives. The same students were then given the course evaluation in class in paper form, followed by a self-report survey of their attitudes toward course evaluations at the university. The instruments were validated using classical test theory methods for reliability, confirmatory factor analysis for validity, and an IRT graded-response model for measurement invariance. A multilevel model of course evaluation rating by class was tested using background variables (Gender, Year in School, Ethnicity, On-Campus Housing, Teaching Experience), class-level variables (Class Size, Course Level), contextual variables (Topic Interest, Expected Grade, Class Participation), and motivational variables (perceived Effort/Enjoyment, Autonomy, Curricular Activism). I then tested a multiple logistic regression model to predict unit-level nonresponse to the online survey using the same variables. Topic Interest was predictive of ratings only; Year in School, Gender, and Teaching Experience were predictive of nonresponse only.
Expected Grade was the only variable predictive of both nonresponse and of ratings and thus can be a potential source of bias for online course evaluations in this context. A collateral outcome of the research is a model item set for measuring Topic Interest, a variable of use in research on survey nonresponse and nonresponse bias.
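The nonresponse analysis described above can be illustrated with a minimal sketch of a logistic regression predicting unit-level nonresponse from a few of the named predictors. The data below are synthetic and the variable names, effect sizes, and coding are illustrative assumptions, not the study's data or results; the fit uses plain gradient descent on the negative log-likelihood rather than the software used in the dissertation.

```python
# Hypothetical sketch: logistic regression for unit-level nonresponse.
# Synthetic data only; predictors and effects are assumptions for illustration.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic student records: (expected_grade, year_in_school, is_female),
# with nonresponse assumed more likely for lower expected grades.
students = []
for _ in range(808):
    grade = random.uniform(2.0, 4.0)   # expected grade on a 4-point scale
    year = random.randint(1, 4)        # year in school
    female = random.randint(0, 1)      # gender indicator
    true_logit = -2.0 + 1.2 * (4.0 - grade) + 0.3 * year - 0.4 * female
    nonresponse = 1 if random.random() < sigmoid(true_logit) else 0
    students.append(((grade, year, female), nonresponse))

# Fit by gradient descent on the mean negative log-likelihood.
w = [0.0, 0.0, 0.0]  # coefficients for grade, year, female
b = 0.0
lr = 0.05
n = len(students)
for _ in range(2000):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in students:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / n
    b -= lr * gb / n

print("coefficients (grade, year, female):", [round(wi, 2) for wi in w])
print("intercept:", round(b, 2))
```

In this synthetic setup the fitted coefficient on Expected Grade comes out negative (higher expected grade, lower nonresponse probability), mirroring the kind of dual role, predicting both nonresponse and ratings, that makes Expected Grade a potential source of bias.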
Fidelman, C.G. Course evaluation surveys: In-class paper surveys versus voluntary online surveys. Ph.D. thesis, Boston College.
Citation reproduced with permission of ProQuest LLC.