Assessing Online Dialogue in Higher Education PROCEEDINGS
Eva Bures, Bishop's University, Canada; Philip Abrami, Concordia University, Canada; Alexandra Barclay, Mount St. Vincent University, Canada
E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Orlando, Florida, USA. ISBN 978-1-880094-83-9. Publisher: Association for the Advancement of Computing in Education (AACE), Chesapeake, VA.
The aim of this study is to develop valid approaches for researchers and instructors to assess online dialogue. Two content analysis approaches, measuring critical thinking and interactivity, were applied to representative online groups of 3-4 graduate education students engaged in two two-week-long online activities. Two representative successful groups and two less successful groups were chosen based on the instructor's marks. The interactivity measure did not distinguish between the more and less successful groups, but the critical thinking measure did. A revised content analysis is being applied by two coders to two online activities in three classes of undergraduate education students (n=82). The instructor is assessing the students' online dialogue using a holistic rubric we have developed, as are the coders. The results from the content analysis and the holistic marks will be compared to examine validity, and inter-rater reliability will be explored for all measures.
Bures, E., Abrami, P. & Barclay, A. (2010). Assessing Online Dialogue in Higher Education. In J. Sanchez & K. Zhang (Eds.), Proceedings of E-Learn 2010--World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 438-448). Orlando, Florida, USA: Association for the Advancement of Computing in Education (AACE). Retrieved October 21, 2017 from https://www.learntechlib.org/p/35585/.
© 2010 AACE