Evaluation: Developing an Appropriate Survey Instrument for E-Learning
Authors: Brenda Ravenscroft, Ulemu Luhanga, Bev King
Abstract:
A comprehensive evaluation of online learning needs to include a blend of educational design, technology use, and online instructional practices that integrate technology appropriately for developing and delivering quality online courses. Research shows that classroom-based evaluation tools do not adequately capture the dynamic relationships between content, pedagogy, and technology in online courses. Furthermore, studies suggest that using classroom evaluations for online courses yields lower-than-normal scores for instructors, and may negatively affect faculty in administrative decisions. In 2014, the Faculty of Arts and Science at Queen’s University responded to this evidence by seeking an alternative to the university-mandated evaluation tool, which is designed for classroom learning. The Faculty is deeply engaged in e-learning, offering a large variety of online courses and programs in the sciences, social sciences, humanities, and arts. This paper describes the process by which a new student survey instrument for online courses was developed and piloted, the methods used to analyze the data, and the ways in which the instrument was subsequently adapted based on the results. It concludes with a critical reflection on the challenges of evaluating e-learning. The Student Evaluation of Online Teaching Effectiveness (SEOTE), developed by Arthur W. Bangert in 2004 to assess constructivist-compatible online teaching practices, provided the starting point. Modifications were made to allow the instrument to serve the two functions required by the university: student survey results provide the instructor with feedback to enhance their teaching, and also provide the institution with evidence of teaching quality for personnel processes. Changes were therefore made to the SEOTE to distinguish more clearly between evaluation of the instructor’s teaching and evaluation of the course design, since, in the online environment, the instructor is not necessarily the course designer. After the first pilot phase, involving 35 courses, the results were analyzed using Stobart’s validity framework as a guide. This process included statistical analyses of the data to test for reliability and validity, student and instructor focus groups to ascertain the tool’s usefulness in terms of the feedback it provided, and an assessment of the utility of the results by the Faculty’s e-learning unit responsible for supporting online course design. A set of recommendations led to further modifications to the survey instrument prior to a second pilot phase involving 19 courses. Following the second pilot, the statistical analyses were repeated, and further focus groups were held, this time involving deans and other decision makers, to determine the usefulness of the survey results in personnel processes. As a result of this inclusive process and robust analysis, the modified SEOTE instrument is currently being considered for adoption as the standard evaluation tool for all online courses at the university. Audience members at this presentation will be prompted to consider the factors that differentiate effective evaluation of online courses from that of classroom-based teaching. They will gain insight into strategies for introducing a new evaluation tool in a unionized institutional environment, and into methodologies for evaluating the tool itself.
Keywords: evaluation, online courses, student survey, teaching effectiveness