Quality Assurance in Teaching and Learning in Higher Education
Performance measures are increasingly used by governments in Australia, as elsewhere, to compare universities with one another. Performance comparisons may be linked to recurrent funding in future. Consequently, measures of performance have increasing salience for the higher education community. Regardless of how one views the use of performance indicators in this way, the future is likely to see more of this kind of application, not less.
The Course Experience Questionnaire (CEQ) (Ramsden, 1991) has become a central performance indicator of course quality in the Australian higher education sector because of its use by government to compare institutions' performances in teaching and course characteristics. The CEQ measures course characteristics using five multiple-item scales (the Good Teaching, Appropriate Workload, Clear Goals and Standards, Appropriate Assessment and Generic Skills scales).
Apart from generating debate, another impact of the CEQ has been to focus academics' attention on the kinds of things that might be done to improve their relative ranking on the CEQ measures. However, the problems with the CEQ make it very difficult for academics to focus their efforts and attention in ways that might effectively improve the curriculum and students' experiences of the course as a whole.
In this paper we report on one such attempt at the University of Queensland in Brisbane, Australia, where we piloted a method for gathering data on whole courses (as opposed to teaching or subject evaluation data) for the purpose of targeting improvement strategies. The project was funded from the University's teaching quality funds and involved the cooperation of the University's centrally funded Teaching and Educational Development Institute (TEDI). We describe the strategy and the reporting protocols that were developed, and reflect on the costs and benefits of engaging in this kind of data-gathering exercise for quality assurance and quality enhancement purposes.
The project, called the Continuous Curriculum Review (CCR) Pilot Project, was conceived as a method for gathering information about curricula that would help teaching staff overcome these problems and target specific areas of a course for improvement in a timely manner.
The project strategy
One department or school from each faculty was chosen for the pilot. With the help of TEDI consultants, the members of each school's or department's curriculum committee, in consultation with other school/department staff and members of teaching and learning committees, produced a questionnaire for each year of their degree program. Students were surveyed using these questionnaires early in the academic year (during Semester 1) in the case of the first participating school, and later in the year (during Semester 2) in the case of the remaining schools and departments.
Structure of the instruments
The features of the typical survey instrument are as follows:
Contact person: Dr Calvin Smith. Email: c.smith@mailbox.uq.edu.au. Voice: +61 (0)7 3365 3065. Fax: +61 (0)7 3365 1966.
Please cite as: Smith, C., Watt, K. and Robinson, W. (2000). Quality assurance through a continuous curriculum review (CCR) strategy: Reflections on a pilot project. In Flexible Learning for a Flexible Society, Proceedings of ASET-HERDSA 2000 Conference. Toowoomba, Qld, 2-5 July. ASET and HERDSA. http://cleo.murdoch.edu.au/gen/aset/confs/aset-herdsa2000/abstracts/smith2-abs.html