
Quality assurance through a continuous curriculum review (CCR) strategy: Reflections on a pilot project

Calvin Smith
Kerrianne Watt

Teaching and Educational Development Institute, University of Queensland
Wayne Robinson
Deputy Vice-President (Academic), Deakin University, Geelong



Woodward (1993:113) defines institutional research as "the activity in which the research effort of an academic institution is directed at the solution of its own problems and to the enhancement of its own performance".

Quality Assurance in Teaching and Learning in Higher Education
Performance measures are used increasingly by governments in Australia, as elsewhere, to compare universities with each other, and performance comparisons may in future be linked to recurrent funding. Consequently, measures of performance have increasing salience for the higher education community. Regardless of how one views the use of performance indicators in this way, the future is likely to see as much or more of this kind of application, not less.

The Course Experience Questionnaire (CEQ) (Ramsden, 1991) has become a central performance indicator of course quality in the Australian higher education sector because of its use by government to compare institutions' performance in teaching and course characteristics. The CEQ measures course characteristics using five multiple-item scales: the Good Teaching Scale, Appropriate Workload Scale, Clear Goals and Standards Scale, Appropriate Assessment Scale and Generic Skills Scale.

Apart from generating debate, another impact of the CEQ has been to focus academics' attention on what might be done to improve their relative ranking on the CEQ measures. However, the problems with the CEQ make it very difficult for academics to direct their efforts and attention in ways that might effectively improve the curriculum and students' experience of the course as a whole.

In this paper we report on one such attempt at the University of Queensland in Brisbane, Australia. We piloted a method for gathering data on whole courses (as opposed to teaching or subject evaluation data) for the purposes of targeting improvement strategies. The project was funded from the University's teaching quality funds and involved the cooperation of the University's centrally funded Teaching and Educational Development Institute (TEDI). In this paper we describe the strategy and the reporting protocols that were developed, and reflect on the costs and benefits of engaging in this kind of data gathering exercise for quality assurance and quality enhancement purposes.

The current project, the Continuous Curriculum Review (CCR) Pilot Project, was conceived as a method of gathering information about curricula that would help teaching staff overcome these problems and target specific areas of the course for improvement in a timely manner.

The project strategy
One department or school from each faculty was chosen for this pilot. With the help of consultants from the Teaching and Educational Development Institute (TEDI), the members of each school's or department's curriculum committee, in consultation with other school or department staff and with members of teaching and learning committees or other appropriate persons, produced a questionnaire for each year of their degree program. Students were surveyed using these questionnaires early in the academic year (during Semester 1) in the first participating school, and later in the year (during Semester 2) in the remaining schools and departments.

Structure of the instruments
The features of the typical survey instrument are as follows:

  1. CEQ items - for comparison with the "real" CEQ data collected from graduates each year;
  2. Items relating to the whole year in question - either to check whole year objectives or goals, or to assess such things as peer networking;
  3. Items relating to individual subjects - for example, to assess the adequacy of labs or the effectiveness of tutorials (some questions can be asked of all subjects in a year, whereas others are asked only of the subjects to which they relate);
  4. Items measuring achievement of various graduate attributes - students were asked to self-assess their level of skill in each of the graduate attribute areas that the University of Queensland has determined should be developed in students over the course of a degree, stating their level of competence at both the start and the end of the year in question.
This project is continuing into its next phase (implementing and measuring the effects of changes to teaching and learning design), and this early phase is being evaluated. We report on the process and its implementation, weigh its pros and cons, and offer recommendations for practice for those wishing to trial a similar approach.

Contact person: Dr Calvin Smith. Email: c.smith@mailbox.uq.edu.au
Voice: +61(0)7 3365 3065 Fax: +61(0)7 3365 1966

Please cite as: Smith, C., Watt, K. and Robinson, W. (2000). Quality assurance through a continuous curriculum review (CCR) strategy: Reflections on a pilot project. In Flexible Learning for a Flexible Society, Proceedings of ASET-HERDSA 2000 Conference. Toowoomba, Qld, 2-5 July. ASET and HERDSA. http://cleo.murdoch.edu.au/gen/aset/confs/aset-herdsa2000/abstracts/smith2-abs.html



Created 19 June 2000. Last revised: 19 June 2000. HTML: Roger Atkinson [atkinson@cleo.murdoch.edu.au]
This URL: http://cleo.murdoch.edu.au/gen/aset/confs/aset-herdsa2000/abstracts/smith2-abs.html