Teaching and Learning Forum 2014 [ Refereed papers ]
Dialogue by design: Creating a dialogic feedback cycle using assessment rubrics

Gael Gibbs
Murdoch University
Email: g.gibbs@murdoch.edu.au

Students consistently identify assessment feedback as a troubling aspect of their university experience. They find it confusing, difficult to act upon and unhelpful to their processes of establishing, understanding and fulfilling the criteria for high quality performance. Without a clear understanding of what constitutes 'high quality performance' students find it difficult to engage with learning: they are unable to establish goals to direct their cognitive and behavioural efforts and thereby remain engaged and motivated. Empirical evidence and attendant theorising suggest a need to reconceptualise assessment feedback within a dialogic process through which assessment goals and criterion meanings are shared and expectations and standards are clarified. In higher education assessment rubrics are a valuable but under-utilised tool capable of establishing and explicating assessment criteria and providing feedback. This paper argues for a feedback cycle that is embedded in normal teaching and learning processes and encourages staff and students to develop a shared dialogue that supports clear and transparent understandings of assessment criteria and standards. This dialogic feedback cycle simultaneously enables staff to provide high quality, criteria specific, actionable feedback and enables students to develop self-regulated learning skills. This model was tested in a pre-university enabling program in 2012 and 2013 and resulted in an overall 14.5 percentage-point increase in the student retention rate.


Introduction

Assessment tasks act as a primary driver of student-initiated learning activity. They are a point at which students seek to engage with learning, the curriculum and the institution. Students readily understand that assessment tasks are the instruments through which they are required to demonstrate the quantity and quality of their knowledge and skills and thereby maintain a place in the institution (Case, 2008). Many students utilise the information they gather from written materials, such as assessment task descriptions and formative feedback, to generate learning goals and regulate their cognitive and behavioural learning activities (Gibbs, 2006, p. 3). Disappointingly, research in Australian universities indicates that students are rarely provided with detailed information about the specific elements of assessment tasks that are considered important and what levels of task fulfilment are required to achieve various levels of quality (Gore, Ladwig, Elsworth, & Ellis, 2009).

This paper demonstrates how assessment rubrics can be used to initiate and cultivate transparent and nuanced assessment dialogue that enables the tacit assumptions, understandings and limitations inherent in all assessment tools and methods to be brought to the surface and investigated. The project reported in this paper shows that when time and space are created to verbally unpack the criteria and quality standards articulated in assessment rubrics, teaching staff and students are able to develop shared conceptual and linguistic understandings. These understandings subsequently frame and support the production of meaningful written feedback that concurrently underpins the development of self-regulated learning competence and significantly enhances student progression and retention. The outcomes of the project contrast with the criticisms of some authors (Bloxham, 2009; Bloxham, Boyd, & Orr, 2011) who claim that assessment rubrics are unsuitable for establishing effective assessment dialogue because they render assessment assumptions and limitations opaque.

This paper is divided into three sections. The first section discusses the importance of formative feedback in improving student learning and development and highlights some limitations students experience when interpreting and implementing formative feedback. The second section argues for assessment rubrics as both a tool for generating effective formative feedback within the university context and as a reference document for stimulating and anchoring assessment focussed dialogue between teaching staff and students. The final section outlines the research project undertaken in OnTrack, a pre-university enabling program, in 2012 and 2013. The project developed a dialogical feedback cycle that incorporated a series of assessment rubrics and was embedded into the program's normal teaching and learning activities. This cycle allowed staff and students to develop shared understandings of assessment criteria and standards, and enabled staff to provide high quality, actionable feedback that supported students to develop self-regulated learning competence.

Formative feedback as transformative information

University students rely on written assessment information, particularly formative feedback, to construct and refine their understanding of the criteria and standards by which their skills and knowledge are assessed (Flint & Johnson, 2011). Students expect assessment feedback to provide summative information that allows them to compare their performance to a standard, and more importantly they expect feedback to provide specific information that includes 'suggestions for ways to improve in subsequent assessments and performance' (Carr, Siddiqui, Jonas-Dwyer, & Miller, 2013, p. 4). Student expectations differ from those of teaching staff who similarly see 'feedback as an opportunity to inform students about their progress and performance' (Carr et al., 2013, p. 4) but tend to focus on correcting technical details (such as spelling and grammar) and highlighting positive aspects. A focus on technical details means that teaching staff often provide limited feedback about how to improve broader structural and/or academic elements (Duncan, 2007).

A meta-analysis of 250 studies, drawn from a variety of education sectors, shows that high quality formative feedback produces improved learning and achievement across all content areas, knowledge and skill types, and levels of education (Black & Wiliam, 1998). Similarly, other studies demonstrate that formative assessment feedback has a profound influence on how students engage with and enact the processes of learning (Gibbs, 2006; Hattie & Timperley, 2007; Hounsell, 2007). Formative feedback impacts the cognitive, behavioural and affective domains of student learning: it provides a point of engagement through which it is possible to stimulate transformation within students' learning, learning processes, behaviours, and motivation.

Unfortunately, surveys repeatedly identify an inability to understand or use assessment feedback as one of the most troubling aspects of the student experience (Carless, Salter, Yang, & Lam, 2011; Flint & Johnson, 2011; Krause, Hartley, James, & McInnis, 2005; Nicol, 2010). University students routinely experience written feedback as ambiguous, abstract, vague or cryptic (Higgins, Hartley, & Skelton, 2001; M. Walker, 2009). Critically, students frequently find feedback difficult to act upon (Gibbs, 2006; Poulos & Mahony, 2008). This is not surprising as students are seldom taught how to use feedback (Weaver, 2006) and as a result tend to rely on relatively unsophisticated strategies for decoding and implementing the written feedback they receive (Burke, 2009).

Difficulties with interpreting and applying assessment feedback are particularly common amongst beginning university students. Krause et al. (2005) reported that more than two-thirds of first year Australian university students consider the written feedback they receive to be unhelpful to their academic progress. Further, Krause et al. (2005) noted that this experience is disappointing for beginning students who position feedback on their first major piece of assessment as a 'watershed experience' that they hope will provide answers to their 'most pressing assessment questions' including 'how do I know what is expected of me?' and 'what does a good assignment in this subject look like?' (Krause et al., 2005, p. 32).

Beginning university students are more vulnerable to the impact of ineffectual or poor quality feedback because they are unlikely to question the practices or judgements of the academy and have fewer resources to bring to decoding and implementing it (Flint & Johnson, 2011). These factors mean that the feedback beginning students receive in response to early assessment tasks can have a potent effect on their affective domains such as self-esteem, motivation and engagement with learning. In combination these factors mean that beginning university students are at particular risk of experiencing assessment related stress that precipitates cognitive and behavioural disengagement. Such disengagement places students 'at risk' for progression and retention within the academy (Tinto, 1993).

Effective assessment feedback is one of the most powerful single elements in improving student learning and achievement (Hattie, 1987; Hattie & Timperley, 2007) because it can assist students to develop self-regulated learning. Self-regulated learning is an active, constructive process whereby learners set goals for their learning and monitor, regulate, and control their cognition, motivation and behaviour in accordance with those goals and the constraints of the environment (Pintrich & Zusho, 2002, p. 64). Effective formative feedback allows students to accurately identify useful goals and develop realistic action plans for their achievement. It is a foundation stone for engagement and transformative learning (Taylor, 2008) as it enables students to achieve agency over both the cognitive and affective aspects of their learning and learning processes (Nicol & Macfarlane-Dick, 2006).

Self-regulated learning competence is highly advantageous to university students. Self-regulated learners achieve better academic outcomes and learn more effectively than their peers (Pintrich, 1995; Zimmerman & Schunk, 2001). Further, self-regulated learners require less external support from teaching staff, peers and their families (Pintrich, 1995; Zimmerman & Schunk, 2001). In the Australian university context, where the student population is increasing in size and diversity and fiscal pressures have decreased the time and space available for teaching staff and students to interact face-to-face (Nicol, 2010), self-regulated learning competence is likely to become increasingly important.

The Australasian Survey of Student Engagement (Radloff & Coates, 2009, p. 22) recently found that up to fifty per cent of university students have never spoken directly with teaching staff about the outcomes achieved on an assessment task. This finding indicates that written feedback carries a heavy burden within the modern university. More critically, it suggests that the feedback students receive and their ability to use it to self-regulate their learning will have a strong impact on student progression and retention. Assessment feedback is influential in determining how students experience their attempts to succeed in the system. This experience cannot be separated from the broader university experience and therefore is a component of how universities 'understand students' experiences of alienation and engagement' (Case, 2008, p. 330).

Based on a synthesis of research literature about formative assessment and self-regulated learning Nicol and Macfarlane-Dick (2006, p. 205) assert that good feedback practice conforms to seven principles:

  1. helps clarify what good performance is (goals, criteria, expected standards);
  2. facilitates the development of self-assessment (reflection) in learning;
  3. delivers high quality information to students about their learning;
  4. encourages teacher and peer dialogue around learning;
  5. encourages positive motivational beliefs and self-esteem;
  6. provides opportunities to close the gap between current and desired performance;
  7. provides information to teachers that can be used to help shape teaching.
Importantly, these principles incorporate affective aspects of feedback alongside the more commonly addressed cognitive and behavioural aspects (Nicol & Macfarlane-Dick, 2006).

Using assessment rubrics to transform feedback

Assessment rubrics provide a useful tool for generating effective formative feedback that is supportive of Nicol and Macfarlane-Dick's (2006) principles. An assessment rubric is a 'document that articulates the expectations for an assignment by listing criteria, or what counts, and describing levels of quality from excellent to poor' (Reddy & Andrade, 2010, p. 435). It has three essential components (Reddy & Andrade, 2010):
  1. A list of evaluation criteria: indicators that reflect the processes and/or content that the assessor considers important in determining the quality of the performance and/or product.
  2. A continuum of quality definitions: a detailed indicative description of how and to what level the evaluation criteria must be fulfilled to demonstrate incremental levels of proficiency and/or achievement.
  3. A scoring strategy: an algorithm and/or scale used to convert the quality definition assigned to each evaluation criterion into a summative grade.
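As an illustration of the third component, a scoring strategy can be sketched as a weighted conversion of per-criterion quality levels into a percentage grade. The sketch below is hypothetical: the criterion names and weights are illustrative and not drawn from any published rubric, though the five quality levels match those used later in this paper.

```python
# Hypothetical sketch of a rubric scoring strategy: each criterion is assigned
# a quality level on the continuum, and the levels are converted into a
# weighted percentage grade. Criterion names and weights are illustrative only.

LEVELS = {"fail": 0, "pass": 1, "credit": 2, "distinction": 3,
          "high distinction": 4}

def summative_grade(assessed, weights):
    """Convert per-criterion quality levels into a percentage grade."""
    top = max(LEVELS.values())            # best achievable level per criterion
    total_weight = sum(weights.values())
    score = sum(LEVELS[level] * weights[criterion]
                for criterion, level in assessed.items())
    return 100 * score / (total_weight * top)

# Example: one submission assessed against three illustrative criteria,
# with 'argument' weighted twice as heavily as the other criteria.
grade = summative_grade(
    {"argument": "credit", "evidence": "distinction", "expression": "pass"},
    {"argument": 2.0, "evidence": 1.0, "expression": 1.0},
)
```

Under these illustrative weights the example submission converts to a grade of 50 per cent; an actual rubric would define its own criteria, weights and conversion scale.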
Assessment rubrics are able to address the seven principles of high quality formative feedback in the following ways. The list of evaluation criteria in tandem with the continuum of quality definitions clarify what good performance is (1), deliver high quality information to students about their learning (3) and provide information to teachers that can be used to help shape teaching (7). Additionally, where assessment rubrics are made accessible to students prior to the receipt of assessment feedback from teaching staff the list of evaluation criteria in tandem with the continuum of quality definitions can also be used to facilitate the development of self-assessment (reflection) in learning (2). Further, the arrangement of the quality definitions as a continuum, that identifies the extent to which specific criteria must be fulfilled to achieve incremental increases in quality, provides opportunities for students to close the gap between current and desired performance (6). Importantly, this arrangement of the criteria and quality definitions facilitates a form of scaffolded learning (Vygotsky, 1986) 'that allows students to come to an understanding of the quality of the work expected and shows students ways to achieve this quality' (S. Walker & Hobson, 2013, p. 3) thereby enhancing agency which encourages positive motivational beliefs and self-esteem (5). Finally, assessment rubrics are able to function as reference documents that provide a stimulus and anchor for robust teacher and peer dialogue around learning and assessment (4).

The project: OnTrack

OnTrack is a pre-university enabling program that provides an alternative entry pathway for approximately 300 students per year to Murdoch University. It is specifically designed to accommodate the needs of people identified as belonging to an 'equity' group (Department of Employment Education and Training, 1990), particularly people who have experienced educational disruption or disadvantage during their pre-tertiary years as a result of socio-economic factors. The central aim of OnTrack is to improve the recruitment, progression and retention rates of students from historically under-represented groups within the wider student cohort of the University. The OnTrack program aims to provide a transformative learning experience (Taylor, 2008).

Since its inception, in 2008, the teaching and learning processes of OnTrack have reflected constructionist understandings (Vygotsky, 1986) that seek to engage students as agents of their own transformation. Specifically, OnTrack assessment practices have been informed by research that demonstrates the value of formative assessment as a learning tool (Butler, 1987, 1988). This guiding frame is outlined for students in the following way:

OnTrack is informed by research that indicates the value of using assessment as a learning tool. When assessment focuses on what the student knows (and does not know), rather than on what mark they should be given, it is possible for the academic to concentrate on providing quality feedback. In turn, when it is not the mark that counts, students can more readily assimilate constructive criticism and use it to develop their academic and reflective practices (Gibbs, 2012, p. 13).
Students who successfully complete OnTrack are offered admission into their chosen program of study at Murdoch University. To successfully complete OnTrack a student must fulfil the following three criteria:
  1. Attempt the ten assessment tasks as detailed in the program materials.
  2. Achieve a greater than 50 per cent aggregate of marks available for the ten assessment tasks.
  3. Fulfil performance goals for class attendance and participation.
The ten assessment tasks consist of three learning portfolios, three essays, three oral presentations and a mock exam. The portfolio, essay and oral presentation tasks are deliberately iterative and are scheduled so that students are able to receive and apply feedback between attempts. Students receive written feedback and a summative grade for all tasks, except the mock exam and the performance goal for which only summative feedback is provided.
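The three completion criteria amount to a simple conjunction, which can be sketched as follows. Only the thresholds (ten attempted tasks, a greater than 50 per cent aggregate) come from the program rules above; the function and parameter names are illustrative, and the attendance and participation criterion is reduced to a single boolean.

```python
# Sketch of the OnTrack completion check. Function and parameter names are
# illustrative; the thresholds (ten tasks, >50% aggregate) are from the
# program rules described in the text.

def completes_ontrack(marks, max_marks, performance_goals_met):
    """Return True if a student fulfils all three completion criteria."""
    attempted_all_tasks = len(marks) == 10             # criterion 1
    aggregate = 100 * sum(marks) / sum(max_marks)      # criterion 2
    return attempted_all_tasks and aggregate > 50 and performance_goals_met

# Example: ten tasks attempted, a 60 per cent aggregate, performance goals met.
offered_admission = completes_ontrack([6.0] * 10, [10.0] * 10, True)
```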

The problem

At the end of 2011, OnTrack's annual review and reporting processes exposed some concerns that centred on the nexus between assessment feedback and student learning. Teaching staff articulated two primary concerns: that students frequently demonstrated anxiety related to their understanding of assessment criteria and required standards of task performance (particularly for initial assessment tasks); and students frequently failed to act on written feedback and claimed that they did not understand what they needed to do to improve. Teaching staff anecdotally noted that a high proportion of students requested extensions for the first assessment tasks in each iterative series while a lesser, but substantial, portion of students withdrew from OnTrack prior to completing the initial iteration of each task.

An analysis of OnTrack enrolment and progression data provided support, albeit inconclusive, for these accounts. Over the four-year period between 2008 and 2011 approximately one third (32.3%) of students requested extensions to the due dates for tasks within the initial cycle of assessments and almost one third of the cohort (30.8%) withdrew during the same period. In evidence of the second concern, teaching staff expressed frustration at the need to constantly repeat similar feedback on subsequent iterations of each task whilst observing little change in overall task performance or modification of learning strategies. Again, analysis of OnTrack teaching records and progression data provided some, also inconclusive, evidence in support of this perception. Over the same four-year period less than two thirds (60.1%) of students demonstrated sufficient outcomes to be offered admission into the University while an additional 7.9 per cent of students completed all of the required assessment tasks but failed to achieve an aggregate of 50 per cent and were, as a result, not offered admission into the University.

In 2011, as the new unit coordinator, I also observed divergent understandings with regard to assessment criteria and performance standards amongst the teaching staff. This divergence was particularly evident during assessment moderation activities. Staff frequently disagreed over what constituted assessment criteria, the relative importance of specific criteria and the extent to which each criterion must be addressed for each of the five quality levels (high distinction, distinction, credit, pass and fail) to be reached. At many meetings such discussions were both passionate and protracted.

The design principles

In response to these issues I redesigned OnTrack's assessment practices on the basis that they needed to:
  1. continue to reflect the program's commitment to constructionist teaching and learning and formative assessment.
  2. provide both teaching staff and students with clear and consistent assessment criteria and performance standards, enable teaching staff to produce criteria-specific feedback, and enable students to understand and utilise assessment feedback.
  3. create conditions that fostered the development of self-regulated learning competence amongst students.
The purpose of the project was to develop tools and processes necessary to enable teaching staff and students in the OnTrack program to clarify and utilise jointly understood assessment criteria and performance standards. These tools and processes would allow teaching staff to produce specific, useable feedback and would teach students how to use feedback to develop self-regulated learning competence. The project was conducted in three phases:
  1. The development of the assessment rubrics
  2. The development and articulation of a dialogic cycle
  3. The implementation of the assessment rubrics within the dialogic cycle
Using Nicol and Macfarlane-Dick's (2006) seven principles of good feedback practice, plus an eighth principle of sustainability, the following framework was developed: the assessment practices will:
  1. clarify what good performance is (goals, criteria, expected standards).
  2. facilitate the development of self-assessment (reflection) in learning.
  3. deliver high quality information to students about their learning.
  4. encourage teacher and peer dialogue around learning.
  5. encourage positive motivational beliefs and self-esteem.
  6. provide opportunities to close the gap between current and desired performance.
  7. provide information to teachers that can be used to help shape teaching.
  8. be sustainable and integrated into the teaching and learning practices of OnTrack.
The project utilised assessment rubrics as the sole tools for recording and communicating assessment feedback, and established a dialogic cycle in which feedback was positioned as an integral part of the iterative processes of teaching and learning rather than as a freestanding unit of information (Taras, 2003).

The development of assessment rubrics

At the beginning of 2012 twelve assessment rubrics were developed. Each assessment rubric was laid out according to a standard template that identified four or five key assessment foci and described a set of specific assessment criteria and corresponding quality standards. The assessment rubrics were developed in four sets (portfolio, essay, oral presentation, and attendance and participation) reflective of the iterative nature of the tasks. The criteria articulated within each set elucidated a developmental trajectory aligned directly to the skills and understandings explicitly taught up to that point. Similarly, the quality standards incrementally increased with regard to the extent and standard of performance required for subsequent iterations of the assessment task.

The assessment criteria and descriptors of quality standards used in the rubrics were derived from the previously established weekly learning objectives of the OnTrack program. The selection, development and refinement of criteria and standards were undertaken over a period of eight weeks. Critical and/or significant learning objectives were first selected from the published unit materials. These objectives were re-written as assessable criteria that could be referenced to tangible evidence, such as an identifiable behaviour or demonstrable skill. The assessable criteria were then arranged into a coherent developmental sequence. Finally, the wording of each criterion was reviewed to ensure that concepts and vocabulary were used consistently, both within the rubrics and in alignment with the course materials, and that they used common language to the greatest extent possible. Once the rubrics were developed, all teaching staff in OnTrack were invited to provide feedback on their structure as well as the assessment criteria and descriptors of quality standards contained in each.

The development and articulation of a dialogic cycle

As the consultation phase of the rubric development concluded, I set about designing a dialogic feedback cycle that could be embedded within the usual teaching and learning activities of OnTrack. The primary aim of establishing the cycle was to create conditions that encourage and support the development of self-regulated learning competence. Consistent with this aim I used the three conditions Sadler (1989) identified as necessary for students to effectively use feedback, that students:
  1. possess a clear understanding of the criteria for a high quality performance;
  2. accurately compare their current performance to that of the high quality performance;
  3. identify specific actions that will reduce the gap between their current performance and that of a high quality performance.
Sadler's (1989) conditions are particularly well aligned to the concept of dialogic feedback cycles in that they highlight the need for students to possess some of the same evaluative skills and understandings as their teachers. These skills enable students to accurately understand what is required, compare their performance against this requirement, understand the feedback provided by assessors and on this basis formulate specific actions that improve their performance with respect to the task requirements. I used Sadler's (1989) conditions to guide the creation of a five-step dialogic feedback cycle (Figure 1). The feedback cycle consists of two tutorials and three independent activities which together establish the three conditions necessary for students to effectively use feedback (Sadler, 1989).


Figure 1: The dialogic feedback cycle

The implementation of the assessment rubrics within the dialogic feedback cycle

At the beginning of 2012 the dialogic feedback cycle (Figure 1) and rubrics were incorporated into the usual teaching and learning activities of OnTrack. Two tutorials were allocated for each assessment task to provide time and space for dialogue between teaching staff and students. Detailed lesson plans were created for each of the allocated tutorials and were integrated into the tutors' guide that is provided to all teaching staff in their induction package.

Teaching staff underwent training specific to the implementation of the assessment rubrics and the dialogic feedback cycle. Many of the teaching staff had been involved in the development of the rubrics and as a result were optimistic about their use as an assessment tool. The concept of a dialogic cycle was more novel and was positioned as a useful process for acknowledging and formalising the conversations previously conducted with individual students or small groups.

The dialogic feedback cycle consists of five parts. It begins with a tutorial session in which teaching staff and students 'unpack' the details of the assessment task. During this session both the task description, which is articulated in the unit guide, and the assessment rubric, which is available for download from the OnTrack webpage, are deconstructed and discussed. The discussion focuses particularly on developing a shared understanding of the concepts and vocabulary used in the assessment criteria to be applied to the submitted work. The aim is for students and staff to jointly explicate the specific skills and understandings that are to be assessed and the means, extent and standard to which those skills and understandings must be demonstrated for the work to fulfil the pre-determined levels of quality.

Having constructed a comprehensive and explicit understanding of what is required and how it will be evaluated, the students complete the assessment task and submit it for assessment by the staff member with whom they jointly negotiated their shared understanding. While students wait to receive feedback they are asked to self-evaluate their submission using the appropriate assessment rubric.

One week after the submission date the teaching staff and students undertake a second tutorial in which the assessment feedback is 'unpacked'. For the first part of the tutorial the students work individually to compare and analyse the assessment feedback generated through the use of the rubric: both the feedback provided by the assessor and their own self-assessment. Students begin by comparing their self-assessment with that of the assessor, specifically they note where the assessment feedback is similar and where it differs. The aim is to identify elements of the criteria or quality standards that require further discussion. Students then continue to work individually to create an action plan that identifies one specific achievement and two specific, measurable goals and actions required to accomplish them.

During the second part of the tutorial students work in small groups to discuss the elements of the criteria or quality standards that they identified as being in need of further discussion and to share their action plans. Students work collaboratively to refine their learning and performance goals and to identify alternative ways of accomplishing them. Each group then feeds back a selection of their learning goals and accompanying actions to the whole class. Finally, each group highlights to the tutor any elements of the criteria or quality standards that require further attention during the initial tutorial of the next cycle.

Following the second tutorial students may annotate an assessment rubric for the next related assessment task by highlighting the specific assessment criteria they are planning to address through their action plan. The annotated rubric is later attached to the assessment task when it is submitted by the student and is ultimately used by the staff member to assess the submission. This process allows each student to clearly communicate their aims and foci to the assessor and allows the assessor to focus their feedback on the needs identified by the student.

The project outcomes

The most obvious results from the introduction of both the assessment rubrics and the dialogic feedback model were evident in the students' outcomes, particularly those of progression and retention. A notable improvement was achieved across all of the metrics that had originally indicated the issues the project sought to redress (Table 1).

Table 1: Student outcomes pre-project and post-project


                                                     2008-2011   2012-2013 (S1 only)
                                                     (n=866)     (n=561)
Progression rate beyond initial assessment tasks     69.2%       91.7%
Extension request rate for initial assessment tasks  32.3%       3.9%
Retention rate to end of OnTrack                     68.0%       82.5%
Progression to University entry                      60.1%       77.2%

A post-project analysis of OnTrack enrolment and progression data showed that student requests for extensions to the due dates of tasks within the initial cycle of assessments decreased by 28.4 percentage points while the progression rate beyond initial assessment tasks increased by 22.5 percentage points. This data is understood to indicate that the process enabled students to better understand what is required of them for high quality performance on the initial assessment tasks. Additional post-project analysis showed that the retention rate within the OnTrack program increased by 14.5 percentage points and the progression rate to University entry increased by 17.1 percentage points. This data suggests that the project supported an overall improvement in students' engagement and achievement.
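The reported changes follow directly from Table 1 and can be recomputed with a short calculation (the dictionary keys are shorthand labels, not terms from the report):

```python
# Recompute the pre-project to post-project changes from Table 1
# (values in per cent; key names are shorthand labels).
pre  = {"progression": 69.2, "extensions": 32.3, "retention": 68.0, "entry": 60.1}
post = {"progression": 91.7, "extensions": 3.9, "retention": 82.5, "entry": 77.2}

# Percentage-point change for each metric; extensions fell, the rest rose.
change = {metric: round(post[metric] - pre[metric], 1) for metric in pre}
# change == {"progression": 22.5, "extensions": -28.4, "retention": 14.5, "entry": 17.1}
```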

The introduction of assessment rubrics generated staff and student responses that parallel those reported by Reddy and Andrade (2010) as common in higher education institutions. On the whole student responses were positive. Unsolicited emails highlighted that many students valued the transparency inherent in the use of rubrics. Specifically, they noted that the rubrics enabled them to understand what was being assessed and how it was being assessed, and that this allowed them to focus the energy they put into improving their work. Teaching staff responses were more restrained but generally positive. Some staff required time to expand their conception of assessment and the role of feedback and initially saw the rubrics as overly detailed and constraining. Others revelled in the opportunity provided by the rubrics to efficiently generate consistent, objective formative feedback.

The introduction of the dialogic feedback model produced more conflicted responses. Even though the theoretical model is embedded in the teaching materials, staff value and commit to its activities to differing degrees. As both staff and students have become familiar with the territory of assessment-focused dialogue and begun to interact with the concepts, more open and nuanced understandings have emerged. The one component of the dialogic feedback model that enjoyed unanimous support from staff from the outset was its focus on having students use feedback to improve their skills. Staff voiced pleasure in knowing that the feedback they provided was acted upon, and in being provided with clear information about how to tailor their comments.

Conclusion

For students to understand the specific and nuanced meanings of assessment criteria and quality standards they often require assistance to develop and refine their working knowledge of the fundamental concepts and vocabulary being used (S. Walker & Hobson, 2013). This is because the language of assessment is neither transparent nor readily accessible to students (Bloxham & West, 2007; Lea & Street, 2000). Such assistance must go beyond simple explication and embrace rich, iterative and penetrating dialogue that supports 'both socialisation and discussion into a shared community of understanding' (S. Walker & Hobson, 2013). The dialogic feedback model provides a space in which interpretations are shared, meanings are negotiated and expectations are clarified (Carless et al., 2011).

This paper has demonstrated that access to assessment rubrics that clearly articulate the criteria by which achievement on assessment tasks is evaluated, at a range of levels, creates a space in which teaching staff and students are able to establish dialogue focussed on developing a shared language and understanding of assessment. Additionally, it has shown that when such dialogue is embedded within a cycle that creates a closed loop of feedback provision and utilisation, it is possible to increase student engagement and achievement. Such reciprocal and collective involvement is important as it allows goals and performance standards to be established and made explicit (as opposed to tacit), and engenders higher levels of agency and motivation through mutual ownership of the outcomes. These elements are crucially important for commencing university students, who are more vulnerable to the cognitive, behavioural and affective impacts of assessment feedback. The outcomes of this project suggest that the use of assessment rubrics within a dialogic feedback cycle may provide a robust process for facilitating student engagement and transformative learning by enabling and supporting the development of self-regulated learning competencies.

Acknowledgements

Many thanks to Julia Hobson for feedback on an earlier version of this paper.
Thanks also to Clare Freeman for her assistance in designing the assessment rubrics.

References

Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74. http://dx.doi.org/10.1080/0969595980050102

Bloxham, S. (2009). Marking and moderation in the UK: False assumptions and wasted resources. Assessment & Evaluation in Higher Education, 34(2), 209-220. http://dx.doi.org/10.1080/02602930801955978

Bloxham, S., Boyd, P. & Orr, S. (2011). Mark my words: The role of assessment criteria in UK higher education grading practices. Studies in Higher Education, 36(6), 655-670. http://dx.doi.org/10.1080/03075071003777716

Bloxham, S. & West, A. (2007). Learning to write in higher education: Students' perceptions of an intervention in developing understanding of assessment criteria. Teaching in Higher Education, 12(1), 77-89. http://dx.doi.org/10.1080/13562510601102180

Burke, D. (2009). Strategies for using feedback students bring to higher education. Assessment & Evaluation in Higher Education, 34(1), 41-50. http://dx.doi.org/10.1080/02602930801895711

Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interests and performance. Journal of Educational Psychology, 79(4), 474-482. http://dx.doi.org/10.1037/0022-0663.79.4.474

Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and involvement. British Journal of Educational Psychology, 58(1), 1-14. http://dx.doi.org/10.1111/j.2044-8279.1988.tb00874.x

Carless, D., Salter, D., Yang, M. & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. http://dx.doi.org/10.1080/03075071003642449

Carr, S. E., Siddiqui, Z. S., Jonas-Dwyer, D. & Miller, S. (2013). Enhancing feedback for students across a health sciences faculty. In Design, develop, evaluate: The core of the learning environment. Proceedings of the 22nd Annual Teaching Learning Forum, 7-8 February 2013. Perth: Murdoch University. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2013/refereed/carr.html

Case, J. M. (2008). Alienation and engagement: Development of an alternative theoretical framework for understanding student learning. Higher Education, 55(3), 321-332. http://dx.doi.org/10.1007/s10734-007-9057-5

Department of Employment Education and Training (1990). A fair chance for all: Higher education that's within everyone's reach. Canberra: Department of Employment Education and Training.

Duncan, N. (2007). 'Feed-forward': Improving students' use of tutors' comments. Assessment & Evaluation in Higher Education, 32(3), 271-283. http://dx.doi.org/10.1080/02602930600896498

Flint, N. & Johnson, B. (2011). Towards fairer university assessment: Recognizing the concerns of students. London: Routledge.

Gibbs, G. (2006). How assessment frames student learning. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.

Gibbs, G. (2012). OnTrack (EQU060): Unit information. Perth: Murdoch University. http://handbook.murdoch.edu.au/units/details/?unit=EQU060&year=2014

Gore, J., Ladwig, J., Elsworth, W. & Ellis, H. (2009). Quality assessment framework: A guide for assessment practice in higher education. ALTC Grant Report: The University of Newcastle. http://www.olt.gov.au/system/files/resources/QAF%20FINAL%20doc%20for%20print.pdf

Hattie, J. (1987). Identifying the salient facets of a model of student learning: A synthesis and meta-analysis. International Journal of Educational Research, 11, 187-212.

Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. http://dx.doi.org/10.3102/003465430298487

Higgins, R., Hartley, P. & Skelton, A. (2001). Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269-274. http://dx.doi.org/10.1080/13562510120045230

Hounsell, D. (2007). Towards more sustainable feedback to students. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education (pp. 101-113). London: Routledge.

Krause, K., Hartley, R., James, R. & McInnis, C. (2005). The first year experience in Australian universities: Findings from a decade of national studies. http://www.cshe.unimelb.edu.au/research/experience/docs/FYEReport05KLK.pdf

Lea, M. & Street, B. (2000). Student writing and staff feedback in higher education: An academic literacies approach. In M. Lea & B. Stierer (Eds.), Student writing in higher education (pp. 32-46). Buckingham: SRHE and Open University Press.

Nicol, D. J. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. http://dx.doi.org/10.1080/02602931003786559

Nicol, D. J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090

Pintrich, P. (1995). Understanding self-regulated learning. San Francisco: Jossey-Bass.

Pintrich, P. & Zusho, A. (2002). Student motivation and self-regulated learning in the college classroom. In J. Smart & W. Tierney (Eds.), Higher education: Handbook of theory and research (Vol. XVII). New York: Agathon Press.

Poulos, A. & Mahony, M. J. (2008). Effectiveness of feedback: The students' perspective. Assessment & Evaluation in Higher Education, 33(2), 143-154. http://dx.doi.org/10.1080/02602930601127869

Radloff, A. & Coates, H. (2009). Doing more for learning: Enhancing engagement and outcomes. Australasian survey of student engagement. Melbourne: ACER. http://www.acer.edu.au/documents/aussereports/AUSSE_2009_Student_Engagement_Report.pdf

Reddy, Y. M. & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. http://dx.doi.org/10.1080/02602930902862859

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144. http://dx.doi.org/10.1007/BF00117714

Taylor, E. (2008). Transformative learning theory. New Directions for Adult and Continuing Education, 119, 5-15. http://dx.doi.org/10.1002/ace.301

Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago: University of Chicago Press.

Vygotsky, L. (1986). Thought and language. Cambridge, MA: MIT Press.

Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment & Evaluation in Higher Education, 34(1), 67-78. http://dx.doi.org/10.1080/02602930801895752

Walker, S. & Hobson, J. (2013). Interventions in teaching first-year law: Feeding forward to improve learning outcomes. Assessment & Evaluation in Higher Education. http://dx.doi.org/10.1080/02602938.2013.832728

Weaver, M. (2006). Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education, 31(3), 379-394. http://dx.doi.org/10.1080/02602930500353061

Zimmerman, B. & Schunk, D. (2001). Self-regulated learning and academic achievement: Theoretical perspectives. New Jersey: Lawrence Erlbaum Associates.

Please cite as: Gibbs, G. (2014). Dialogue by design: Creating a dialogic feedback cycle using assessment rubrics. In Transformative, innovative and engaging. Proceedings of the 23rd Annual Teaching Learning Forum, 30-31 January 2014. Perth: The University of Western Australia. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2014/refereed/gibbs.html

© Copyright 2014 Gael Gibbs. The author assigns to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format, provided that the article is used and cited in accordance with the usual academic conventions.
