Category: Research | Teaching and Learning Forum 2014 [ Refereed papers ]

Dialogue by design: Creating a dialogic feedback cycle using assessment rubrics
Gael Gibbs
Murdoch University
Email: g.gibbs@murdoch.edu.au
Students consistently identify assessment feedback as a troubling aspect of their university experience. They find it confusing, difficult to act upon and unhelpful to their processes of establishing, understanding and fulfilling the criteria for high quality performance. Without a clear understanding of what constitutes 'high quality performance' students find it difficult to engage with learning: they are unable to establish goals to direct their cognitive and behavioural efforts and thereby remain engaged and motivated. Empirical evidence and attendant theorising suggest a need to reconceptualise assessment feedback within a dialogic process through which assessment goals and criterion meanings are shared and expectations and standards are clarified. In higher education assessment rubrics are a valuable but under-utilised tool capable of establishing and explicating assessment criteria and providing feedback. This paper argues for a feedback cycle that is embedded in normal teaching and learning processes and encourages staff and students to develop a shared dialogue that supports clear and transparent understandings of assessment criteria and standards. This dialogic feedback cycle simultaneously enables staff to provide high quality, criteria specific, actionable feedback and enables students to develop self-regulated learning skills. This model was tested in a pre-university enabling program in 2012 and 2013 and resulted in an overall 14.5 percentage point increase in the student retention rate.
This paper demonstrates how assessment rubrics can be used to initiate and cultivate transparent and nuanced assessment dialogue that enables the tacit assumptions, understandings and limitations inherent in all assessment tools and methods to be brought to the surface and investigated. The project reported in this paper shows that when time and space are created to verbally unpack the criteria and quality standards articulated in assessment rubrics, teaching staff and students are able to develop shared conceptual and linguistic understandings. These understandings subsequently frame and support the production of meaningful written feedback that concurrently underpins the development of self-regulated learning competence and significantly enhances student progression and retention. The outcomes of the project stand in contrast to the criticism of some authors (Bloxham, 2009; Bloxham, Boyd & Orr, 2011) who claim that assessment rubrics are unsuitable for establishing effective assessment dialogue because they render assessment assumptions and limitations opaque.
This paper is divided into three sections. The first section discusses the importance of formative feedback in improving student learning and development and highlights some limitations students experience when interpreting and implementing formative feedback. The second section argues for assessment rubrics as both a tool for generating effective formative feedback within the university context and as a reference document for stimulating and anchoring assessment focussed dialogue between teaching staff and students. The final section outlines the research project undertaken in OnTrack, a pre-university enabling program, in 2012 and 2013. The project developed a dialogical feedback cycle that incorporated a series of assessment rubrics and was embedded into the program's normal teaching and learning activities. This cycle allowed staff and students to develop shared understandings of assessment criteria and standards, and enabled staff to provide high quality, actionable feedback that supported students to develop self-regulated learning competence.
A meta-analysis of 250 studies, drawn from a variety of education sectors, shows that high quality formative feedback produces improved learning and achievement across all content areas, knowledge and skill types, and levels of education (Black & Wiliam, 1998). Similarly, other studies demonstrate that formative assessment feedback has a profound influence on how students engage with and enact the processes of learning (Gibbs, 2006; Hattie & Timperley, 2007; Hounsell, 2007). Formative feedback impacts the cognitive, behavioural and affective domains of student learning: it provides a point of engagement through which it is possible to stimulate transformation within students' learning, learning processes, behaviours, and motivation.
Unfortunately, surveys repeatedly identify an inability to understand or use assessment feedback as one of the most troubling aspects of the student experience (Carless, Salter, Yang & Lam, 2011; Flint & Johnson, 2011; Krause, Hartley, James & McInnis, 2005; Nicol, 2010). University students routinely experience written feedback as ambiguous, abstract, vague or cryptic (Higgins, Hartley & Skelton, 2001; M. Walker, 2009). Critically, students frequently find feedback difficult to act upon (Gibbs, 2006; Poulos & Mahony, 2008). This is not surprising as students are seldom taught how to use feedback (Weaver, 2006) and as a result tend to rely on relatively unsophisticated strategies for decoding and implementing the written feedback they receive (Burke, 2009).
Difficulties with interpreting and applying assessment feedback are particularly common amongst beginning university students. Krause et al. (2005) reported that more than two-thirds of first year Australian university students consider the written feedback they receive to be unhelpful to their academic progress. Further, Krause et al. (2005) noted that this experience is disappointing for beginning students who position feedback on their first major piece of assessment as a 'watershed experience' that they hope will provide answers to their 'most pressing assessment questions' including 'how do I know what is expected of me?' and 'what does a good assignment in this subject look like?' (Krause et al., 2005, p. 32).
Beginning university students are more vulnerable to the impact of ineffectual or poor quality feedback because they are unlikely to question the practices or judgements of the academy and have fewer resources with which to decode and implement the feedback they receive (Flint & Johnson, 2011). These factors mean that the feedback beginning students receive in response to early assessment tasks can have a potent effect on their affective domains such as self-esteem, motivation and engagement with learning. In combination these factors mean that beginning university students are at particular risk of experiencing assessment related stress that precipitates cognitive and behavioural disengagement. Such disengagement places students 'at risk' for progression and retention within the academy (Tinto, 1993).
Effective assessment feedback is one of the single most powerful elements in improving student learning and achievement (Hattie, 1987; Hattie & Timperley, 2007) because it can assist students to develop self-regulated learning. Self-regulated learning is an active, constructive process whereby learners set goals for their learning and monitor, regulate, and control their cognition, motivation and behaviour in accordance with those goals and the constraints of the environment (Pintrich & Zusho, 2002, p. 64). Effective formative feedback allows students to accurately identify useful goals and develop realistic action plans for their achievement. It is a foundation stone for engagement and transformative learning (Taylor, 2008) as it enables students to achieve agency over both the cognitive and affective aspects of their learning and learning processes (Nicol & Macfarlane-Dick, 2006).
Self-regulated learning competence is highly advantageous to university students. Self-regulated learners achieve better academic outcomes and learn more effectively than their peers (Pintrich, 1995; Zimmerman & Schunk, 2001). Further, self-regulated learners require less external support from teaching staff, peers and their families (Pintrich, 1995; Zimmerman & Schunk, 2001). In the Australian university context, where the student population is increasing in size and diversity and fiscal pressures have decreased the time and space available for teaching staff and students to interact face-to-face (Nicol, 2010), self-regulated learning competence is likely to become increasingly important.
The Australasian Survey of Student Engagement (Radloff & Coates, 2009, p. 22) recently found that up to fifty per cent of university students have never spoken directly with teaching staff about the outcomes achieved on an assessment task. This finding indicates that written feedback carries a heavy burden within the modern university. More critically, it suggests that the feedback students receive and their ability to use it to self-regulate their learning will have a strong impact on student progression and retention. Assessment feedback is influential in determining how students experience their attempts to succeed in the system. This experience cannot be separated from the broader university experience and therefore is a component of how universities 'understand students' experiences of alienation and engagement' (Case, 2008, p. 330).
Based on a synthesis of research literature about formative assessment and self-regulated learning, Nicol and Macfarlane-Dick (2006, p. 205) assert that good feedback practice conforms to seven principles. Good feedback practice:

1. helps clarify what good performance is (goals, criteria, expected standards);
2. facilitates the development of self-assessment (reflection) in learning;
3. delivers high quality information to students about their learning;
4. encourages teacher and peer dialogue around learning;
5. encourages positive motivational beliefs and self-esteem;
6. provides opportunities to close the gap between current and desired performance;
7. provides information to teachers that can be used to help shape teaching.
Since its inception in 2008, the teaching and learning processes of OnTrack have reflected constructivist understandings (Vygotsky, 1986) that seek to engage students as agents of their own transformation. Specifically, OnTrack assessment practices have been informed by research that demonstrates the value of formative assessment as a learning tool (Butler, 1987, 1988). This guiding frame is outlined for students in the following way:
OnTrack is informed by research that indicates the value of using assessment as a learning tool. When assessment focuses on what the student knows (and does not know), rather than on what mark they should be given, it is possible for the academic to concentrate on providing quality feedback. In turn, when it is not the mark that counts, students can more readily assimilate constructive criticism and use it to develop their academic and reflective practices (Gibbs, 2012, p. 13).

Students who successfully complete OnTrack are offered admission into their chosen program of study at Murdoch University. To successfully complete OnTrack a student must fulfil the following three criteria:
An analysis of OnTrack enrolment and progression data provided support, albeit inconclusive, for these accounts. Over the four-year period between 2008 and 2011 approximately one third (32.3%) of students requested extensions to the due dates for tasks within the initial cycle of assessments and almost one third of the cohort (30.8%) withdrew during the same period. In evidence of the second concern, teaching staff expressed frustration at the need to constantly repeat similar feedback on subsequent iterations of each task whilst observing little change in overall task performance or modification of learning strategies. Again, analysis of OnTrack teaching records and progression data provided some, also inconclusive, evidence in support of this perception. Over the same four-year period fewer than two thirds (60.1%) of students demonstrated sufficient outcomes to be offered admission into the University, while an additional 7.9 per cent of students completed all of the required assessment tasks but failed to achieve an aggregate of 50 per cent and were, as a result, not offered admission into the University.
In 2011, as the new unit coordinator, I also observed divergent understandings with regard to assessment criteria and performance standards amongst the teaching staff. This divergence was particularly evident during assessment moderation activities. Staff frequently disagreed over what constituted assessment criteria, the relative importance of specific criteria and the extent to which each criterion must be addressed for each of the five quality levels (high distinction, distinction, credit, pass and fail) to be reached. At many meetings such discussions were both passionate and protracted.
The assessment criteria and descriptors of quality standards used in the rubrics were derived from the previously established weekly learning objectives of the OnTrack program. The selection, development and refinement of criteria and standards were undertaken over a period of eight weeks. Critical and/or significant learning objectives were first selected from the published unit materials. These objectives were re-written as assessable criteria that could be referenced to tangible evidence, such as an identifiable behaviour or demonstrable skill. The assessable criteria were then arranged into a coherent developmental sequence. Finally, the wording of each criterion was reviewed to ensure that concepts and vocabulary were used consistently, both within the rubrics and in alignment with the course materials, and that they used common language to the greatest extent possible. Once the rubrics were developed, all teaching staff in OnTrack were invited to provide feedback on the structure of the rubrics as well as the assessment criteria and descriptors of quality standards contained in each.
Figure 1: The dialogic feedback cycle
Teaching staff underwent training specific to the implementation of the assessment rubrics and the dialogic feedback cycle. Many of the teaching staff had been involved in the development of the rubrics and as a result were optimistic about their use as an assessment tool. The concept of a dialogic cycle was more novel and was positioned as a useful process for acknowledging and formalising the kinds of conversations previously conducted informally with individual students or small groups of students.
The dialogic feedback cycle consists of five parts. It begins with a tutorial session in which teaching staff and students 'unpack' the details of the assessment task. During this session both the task description, which is articulated in the unit guide, and the assessment rubric, which is available for download from the OnTrack webpage, are deconstructed and discussed. The discussion focuses particularly on developing a shared understanding of the concepts and vocabulary used in the assessment criteria to be applied to the submitted work. The aim is for students and staff to jointly explicate the specific skills and understandings that are to be assessed and the means, extent and standard to which those skills and understandings must be demonstrated for the work to fulfil the pre-determined levels of quality.
Having constructed a comprehensive and explicit understanding of what is required and how it will be evaluated, the students complete the assessment task and submit it for assessment by the staff member with whom they jointly negotiated their shared understanding. While students wait to receive feedback they are asked to self-evaluate their submission using the appropriate assessment rubric.
One week after the submission date the teaching staff and students undertake a second tutorial in which the assessment feedback is 'unpacked'. For the first part of the tutorial the students work individually to compare and analyse the assessment feedback generated through the use of the rubric: both the feedback provided by the assessor and their own self-assessment. Students begin by comparing their self-assessment with that of the assessor; specifically, they note where the assessment feedback is similar and where it differs. The aim is to identify elements of the criteria or quality standards that require further discussion. Students then continue to work individually to create an action plan that identifies one specific achievement and two specific, measurable goals and the actions required to accomplish them.
During the second part of the tutorial students work in small groups to discuss the elements of the criteria or quality standards that they identified as being in need of further discussion and to share their action plans. Students work collaboratively to refine their learning and performance goals and to identify alternative ways of accomplishing them. Each group then feeds back a selection of their learning goals and accompanying actions to the whole class. Finally, each group highlights to the tutor any elements of the criteria or quality standards that require further attention during the initial tutorial of the next cycle.
Following the second tutorial students may annotate an assessment rubric for the next related assessment task by highlighting the specific assessment criteria they are planning to address through their action plan. The annotated rubric is later attached to the assessment task when it is submitted by the student and is ultimately used by the staff member to assess the submission. This process allows each student to clearly communicate their aims and foci to the assessor and allows the assessor to focus their feedback on the needs identified by the student.
Table 1: OnTrack progression and retention outcomes, 2008-2011 compared with 2012-2013

| Measure | 2008-2011 (n=866) | 2012-2013 (S1 only) (n=561) |
| Progression rate beyond initial assessment tasks | 69.2% | 91.7% |
| Extension request rate for initial assessment tasks | 32.3% | 3.9% |
| Retention rate to end of OnTrack | 68.0% | 82.5% |
| Progression to University entry | 60.1% | 77.2% |
A post-project analysis of OnTrack enrolment and progression data showed that student requests for extensions to the due dates of tasks within the initial cycle of assessments decreased by 28.4 percentage points while the progression rate beyond the initial assessment tasks increased by 22.5 percentage points. These data are understood to indicate that the process enabled students to better understand what was required of them for high quality performance on the initial assessment tasks. Additional post-project analysis showed that the retention rate within the OnTrack program increased by 14.5 percentage points and the progression rate to University entry increased by 17.1 percentage points. These data suggest that the project supported an overall improvement in students' engagement and achievement.
The introduction of assessment rubrics generated staff and student responses that parallel those reported by Reddy and Andrade (2010) as common in higher education institutions. On the whole student responses were positive. Unsolicited emails highlighted that many students valued the transparency inherent in the use of rubrics. Specifically, they noted that the rubrics enabled them to understand what was being assessed and how it was being assessed, and that this allowed them to focus the energy they put into improving their work. Teaching staff responses were more restrained but generally positive. Some staff required time to expand their conception of assessment and the role of feedback and initially saw the rubrics as overly detailed and constraining. Others revelled in the opportunity provided by the rubrics to efficiently generate consistent, objective formative feedback.
The introduction of the dialogic feedback model produced more conflicted responses. Even though the theoretical model is embedded in the teaching materials, staff value and commit to these activities differentially. As both staff and students have become familiar with the territory of assessment focused dialogue and begun to interact with the concepts, more open and nuanced understandings have emerged. The one component of the dialogic feedback model that enjoyed unanimous support by staff from the outset was its focus on having students use feedback to improve their skills. Staff voiced pleasure in knowing that the feedback they provided was acted upon and in being provided with clear information about how to tailor their comments.
This paper has demonstrated that access to assessment rubrics that clearly articulate the criteria by which achievement on assessment tasks (at a range of levels) is evaluated creates a space in which teaching staff and students are able to establish dialogue focussed on developing a shared language and understanding of assessment. Additionally, it has shown that when such dialogue is embedded within a cycle that creates a closed loop of feedback provision and utilisation it is possible to increase student engagement and achievement. Such reciprocal and collective involvement is important as it allows goals and performance standards to be established and made explicit (rather than remaining tacit) and engenders higher levels of agency and motivation through mutual ownership of the outcomes. These elements are crucially important for commencing university students who are more vulnerable to the cognitive, behavioural and affective impacts of assessment feedback. The outcomes of this project suggest that the use of assessment rubrics within a dialogic feedback cycle may provide a robust process for facilitating student engagement and transformative learning by enabling and supporting the development of self-regulated learning competencies.
Bloxham, S. (2009). Marking and moderation in the UK: False assumptions and wasted resources. Assessment & Evaluation in Higher Education, 34(2), 209-220. http://dx.doi.org/10.1080/02602930801955978
Bloxham, S., Boyd, P. & Orr, S. (2011). Mark my words: The role of assessment criteria in UK higher education grading practices. Studies in Higher Education, 36(6), 655-670. http://dx.doi.org/10.1080/03075071003777716
Bloxham, S. & West, A. (2007). Learning to write in higher education: Students' perceptions of an intervention in developing understanding of assessment criteria. Teaching in Higher Education, 12(1), 77-89. http://dx.doi.org/10.1080/13562510601102180
Burke, D. (2009). Strategies for using feedback students bring to higher education. Assessment & Evaluation in Higher Education, 34(1), 41-50. http://dx.doi.org/10.1080/02602930801895711
Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interests and performance. Journal of Educational Psychology, 79(4), 474-482. http://dx.doi.org/10.1037/0022-0663.79.4.474
Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and involvement. British Journal of Educational Psychology, 58(1), 1-14. http://dx.doi.org/10.1111/j.2044-8279.1988.tb00874.x
Carless, D., Salter, D., Yang, M. & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. http://dx.doi.org/10.1080/03075071003642449
Carr, S. E., Siddiqui, Z. S., Jonas-Dwyer, D. & Miller, S. (2013). Enhancing feedback for students across a health sciences faculty. In Design, develop, evaluate: The core of the learning environment. Proceedings of the 22nd Annual Teaching Learning Forum, 7-8 February 2013. Perth: Murdoch University. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2013/refereed/carr.html
Case, J. M. (2008). Alienation and engagement: Development of an alternative theoretical framework for understanding students learning. Higher Education, 55(3), 321-332. http://dx.doi.org/10.1007/s10734-007-9057-5
Department of Employment Education and Training (1990). A fair chance for all: Higher education that's within everyone's reach. Canberra: Department of Employment Education and Training.
Duncan, N. (2007). 'Feed-forward': Improving students' use of tutors' comments. Assessment & Evaluation in Higher Education, 32(3), 271-283. http://dx.doi.org/10.1080/02602930600896498
Flint, N. & Johnson, B. (2011). Towards fairer university assessment: Recognizing the concerns of students. London: Routledge.
Gibbs, G. (2006). How assessment frames student learning. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Gibbs, G. (2012). OnTrack (EQU060): Unit information. Perth: Murdoch University. http://handbook.murdoch.edu.au/units/details/?unit=EQU060&year=2014
Gore, J., Ladwig, J., Elsworth, W. & Ellis, H. (2009). Quality assessment framework: A guide for assessment practice in higher education. ALTC Grant Report: The University of Newcastle. http://www.olt.gov.au/system/files/resources/QAF%20FINAL%20doc%20for%20print.pdf
Hattie, J. (1987). Identifying the salient facets of a model of student learning: A synthesis and meta-analysis. International Journal of Educational Research, 11, 187-212.
Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. http://dx.doi.org/10.3102/003465430298487
Higgins, R., Hartley, P. & Skelton, A. (2001). Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269-274. http://dx.doi.org/10.1080/13562510120045230
Hounsell, D. (2007). Towards more sustainable feedback to students. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education (pp. 101-113). London: Routledge.
Krause, K., Hartley, R., James, R. & McInnis, C. (2005). The first year experience in Australian universities: Findings from a decade of national studies. http://www.cshe.unimelb.edu.au/research/experience/docs/FYEReport05KLK.pdf
Lea, M. & Street, B. (2000). Student writing and staff feedback in higher education: An academic literacies approach. In M. Lea & B. Stierer (Eds.), Student writing in higher education (pp. 32-46). Buckingham: SRHE and Open University Press.
Nicol, D. J. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. http://dx.doi.org/10.1080/02602931003786559
Nicol, D. J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090
Pintrich, P. (1995). Understanding self-regulated learning. San Francisco: Jossey-Bass.
Pintrich, P. & Zusho, A. (2002). Student motivation and self-regulated learning in the college classroom. In J. Smart & W. Tierney (Eds.), Higher education: Handbook of theory and research (Vol. XVII). New York: Agathon Press.
Poulos, A. & Mahony, M. J. (2008). Effectiveness of feedback: The students' perspective. Assessment & Evaluation in Higher Education, 33(2), 143-154. http://dx.doi.org/10.1080/02602930601127869
Radloff, A. & Coates, H. (2009). Doing more for learning: Enhancing engagement and outcomes. Australasian survey of student engagement. Melbourne: ACER. http://www.acer.edu.au/documents/aussereports/AUSSE_2009_Student_Engagement_Report.pdf
Reddy, Y. M. & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. http://dx.doi.org/10.1080/02602930902862859
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144. http://dx.doi.org/10.1007/BF00117714
Taylor, E. (2008). Transformative learning theory. New Directions for Adult and Continuing Education, 119, 5-15. http://dx.doi.org/10.1002/ace.301
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago: University of Chicago Press.
Vygotsky, L. (1986). Thought and language. Cambridge, MA: MIT Press.
Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment & Evaluation in Higher Education, 34(1), 67-78. http://dx.doi.org/10.1080/02602930801895752
Walker, S. & Hobson, J. (2013). Interventions in teaching first-year law: Feeding forward to improve learning outcomes. Assessment & Evaluation in Higher Education. http://dx.doi.org/10.1080/02602938.2013.832728
Weaver, M. (2006). Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education, 31(3), 379-394. http://dx.doi.org/10.1080/02602930500353061
Zimmerman, B. & Schunk, D. (2001). Self-regulated learning and academic achievement: Theoretical perspectives. New Jersey: Lawrence Erlbaum Associates.
Please cite as: Gibbs, G. (2014). Dialogue by design: Creating a dialogic feedback cycle using assessment rubrics. In Transformative, innovative and engaging. Proceedings of the 23rd Annual Teaching Learning Forum, 30-31 January 2014. Perth: The University of Western Australia. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2014/refereed/gibbs.html
© Copyright 2014 Gael Gibbs. The author assigns to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format, provided that the article is used and cited in accordance with the usual academic conventions.