Context & Scale
This pattern is useful in the context of any unit with formative or summative assessment. Providing students with rubrics is best practice for all assessment; however, rubrics are particularly important for the successful implementation of authentic assessment types. This is because authentic assessment types often require complex and divergent responses from students, meaning clear standards are needed for equity in marking and to develop students’ evaluative judgements (Villarroel et al., 2018). This pattern relates to CLaS principle #3 of relevant and authentic assessment and feedforward.
This pattern supports learning and teaching at scale by making the process of providing more detailed and explicit feedback easier for teachers and clearer for students. The use of online rubrics integrated into an LMS can support teachers to provide more detailed feedback more quickly when marking large quantities of assignments and improve consistency in marking across a team.
The provision of online rubrics with detailed criterion descriptors for performance at each grade level implemented in an LMS supports more efficient marking processes for teachers and feedback to students. By providing criterion descriptors for each grade level, students gain clarity around assignment expectations and what they need to demonstrate.
The use and evaluation of outcome-based rubrics for higher education assessment, particularly for authentic assessment (Villarroel et al., 2018), are well discussed in the higher education literature. Drawing on Popham’s (1997) foundational work defining the design elements of rubrics, Dawson (2017) provides a useful guide for a replicable rubric design process.
Research shows that rubrics support core elements of quality assessment design, including learning outcome alignment (Biggs & Tang, 2011), providing students with clarity and consistency of assessment expectations (Andrade & Du, 2005; Brookhart & Chen, 2015; Jonsson, 2014), as well as supporting teacher feedback to students (Reddy & Andrade, 2010).
Problem
Large cohorts make it difficult to communicate assignment expectations clearly, explicitly and consistently to students, and to give consistent feedback at scale, especially in support of authentic assessment. It is challenging for teachers to provide quality feedback to a large number of students using manual processes, and to achieve consistency in marking across a large teaching team.
Solution
Use detailed online rubrics, with assessment criteria and associated grade level descriptors, implemented in an LMS.
Implementation
Before beginning your rubric design, make sure you’ve undertaken alignment of assessment criteria with broader unit learning outcomes, program level learning outcomes and university-wide graduate qualities or abilities. This means there is a logical mapping between what you’re expecting students to learn and how their performance is being assessed. This should be done at the stage when the overall unit assessment design is proposed and structured.
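Where this mapping is captured as data (for example, exported from a curriculum mapping spreadsheet), a quick consistency check can flag criteria that do not trace back to any learning outcome. The sketch below is illustrative only: the criterion names and outcome codes are hypothetical placeholders.

```python
# Hypothetical mapping from draft assessment criteria to unit learning
# outcomes (ULOs); the names and codes are illustrative placeholders.
criterion_to_outcomes = {
    "Argument": ["ULO1", "ULO3"],
    "Data visualisation": ["ULO2"],
    "Collaboration": [],  # drafted, but not yet mapped to an outcome
}

# Flag any criterion that is not aligned to at least one learning outcome.
unmapped = [name for name, ulos in criterion_to_outcomes.items() if not ulos]
if unmapped:
    print("Criteria with no mapped learning outcome:", unmapped)
```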
Choose a type of rubric. There are different types or styles of rubrics, including:
- analytic (sometimes called conceptual)
- holistic
- checklist.
While each has advantages and disadvantages, this pattern focuses on analytic rubrics because they support more detailed student feedback and more consistent grading across markers (Zimmaro, 2004). This resource provides further information on choosing the right rubric for your assessment needs.
An analytic rubric should describe three main components, usually laid out in a matrix grid: outcome-based evaluative criteria; descriptors of each criterion; and performance or grading standards (Popham, 1997; Dawson, 2017).
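To make the matrix structure concrete, the sketch below models an analytic rubric as plain data: criteria as rows, grade levels as columns, and a descriptor in each cell. The criterion names, grade labels and descriptor text are hypothetical placeholders, not part of the pattern itself.

```python
from dataclasses import dataclass

# Performance standards (grade levels), ordered from highest to lowest;
# HD/D/CR/P/F is one common Australian grading scale.
GRADE_LEVELS = ["HD", "D", "CR", "P", "F"]

@dataclass
class Criterion:
    name: str                 # one assessable skill or capacity
    weight: float             # relative weighting within the rubric
    descriptors: dict[str, str]  # grade level -> descriptor of performance

# The rubric is a list of criteria; each cell of the matrix is a descriptor
# of performance on that criterion at that grade level.
rubric = [
    Criterion(
        name="Argument",      # hypothetical criterion
        weight=0.4,
        descriptors={
            "HD": "Critiques the evidence to build a persuasive argument.",
            "F": "Identifies some evidence; the argument is weak or absent.",
            # descriptors for D, CR and P would fill the remaining cells
        },
    ),
    Criterion(
        name="Data visualisation",  # hypothetical criterion
        weight=0.6,
        descriptors={
            "HD": "Visualisations are exceptionally clear and well chosen.",
            "F": "Visualisations are poorly chosen or misleading.",
        },
    ),
]
```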
Develop a draft rubric:
- Establish a set of outcome-based evaluative criteria for student performance in the assessment task. These criteria should be mapped to the relevant broader unit learning outcomes (Biggs & Tang, 2011).
- When structuring your rubric, ensure that each criterion relates to one assessable skill or capacity you are asking a student to demonstrate.
- Consider the weighting of the various criteria – are they evenly weighted, or are some more important than others? (See the scoring sketch after this list.)
- Draft a scaled description of performance for each grade level to show development across standards (e.g. from HD to F).
- Consider learning taxonomies to differentiate the progressive standards of achievement in each grade level description, from higher order through to lower order capacities, e.g. Bloom’s revised taxonomy (Krathwohl et al., 2001) and Biggs’ Structure of Observed Learning Outcomes (SOLO) (Biggs & Tang, 2011).
- Use language to differentiate the progressive standards of performance in each grade level description:
  - Use verbs, e.g. HD = “critiques”; F = “identifies”
  - Use adverbs, e.g. HD = “exceptionally”; F = “poorly”
  - Use adjectives, e.g. HD = a “persuasive” argument; F = a “weak” argument
- Consider the indicators within students’ work that can be used to differentiate between these levels of performance, e.g. what does ‘exceptional’ actually look like in the context of expected student work in this assessment, compared with ‘poor’?
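As a simple illustration of criterion weighting (referenced in the weighting step above), the sketch below combines per-criterion grades into a single weighted mark. The marks assigned to each grade band are hypothetical; institutions map grade levels to marks in different ways.

```python
# Hypothetical marks awarded for each grade level on a criterion.
GRADE_POINTS = {"HD": 95, "D": 80, "CR": 70, "P": 57, "F": 25}

# Hypothetical criterion weights; unevenly weighted, summing to 1.
WEIGHTS = {"Argument": 0.4, "Data visualisation": 0.6}

def weighted_total(grades: dict[str, str]) -> float:
    """Combine per-criterion grade levels into one weighted mark."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[criterion] * GRADE_POINTS[grade]
               for criterion, grade in grades.items())

# A marker awards a grade level per criterion rather than one holistic grade.
print(weighted_total({"Argument": "HD", "Data visualisation": "CR"}))  # 80.0
```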
Consult with tutors on the rubric design by distributing the draft and soliciting feedback. If possible, consult with students for their feedback and interpretation of the rubric.
Once the rubric is finalised, make it available online by building it into the LMS assignment for online marking.
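Most teams will build the rubric through the LMS web interface, but rubric creation can also be scripted. The sketch below is a rough illustration against the Canvas REST API: the endpoint and nested parameter names are assumptions that should be checked against the current Canvas Rubrics API documentation, and the URL, token and IDs are placeholders.

```python
import requests

# Placeholders: substitute your institution's Canvas URL, an API token,
# and real course/assignment IDs.
CANVAS_URL = "https://canvas.example.edu"
TOKEN = "YOUR_API_TOKEN"
COURSE_ID = 1234
ASSIGNMENT_ID = 5678

# Canvas uses form-encoded nested parameters; the exact keys below are an
# assumption and should be verified against the Canvas API documentation.
payload = {
    "rubric[title]": "Data project: individual component",
    "rubric[criteria][0][description]": "Argument",
    "rubric[criteria][0][points]": 40,
    "rubric[criteria][0][ratings][0][description]": "HD: critiques evidence persuasively",
    "rubric[criteria][0][ratings][0][points]": 40,
    "rubric[criteria][0][ratings][1][description]": "F: weak or absent argument",
    "rubric[criteria][0][ratings][1][points]": 10,
    # Attach the rubric to an assignment and use it for grading, which is
    # what enables rubric-based marking in SpeedGrader.
    "rubric_association[association_type]": "Assignment",
    "rubric_association[association_id]": ASSIGNMENT_ID,
    "rubric_association[use_for_grading]": True,
    "rubric_association[purpose]": "grading",
}

response = requests.post(
    f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/rubrics",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data=payload,
)
response.raise_for_status()
print(response.json())
```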
Tutors should be briefed on what each rubric descriptor means and what indicators in student work might look like, both to inform their communication with students and to support marking moderation. A tutor training session may be needed to cover how to use the online marking system in the LMS effectively and to share tips for efficient marking, e.g. the effective re-use of feedback comments. The rubric feedback could also be supplemented with additional personalised feedback at scale.
The rubric should be made available to students before they complete the assignment. Rubrics are a communication tool between the teaching team and students: ideally, students should be taken through the rubric in class to clarify their understanding. A tutorial activity where students practise applying the rubric to an exemplar can be helpful here. Student questions could also be addressed in a Live Q&A.
After the first round of marking with the rubric, solicit feedback from tutors and students. Rubrics are living, breathing documents – they should be developed and refined iteratively with each implementation.
Examples of pattern in use
Foundation in Data Analytics for Business
The approach was iteratively developed and implemented. We acknowledge the unit coordinators and lecturers involved from the Business Analytics discipline, including Anastasios Panagiotelis, Andrey Vasnev and Simon Loria.
Context
This pattern was implemented in a large postgraduate foundation unit in the Business Analytics discipline (QBUS5001) with a cohort ranging from 700 to 1,600 students across semesters. It was implemented in the context of broader developments in the unit, where traditional lectures were re-shaped into self-paced online modules. This re-design addressed inconsistencies in the learning experience caused by increasing student numbers. The unit re-design was undertaken using a co-design approach, initiated through a Connect:In Workshop with students, teachers, educational developers and learning designers. A theme that emerged from the workshop was that students felt they were learning statistics, but not statistical thinking. Students wanted more opportunities to apply the theory they were learning to business contexts. In response to this, and to support student learning around Unit Learning Outcomes that involved problem-solving, communication and collaboration, a new authentic data project was introduced, with an individual component that fed forward into a group component. The introduction of an authentic assessment opportunity into the unit supported CLaS Principle 3 of authentic assessment and feed-forward.
Teacher feedback was facilitated via online rubrics and this feedback was augmented through a peer review opportunity. The peer review was part of the feed-forward design, as students could use the peer feedback from the first, individual part of the assessment, as well as teacher feedback, and apply it directly in the second part of the assignment. Students used the online rubric to give their peer review feedback, so this design also supported their familiarisation with the process of applying a rubric to make evaluative judgements (Tai et al., 2018). The assessment was authentic because it required students to select and work with real data sets and to undertake the same kinds of tasks they would need to do as business analysts working in industry (Villarroel et al., 2018), such as producing data visualisations.
Before the re-design of the unit, the assessments were all exam or quiz type assessments. There had previously been an authentic type of assessment when the unit was smaller, but this was removed before the CLaS project due to growing cohort sizes, the difficulty of managing group work in very large units, and the workload involved in providing feedback. Whereas exams and quizzes can be marked automatically, reviewing student presentations and reports is more time-consuming for teachers. Members of the broader team of tutors were also more experienced in giving feedback on exam and quiz-type questions than on authentic assessment modes such as presentations and reports, which had not been the norm in the unit.
Implementation
To address this problem, the co-design team developed detailed online rubrics for both the individual and group components of the assessment. The educational developer on the team created a generic draft rubric, which the Unit Coordinator then developed into a new, more detailed version. The rubric was then distributed to the teaching team, and the head tutor collated feedback to share with the core co-design team, who developed the final version. The Unit Coordinator also developed exemplars of poorer and stronger assessment responses, which were provided online in Canvas along with the brief and rubric. A tutorial activity was designed where students evaluated the exemplar videos against the rubric before completing the assignment. This design helped to increase transparency and manage student expectations around each rubric criterion in relation to clear examples (O’Donovan, Price & Rust, 2004).
Technology/resources used
Addressing the challenge of scale, we leveraged the Canvas LMS to facilitate group work, peer review and feedback through the online rubric feature. The use of online rubrics in Canvas enabled teachers to use SpeedGrader for more efficient marking.
Findings
The new assessment design was evaluated through a student survey and a student focus group, receiving positive feedback. A theme in the feedback was that students appreciated authentic assessment tasks whose skills they could put into practice at work, or that were relevant to future work. While students found the feedback in the unit supported their learning, they noted that the timely release of online feedback was important. Feedback from the Unit Coordinator was also gathered through a semi-structured interview. The coordinator noted that the initial development of the rubric was time-consuming, but that the investment of time at the outset was worthwhile because it made things easier in the future. The coordinator also noted that the co-design process supported the development of the rubric through feedback and input from others.