Context & Scale
This pattern describes an assessment structure that supports sustained participation habits in students. Educators can scaffold and encourage students to prepare for assessments and to collaborate with peers, with the aim of developing self-regulatory skills for learning. Components of this assessment can potentially be marked automatically, making the structure suitable for large cohorts.
We know that student engagement is strongly correlated with learning success (Krause & Coates, 2008). Authentic content and assessment can go some way towards stimulating student engagement. However, we often don’t know how well students have engaged with content, or what their assessment preparation has been like, until they submit their first assessment. Inexperienced students may not have the skills to manage complex assessment structures that take sustained effort to complete. As students are often motivated by assessment requirements (Rust, 2002), this pattern scaffolds and rewards habits that generate sustained participation.
This pattern describes an assessment that marks students on their timely completion of a set of authentic activities that build towards completing their formal assessments. The assessment does not consider the content of the activities – just that they have been completed on time.
There are two parts to this pattern – deciding on the participation artefacts, and communicating the structure and its intent to students.
Identifying authentic participation artefacts requires you to identify what students should be doing regularly to thrive in the course. For instance, if there is a final reflective portfolio, then regular reflections should be included. If there is group work, regular artefacts such as meeting minutes or personal action lists arising from meetings should be included. For a research assessment, collaborative brainstorming of ideas on a discussion board, collaborative annotation of bibliographies, and research journaling could all be considered. You might also consider including attendance at workshops, as activities that support success in assessments, such as group discussion and collaboratively working through examples, occur there. The content of these activities is not marked; only whether students have participated in the activity in a timely manner is marked. The activities should be clearly useful to students in their other summative assessment pieces, which also supports mapping of this assessment to learning outcomes.
It is essential to communicate very clearly, and frequently, why these interactions are being marked: students resent being asked to complete what they see as ‘busy work’, and it is often not clear to them until late in a teaching period how the pieces of work they complete fit together.
1. Examine the formal assessment schedule to identify regular activities students should be undertaking to prepare for their assessments.
2. Choose three to five activities to include in the assessment. Activities should be simple, such as writing a one-line reflection, adding a post to a discussion board (without mandated replies), or adding a note to a shared document. If there is a group assessment, include some form of collaboration activity.
3. Choose how often activities should be completed. If weekly, consider giving full marks for completion in 75% or more of weeks, allowing students some agency in time management. If a weekly activity will be closed off to support regular completion, don’t enforce this in the first few weeks, so that students may gradually form good habits without penalty. Not all components need to have the same timing schedule.
4. Deploy supporting technology (e.g., discussion board, reflection journals, blog, peer review software, reading annotation software). Ideally, a timestamp and activity indicator should be readily available from the technology to allow marking at scale. You can also consider closing the activity once the suggested time for completing it has passed, to encourage the development of time-management skills.
5. Communicate the structure and intent of the assessment clearly and often to students – on the LMS, in announcements, and in tutorials – with reminders each time activities are due.
6. Allow a small amount of time in tutorials, especially in the first weeks, for completion of activities, to start modelling participation habits and to support self-regulation.
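The timely-completion rule described in the steps above (full marks for on-time completion in 75% or more of weeks) can be sketched in code. This is a minimal illustration only, not a feature of any specific LMS or of SRES; the record fields (`submitted_at`, `deadline`) and the pro-rata fallback below the threshold are assumptions.

```python
from datetime import datetime

# Minimal sketch of threshold-based timely-completion marking.
# Each record represents one week's activity for one student:
# "submitted_at" is an ISO timestamp ("" if never submitted) and
# "deadline" is that week's cut-off. Field names are illustrative.

def timely_fraction(records):
    """Fraction of weeks in which the activity was completed on time."""
    if not records:
        return 0.0
    on_time = sum(
        1
        for r in records
        if r["submitted_at"]
        and datetime.fromisoformat(r["submitted_at"])
        <= datetime.fromisoformat(r["deadline"])
    )
    return on_time / len(records)

def component_mark(records, max_mark=4, threshold=0.75):
    """Full marks at or above the threshold; pro-rata below it (an assumption)."""
    frac = timely_fraction(records)
    return max_mark if frac >= threshold else round(max_mark * frac / threshold, 1)
```

With a 12-week teaching period, for example, a student who submits on time in 9 or more weeks would receive the full component mark.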
Examples of the pattern in use
Succeeding in a Post-Crisis World (OLES2210) Semester 1 2023
We acknowledge unit coordinator Robyn Martin for her work in developing this assessment form.
This pattern was evaluated in one semester of an Open Learning Environment unit (Succeeding in a Post-Crisis World). Aggregated student data was analysed, including analytics on each component of the assessment, and the text content of reflection posts, discussion board, and group feedback comments. Two focus groups (N=6 and N=3) were also held.
The unit had 595 students; most attended face-to-face tutorials, while some chose online tutorials. All course content was online.
This unit aims to develop leadership and collaborative skills in students. The full assessment structure included a group presentation and a final reflective portfolio. The sustained unit participation assessment was worth 20% of the final grade.
The sustained participation assessment components were (each worth 4 marks):
1. Content Reflection – a short weekly reflection on course materials. Develops self-regulation and a connection to content; closed off at the end of each week.
2. Workshop Reflection – a short weekly reflection on tutorial activities. Relates to engagement with workshop content, tutors and fellow students; closed off at the end of each week.
3. Discussion Board Contributions – a regular contribution to the course discussion board to encourage cross-cohort engagement.
4. Workshop Participation – a tutor-entered mark covering both attendance and engaged behaviour in class.
5. Group Member Interactions – a mark allocated by students to team members in their (separate) group assignment.
Technology and resources
The Learning Management System (LMS) was Canvas.
Components 1, 2 and 5 of the sustained participation assessment were enacted through the Student Relationship Engagement System (SRES). This allowed automatic marking of these components, feeding straight into the Grade Book.
The discussion board was held on Ed Discussion, which allowed for more discussion options than the native Canvas discussion board. Analytics were downloaded to calculate this component.
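As a rough illustration of how a downloaded analytics export might be turned into a per-student participation measure, the sketch below counts the distinct weeks in which each student posted. The record fields (`user_email`, `week`) are assumptions about the export format, not Ed Discussion’s actual schema.

```python
from collections import defaultdict

# Illustrative aggregation of a post-level analytics export.
# Each record is one post; field names are assumed, not Ed Discussion's schema.

def weeks_posted(records):
    """Map each student to the set of distinct weeks in which they posted."""
    by_student = defaultdict(set)
    for rec in records:
        by_student[rec["user_email"]].add(rec["week"])
    return by_student

def weekly_participation(records, total_weeks):
    """Fraction of teaching weeks in which each student posted at least once."""
    return {
        student: len(weeks) / total_weeks
        for student, weeks in weeks_posted(records).items()
    }
```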
An SRES field that grouped students into tutorial classes was set up in Canvas to allow tutors to quickly enter engagement marks for students in the fourth component at the end of semester.
The assessment was supported by a rubric and was referred to in the slide deck for every tutorial.
Time was given in each tutorial (around ten minutes) to complete reflections in the first four weeks.
The reflective posts were to be used as raw material for the final reflective portfolio. Full marks for these components could be gained if students completed them on time in 75% or more of the weeks, and students had to enter a minimum of one word to gain the mark. Despite initial fears that this requirement would be gamed, or that generative AI would be used to create entries, the evaluation showed that this was not the case and that students engaged strongly with these components. Student focus groups revealed that students appreciated the structure of these components and could see the value in them. Student marks for these two reflective components were strongly correlated with their total unit marks.
The discussion board post requirement did not relate to a specific assessment but was promoted as a way of engaging students across the cohort. This was the least successful component. Despite the mark attached to it, only 10% of students scored full marks on this component. Focus groups showed that students did not feel that this was an authentic activity in the course, and, despite the stated intent, students thought that the cohort was too big to engage with in this way. This component did not correlate with students’ total unit marks, and we do not recommend using it without significant redesign.
The tutor participation mark was entered by tutors at the end of semester and necessarily had some relation to attendance. It correlated moderately with total unit marks.
Students were given time in tutorials to enter a mark for each of their group members directly after their group presentations. They were also given the option to add a supporting comment, knowing that these comments would only be read by tutors. An overwhelming majority of students not only graded their teammates but also entered a supporting comment, showing strong engagement with this activity. This component also correlated strongly with the total unit mark.