Abigail E Shaw



A3: Assess and give feedback to learners.

I work to the principles: “assessment should be authentic, accessible, appropriately automated, continuous and secure” (Jisc, 2020) (D2v, V1, V2, V4). As a Learning Technologist, I am responsible for upholding these principles by developing, delivering training on, and advising on the university’s digital assessment policy, as well as by supporting the construction of appropriate assessments (Evidence 1). I provide training for academic staff, professional staff and students in the effective use of a range of assessment platforms, currently:

  • Moodle (VLE; includes the Assignment activity, enabling submission of any digital file, plus Quizzes, Forums and extensive feedback options)
  • Turnitin (used for submission of all academic written work; generates an Originality Report itemising text-matching)
  • Grademark (via Turnitin, enables rubrics and written feedback)
  • Mahara (digital portfolio software)

I led a Turnitin upgrade in 2018: I established new default submission standards, designed and completed user and scenario testing, ensured the setup tackled common quality and logistical issues, and created documentation to support academic staff in using the new version (Evidence 6) (K2, K5).

I continue to revise advice as needs evolve, producing it in multiple formats and consistently re-evaluating best practice. After a decline in the suitability of Turnitin for certain assessments (e.g. multipart anonymous submissions, which are no longer supported), I have learnt not to rely on a single solution for assessment delivery, and have come to understand the importance of backups, alternatives and contingencies to ensure successful submission, accountable by all required metrics (e.g. anonymous submission, plagiarism-checking), in the event of software outage or feature demise (K2, K4, K6, V3).

Initially I was highly reactive in supporting assessment and feedback, investigating day-to-day issues via online discussion, other institutions’ support documentation and peer support. In 2018, St. Mary’s was moving from analogue to digital assessment: my understanding of the assessment and feedback requirements of staff and students alike expanded rapidly, supported by participation in both FADC (see A1) and programme boards (which incorporate written and verbal student feedback), allowing me to reflect and act on critical observations in numerous circumstances (Evidence 1, D2v, K1, K2, K3, K6, V3).

As programmes digitalised assessment, many retained ‘final essay’ assessments whilst also incorporating smaller, continuous tasks as formative exercises, leading to more, rather than improved, assessment (as observed by Gibbs, 2006). I have shifted to pro-actively supporting strategically-developed continuous assessment to ensure a more balanced approach (Evidence 2, Evidence 3). In programme design I begin with the required graduate outcomes, consider how these distil into modular learning outcomes, then discuss how these are demonstrated by tasks, activities or other types of assessment, following “constructive alignment” (Biggs and Tang, 2011) to ensure a rigorous and consistent depth of productive student engagement and experience, as “that which is assessed, gets taken seriously” (Knight and Yorke, 2014) (D2v, K3, K4, K5, V3).

As a result of increased exposure (e.g. Mahara workshops and one-to-ones for staff CPD (Evidence 7)), many programmes are revalidating with portfolio assessments requiring the compilation of multiple tasks carried out across the semester. These can “allow students to make claims to complex learning achievements” (Knight, 2005), and support the inclusion of a wider spectrum of autonomous and initiative-driven learning, developing students’ “ability to make their own decisions about what they think and do” (Boud, 1988).

It is a vital part of my role to support staff understanding of the constraints and benefits of available assessment platforms, for, whilst portfolios provide excellent opportunities, they are “notoriously difficult to assess reliably” (Knight, 2005) and require a rigorous marking plan or framework, established at programme level to university standards and applied universally by all markers (D2v, K4, K5, K6, V2, V3).

This concern for consistency and range of feedback extends across all assessment: whether working in design or consultation, I ensure staff are aware of the types of feedback possible and the ways to give students what is required to help them progress, whilst satisfying both internal and external requirements, ensuring both staff and students see value in the feedback received (Bailey and Garner, 2010). Feedback types I support include:

  • Annotating / in-text comments
  • Text box comments
  • Rubrics
  • In-person/Skype tutorials
  • Cumulative comments (portfolios, forums)
  • Video
  • Audio
  • Screen capture
  • Grading sheets
  • Feedback files

Being involved with software user communities allows me to participate directly in user development and feedback, and keeps me aware of forthcoming releases, ensuring I promote usage from an informed standpoint. The next Mahara release provides significant technical enhancements, e.g. enabling assessment templates to be designed ahead of time and pushed directly to the user, greatly reducing take-up time for new students and allowing a wider range of courses to adopt Mahara. This, in turn, contributes to the TEL Team’s strategic goal for 2020/21: increased, improved usage of existing services (D2vi, A4, K1, K2, V3).

References

Bailey, R. and Garner, M. (2010) ‘Is the feedback in higher education assessment worth the paper it is written on? Teachers’ reflections on their practices’, Teaching in Higher Education, 15(2), pp. 187-198. DOI: 10.1080/13562511003620019

Biggs, J. and Tang, C. (2011) Teaching for Quality Learning at University. 4th edn. McGraw-Hill Education, UK.

Boud, D. (1988) Developing Student Autonomy in Learning. 2nd edn. Kogan Page.

Gibbs, G. (2006) Innovative Assessment in Higher Education. Routledge.

Jisc (2020) The future of assessment: five principles, five targets for 2025. Available at: https://repository.jisc.ac.uk/7733/1/the-future-of-assessment-report.pdf (accessed 20 February 2020).

Knight, P. and Yorke, M. (2014) Employability: judging and communicating achievements. HEA Learning and Employability Series. Available at: https://www.advance-he.ac.uk/knowledge-hub/employability-judging-and-communicating-achievements (accessed 23 February 2020).