
Remote Research: Student assessment and feedback

Jordan Marshall
UX York

--

In January 2020, York recorded the first UK Covid-19 case. Nobody could have foreseen what lay ahead of us, not least the digital transformation for students and staff in just one year.

Student assessment, grades and feedback are a critical part of the university experience: a process that recurs every semester, with heightened emotion for all involved.

With exams moved online, practical work becoming written essays and a reliance on software in scenarios never envisaged, it was time to reflect. How is this whole process working in academic departments? What do students think about it? Are the tools working? What does a post-Covid assessment and feedback experience look like?

Working collectively with Craig Adams, Charley Bayley, Patricia Bartley and Claire Richards, we set about answering some of these questions.

Preparation

To answer some of our questions, a research plan was formalised. Our focus was exploration, with the freedom to dive deep without constraints. The best way to facilitate this was a qualitative research approach, aimed at understanding the why and how behind user actions.

For those unfamiliar, UX research should not be mistaken for market research. UX research focuses on what people do, not what they say they do, think or feel, and is driven by observation and storytelling to understand individual behaviours and interactions. Market research looks at broader trends and conversion, and is commonly quantitative-led.

We focused on three departments: Management, TFTI and Psychology. Together they gave sufficient contrast in ways of working, the students enrolled and the internal processes that make it all work.

The original scope was to look at submission points, grades and feedback across formative and summative assessment. However, it was important to acknowledge that the student journey started far sooner. How did they find out about the assessment? When did they know the deadline? What tools are used in preparation for submission? To capture this more holistic, end-to-end journey, the research scope was broadened to cover the wider learning journey.

Software is a channel, not the focus

When software is involved, we naturally gravitate towards it. Yet activities often occur behind the scenes, hidden yet critical to accomplishing user goals. For example, we captured some administration tasks which occurred once per year; others were time critical, needing completion within a two-hour window. These dependencies showed the process was far more complex than it first appeared, yet they would have remained hidden had we focused on the submission software alone.

By treating this as a service design project early on, we could review the software objectively as one touchpoint within a far wider learning journey.

Adapting our approach by embracing our limitations

The Covid restrictions mandated fully remote sessions. We acknowledged that this approach had limitations compared to our preferred methods of contextual inquiry and shadowing: participants would rely more heavily on retrospective thinking to describe the journey and its key moments, risking inadvertent bias in our results. To mitigate this, we first delayed the research until students and staff entered the submission and marking period. Secondly, we insisted on seeing as much of the process as possible in the moment, for both staff and students. This involved screen sharing, document sharing, remotely shadowing interactions and, in some situations, being shown physical items within the participant's living space.

Remote discovery sessions

The recruitment

We initially recruited 16 students across the three departments, using pre-screeners to select across degree programmes, year of entry and home/international status. After allowing for drop-outs and no-shows, we engaged with 10 students using our mixed-method approach. Team members were encouraged to drop in throughout the sessions, allowing them to build understanding first hand.

In addition, we used contextual inquiry interviews with SMEs (subject matter experts) to understand the inner workings of the assessment and feedback process. This complemented concurrent work by Patricia, Charley and Craig to understand the administration, IT and academic perspectives.

In summary, we conducted over 32 hours of UX research with students and SMEs over a period of four weeks.

Remotely shadowing an internalised process with an SME

Making sense of the data

We collected over 1,620 minutes of video footage, 13 pages of written notes and a digital archive of resources used by staff and students, covering all three departments. How did we make sense of this data?

Affinity Mapping

We used thematic analysis to break the qualitative data down, code it and then find patterns and themes. We maintained the divide between departments to allow direct comparison.

The output was an affinity diagram. Adapting to remote working meant the first affinity sort into overarching themes was done individually, allowing the subsequent sessions to focus on collaboration: identifying the key insights together and cross-pollinating the key learnings from our research across the team.

Secondly, in the absence of whiteboards, post-it notes and sharpies, the team used Miro. With infinite space, it let us put all our data in one place that could be viewed at any time, and it was easier to read than handwritten post-its. However, Miro still has an unpleasant learning curve for both the familiar and the unfamiliar, so be prepared for some bumps along the way.

Affinity Diagram

User Journey Mapping

Data captured in the affinity diagram was repurposed into as-is user journeys, the aim being to visualise our data and communicate the student experience through storytelling to aid stakeholder understanding. This builds empathy with real students and shows how assessment and feedback is really experienced, removing assumption and guesswork and challenging internal preconceptions.

Although it is common practice, we avoided mapping a journey for each participant in this project. The scenarios followed a fairly generic path, with limited divergence throughout, so we focused on showcasing the most common path per academic department to create the overview we needed: a sensible compromise when timelines are tight.

User Journey Map — TFTI
User Journey Map — Management

Service blueprints

We also used the service blueprint, commonly regarded as the second half of the user journey. It follows the scenario in the user journey but focuses on the internal process behind it. The end result is a picture of how the experience is actually delivered.

Often captured for the first time, this process reveals unexpected interactions, workarounds and collaboration across functions. In combination with the user journeys, the team gets a genuine 360-degree understanding of how assessment and feedback is not only experienced but delivered.

In total we produced three service blueprints, one per department. These are normally created collaboratively in full-day workshops, and some can take weeks to complete. Significant Covid-driven changes in education and the start of the assessment period made securing collaboration time across departments very challenging, and full-day workshops over Zoom would have been too mentally exhausting. We therefore adapted, engaging SMEs individually in short meetings to populate the blueprints gradually.

On reflection, it was important that we remained flexible. This may not have been our preferred route, or the normal way of working, but it would have been equally wrong to refuse because the conditions were not to our liking.

Service Blueprint - TFTI Department
Service Blueprint - Psychology Department

What we learnt

We captured a wide range of insights, from student planning and submission through to the internal processes that make the whole thing tick. Here is a small selection of highlights.

  • Students' experience of assessment and feedback differs by department. A standardised process across the university does not exist.
  • The submission process differs between formative and summative assessment, requiring students to learn two submission methods.
  • Confirmation and preview steps after students submit an assessment are manual and easily ignored. Each year this results in a small number of students submitting the wrong assignment version or, in the most extreme cases, the incorrect assessment for that module.
  • Students are not receiving formative grades and feedback in a consistent way. We captured channels including email, in-class delivery, the student records system, shared drives, the VLE, notifications and other third-party communication platforms. Differences exist at department and module level.
  • The presentation of feedback differs by department and, for some, by module. For example, some students receive feedback sheets and inline comments within the submission; others never receive the assessment back after marking, getting a feedback sheet only.
  • The current process doesn't work well for creative submissions. Restrictions on file size, type and quantity have resulted in department-specific workarounds, creating the need for students to submit in multiple locations.
  • Plagiarism checks rely heavily on the markers. Use of tools to support plagiarism checks was limited in our study, for both staff and students.
  • The internal process is heavily dependent on Excel sheets, shared folder areas and manual administration. Workarounds and duplication of tasks and content were found throughout.
  • Students are not clear when grades and feedback are due back. This was driving student dissatisfaction, particularly when students believed grades were late even though they were still within the target turnaround.

This is a very small sample, somewhat restricted by what we can currently share. For anybody at the university who would like to know more, please get in touch.

Final thoughts

  • Remote UX sessions are easier to run, without the hassle of booking rooms, organising observation space and recruiting participants. It allowed us to book a participant and run the session the next day. Students also felt more relaxed, with sessions conducted in a space where they felt comfortable, at a time they selected. There were disadvantages: we had significantly more drop-outs and no-shows compared to in-person studies; the sessions often occur outside the context in which the product or service is normally used; and trying to build a relationship with participants over a screen is difficult, limiting the areas we could probe. In summary, remote sessions work well, but be prepared to adapt, acknowledge the limitations and watch out for screen fatigue.
  • Select the right participant, not just any participant. It can be tempting to select easy targets: in our case a student rep, the student union or the dreaded focus group. By putting extra effort in upfront to recruit the right people, you will extract far richer and more representative insights. Remember, just because someone is a student doesn't automatically mean they are right for your study.
  • Consider remote collaboration tools carefully. Although I was familiar with Miro, for example, others found its learning curve unhelpful, taking precious time away from an already tight schedule. Mural, Lucid and Jamboard may be better, depending on the audience.

--