Online exams: not just a question of invigilation

Bryn Jeffries
Grok Learning
Aug 13, 2020 · 9 min read

When COVID-19 hit universities at the start of 2020, there was a sudden shift to running exams online for students connecting remotely. The “COVID-19 Exam Software Survey 2020”, conducted by ACODE across every public university in Australia and New Zealand, revealed a wide range of technologies used to varying degrees of success.

The response was possible because the necessary technologies were already maturing. Besides being possible to take remotely, online exams are appealing because they can be delivered at large scale without the administrative burden of paper-based exams, or the occasional risk of mixed-up or lost papers. They can also be marked more efficiently (also online), and often automatically. Students and markers alike may prefer typed submissions to illegible handwriting.

Much of the concern about online exams has centred on preventing cheating by using remote proctoring technologies such as ProctorU. But online exams present more challenges than are generally appreciated. This article looks at the wider issues, is intended as a primer for anyone thinking about running an exam online, and ends with some suggestions for approaches that don't require remote proctoring. Since our own Grok Learning platform also had to make some rapid changes to support exams in code-based subjects, I'll briefly explain how its capabilities fit into these considerations, but for the most part this article applies to online exams in any subject.

Preventing Student Malpractice

In an online exam, a student accesses the exam using a computer. The exam is hosted by a server on the internet, or “in the cloud”.

“Online exams” can cover a variety of assessment methods. The common characteristics are that the student uses software on a computer, and that the exam is delivered via the internet from a remote (e.g., cloud-based) system such as Canvas or another education platform. Typically the student connects to the exam system through a web browser. The exam could be taken on university computers on campus, or (popular during the COVID crisis) on a student’s own computer in a remote location such as their bedroom.

The following assumptions are typically made in an exam:

  1. The exam is attempted by the declared student; and
  2. The student only makes use of approved resources.

There are various forms of “malpractice” that a student could in principle use to violate these expectations, which I’ll go through in turn.

Participation by proxy

Instead of the intended student taking the exam, another person undertakes it, masquerading as the student.

Before the move to online exams, there were concerns that some students would pay for someone to attend an exam on their behalf. With remote exams such malpractice becomes far easier, since the student need only give their login details (e.g., username and password) to a proxy, who can then impersonate them.

Some remote proctoring services (discussed more later) protect against this by keeping a visual record of the student, as well as requiring the student to present a form of photo identification. This can be done without a professional service — a Zoom session with the student could suffice. However, it’s important to ensure that, having verified the student’s identity, you then verify that this is the same individual starting the exam. This can be done by monitoring the student’s desktop (more on this later), sometimes in conjunction with a password issued by the invigilator.

External participation by proxy student

A student can share their credentials with a proxy, who can then attempt parts of the exam at the same time as the student.

Even if steps are taken to verify the identity of a student, remotely or within a controlled exam room, it might still be possible for another person to log in as the student from another location and attempt the questions on the student’s behalf. This then circumvents any form of invigilation or remote proctoring, since only the student is observed.

Guarding against this form of malpractice therefore relies upon the exam service. If the exam is held in a designated location, it may be sufficient to restrict access to specific computers (e.g., by IP address). In the case of remote exams, the service can limit a student to a single connection session with the exam system — Grok Learning takes the latter approach by default. But it is important to note that students’ home networking configurations are diverse: computers often jump between multiple WiFi networks in the same home; some students may attend an exam while in transit using a mobile broadband connection; and many students have cause to use a VPN service that masks their true location. Defining what constitutes a “single connection” in such heterogeneous situations becomes a dark art.
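To make the single-session idea concrete, here is a minimal sketch of how an exam service might enforce one active session per student. The in-memory token store and function names are hypothetical, for illustration only; this is not Grok Learning’s actual implementation.

```python
# A minimal sketch of enforcing a single active exam session per student.
# The in-memory dict is hypothetical; a real service would use a database
# and tie tokens to signed session cookies.
import secrets

active_sessions: dict[str, str] = {}   # student_id -> latest session token

def start_session(student_id: str) -> str:
    """Issue a fresh token; any session already in progress is revoked."""
    token = secrets.token_urlsafe(16)
    active_sessions[student_id] = token
    return token

def authorise(student_id: str, token: str) -> bool:
    """Only honour requests that carry the latest token for this student."""
    return active_sessions.get(student_id) == token
```

With a scheme like this, a proxy logging in from elsewhere invalidates the student’s own session (or vice versa), making simultaneous participation visible rather than silent.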

Using unapproved external resources

The student gains an unfair advantage by making use of resources in their local environment, such as consulting textbooks, looking things up on their phone, or asking other people for help.

As with any exam, it’s possible for a student to make use of other resources to answer questions, but the opportunities are far greater when you don’t have control over the student’s environment. Not only can the student use textbooks and crib-notes, but they could even have people in the same room who can give advice. And with access to a smart-phone the student can ask a much wider community. Collusion, where two or more people work on a problem that should be attempted independently, fits mostly in this category.

Preventing this form of malpractice is one of the big roles of invigilators, who monitor students throughout their exam session. It’s therefore most easily managed by requiring students to undertake the exam at a specified location with controlled (invigilated) conditions.

If the student is able to undertake the exam at home, the only way to control for this risk is remote proctoring: monitoring the student and their local environment via, e.g., a web-cam. This can be done informally by teaching staff via videoconferencing software such as Zoom. Alternatively, companies such as ProctorU provide a service that scales up to large cohorts. Some students have argued that this approach is an invasion of privacy, and this must be weighed against the benefit of taking exams remotely. Be warned that a remote proctoring service won’t necessarily be compatible with your chosen online examination tools (such as Grok Learning’s) — for instance, some automated services require communication via an API to notify when an exam has started and ended.

It should be noted that there have always been ways for students to get around the constraints of regular invigilation, and there are naturally fewer guarantees from remote proctoring when a determined student has greater means to conceal resources.

Use of unapproved online resources

The student only uses their computer, but makes use of other online resources such as search engines and chat forums to answer questions in the exam.

Even with control of a student’s local environment, a student could potentially access unapproved remote resources using the very device on which they are attempting the exam. This may seem like an arbitrary distinction, but invigilation or remote proctoring that simply confirms a student is continually working at their computer may not reveal what that activity actually is.

Remote proctoring can address this issue if the service also monitors the student’s computer. Services such as ProctorU do this by requiring software to be installed on the student’s computer, which some have criticised as too intrusive. Such software does not need to remain installed permanently on the student’s computer, however.

If an online exam is run under controlled conditions (in a dedicated examination room, for instance), network access can be limited to specific resources through firewall technology. When the COVID-19 pandemic finally abates, many tertiary institutions may prefer to use this approach.
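As a rough illustration of the allowlisting idea (real deployments would use proper firewall or proxy infrastructure, not application code), here is a toy forward proxy that only permits requests to approved hosts. The host names are hypothetical placeholders.

```python
# A toy HTTP forward proxy that only allows requests to allowlisted hosts.
# Purely illustrative; the hosts below are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"exam.example.edu", "lms.example.edu"}

class AllowlistProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In forward-proxy mode the full URL arrives in self.path.
        host = urlsplit(self.path).hostname
        if host not in ALLOWED_HOSTS:
            self.send_error(403, "Host not permitted during the exam")
            return
        with urlopen(self.path) as upstream:   # fetch on the student's behalf
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), AllowlistProxy).serve_forever()
```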

Another alternative is to use a lockdown browser to control allowed activity on the student’s computer. Exams using the built-in quiz facilities of LMSs such as Canvas can use services such as Respondus LockDown Browser for this purpose, but these are not automatically compatible with external e-learning products such as Grok Learning, and institutional licences may only apply to specific applications. Other stand-alone tools such as Safe Exam Browser may be easier to support. However, all such tools face the same criticism as above: they require the student to install intrusive software on their computer.

The different forms of student malpractice described above cover the main issues that an examiner may want the tools used for conducting an online exam to address. To summarise:

  • A controlled invigilated environment or remote proctoring is required if you wish to verify a student’s identity and monitor their use of external resources;
  • Lockdown browsers, network restrictions, or other monitoring software (sometimes available as part of a remote proctoring service) are needed if you wish to monitor use of online resources;
  • The examination software also needs controls to limit how many people can submit answers on behalf of the student — technologies like remote proctoring and lockdown browsers cannot protect against this.

What about Plagiarism Detection?

Aside from remote proctoring, the other popular technology in the academic honesty arena is plagiarism detection. Although not specifically aimed at exams, it can still be used to deal with some of the forms of cheating that occur. It also provides a second layer of defence for the inevitable situation in which a student manages to slip past the gaze of an invigilator.

A plagiarism detection tool looks for similarities between a student’s work and other students’ submissions, or external content. Services such as Turnitin provide similarity reports for text documents, drawing upon a large corpus of documents to check that students have not plagiarised from other sources. Code-based submissions need specialised tools such as MOSS, which typically just search for similarities between submissions for the same task.
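To give a flavour of how code similarity can be measured, here is a toy comparison between two Python submissions, using identifier normalisation and k-token shingles. It is loosely inspired by fingerprinting tools like MOSS, but is far simpler than what such tools actually do.

```python
# A toy similarity measure between two Python submissions: normalise the
# token stream (so renaming variables doesn't help), then compare sets of
# k-token shingles. Real tools like MOSS use far more robust fingerprinting.
import io
import keyword
import tokenize

def normalise(source: str) -> list[str]:
    """Reduce code to tokens, mapping all non-keyword identifiers to 'ID'."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            tokens.append("ID")
        elif tok.type in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE):
            continue  # comments and line breaks shouldn't affect similarity
        else:
            tokens.append(tok.string)
    return tokens

def similarity(a: str, b: str, k: int = 5) -> float:
    """Jaccard similarity over k-token shingles of two submissions."""
    def shingles(toks):
        return {tuple(toks[i:i + k]) for i in range(len(toks) - k + 1)}
    sa, sb = shingles(normalise(a)), shingles(normalise(b))
    return len(sa & sb) / len(sa | sb) if sa or sb else 0.0
```

A high score only flags a pair for human review; as discussed below, it can never prove plagiarism on its own.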

Grok Learning’s internal tool takes this further and compares all the versions of a student’s code, since regular snapshotting captures the full history of the student’s work. The tool can also include comparisons to known external sources, to cover cases where students have made use of material from assignment cheating websites.

Plagiarism detection cannot prove actual plagiarism; it can only identify candidates that appear suspicious. Particularly for smaller code-based tasks, there is a reasonable chance of two students independently producing the same answer. Conversely, a student can copy another’s work and then modify it to mask the similarities. However, there are many situations where the evidence of plagiarism is compelling, for instance through distinctive comments or mistakes that would be unlikely to arise independently.

An important limitation of plagiarism detection is that it cannot detect cases of work being done by somebody else: there must be at least two submissions featuring similar work for a candidate case of plagiarism to be detected.

Doing without invigilation

As mentioned earlier, no remote proctoring service is perfect, even for the types of cheating it is intended to prevent. On the other hand, exams are stressful, and safeguards against cheating are only likely to increase this stress, potentially to the detriment of a student’s performance. So it’s worth considering whether there are lighter-touch alternatives. Here are a few ideas, based on conversations with several other lecturers.

  • Run an online exam without invigilation, and follow up with a short 1-to-1 viva voce interview to check that responses are consistent. This works especially well if the exam responses can be automatically assessed, so that staff time only needs to be invested in the interview.
  • Do without an exam altogether, and instead hold 1-to-1 code walkthroughs with each student, using a coding problem or the student’s project work.
  • While not a guard against many forms of cheating, serving a randomised selection of problems to each student helps to reduce collusion and increases students’ perception that the exam is being conducted securely (a sketch of this follows below).
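A randomised-but-reproducible paper is straightforward to generate: seed a random generator with something stable per student, so that the same student always sees the same questions. The question pool and student identifier below are hypothetical.

```python
# A minimal sketch of serving each student a randomised selection of
# questions, one per topic. Seeding with the student ID makes the paper
# reproducible for re-marking and appeals. Pool contents are hypothetical.
import random

QUESTION_POOL = {
    "loops":   ["q1", "q2", "q3"],
    "strings": ["q4", "q5", "q6"],
    "files":   ["q7", "q8", "q9"],
}

def questions_for(student_id: str) -> list[str]:
    """The same student always receives the same selection of questions."""
    rng = random.Random(student_id)   # deterministic per-student seed
    return [rng.choice(pool) for pool in QUESTION_POOL.values()]

print(questions_for("s1234567"))
```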

In conclusion

In this article I’ve set out some of the issues that should be considered to ensure that students participate in an exam without cheating, along with the technologies available to address them. I’ll stress that none of these technologies will prevent or capture all cases of malpractice, and some are viewed as too intrusive by some students (and teachers). An examiner, or the institution setting the policies for exams, must decide which forms of malpractice it needs to prevent, and to what extent. There are also ways to assess students without needing to detect cheating at all, and the whole teaching and learning experience may benefit from doing without it.

Grok Learning provides an online platform for teaching coding-based courses and conducting related exams and assignments. For more info, visit Grok Learning or contact uni-support@groklearning.com.

Bryn Jeffries
Grok Learning

Tertiary Product Manager for Grok Learning, helping lecturers to help students to learn better.