Plagiarism, proctoring and co-pilots: university assessment in the AI era

NAXN — nic newman
20 min read · Jun 19, 2024


Digital assessment in HE market map, by Emerge Education.

We’re building our annual list of the top emerging edtech companies in higher education for 2024, in collaboration with our Higher Education Edtech advisory board which is convened in partnership with Jisc and chaired by Mary Curnock Cook, CBE. As we do this, we’re diving into the trends and opportunities for tech-powered innovation along each step of the learner journey → from student recruitment to staff and student experience, teaching and learning, assessment and graduate employability.

In this fifth article, we’re shining a spotlight on assessment. Who could have predicted in early 2020 that the word ‘proctoring’ would become part of common parlance beyond universities, or that decades of assessment practice would be jettisoned overnight in favour of open book online exams?

Now, assessment is in the throes of potentially even greater disruption with the arrival of a new generation of AI tools. This time, the disruption may affect not only how assessment takes place, but also what is assessed, and why. What vision might we have for a future that offers not a quick fix but a managed transformation to a well-designed assessment system fit for the world today's students are heading towards?

The student journey in higher education.

Read on for:

  • Challenges, trends and opportunities, including our predictions for the transformative impact of genAI
  • Views from sector experts, plus tips for founders
  • A mini-market map of key players and top emerging startups in this space

Keywords: assessment, exams, essays, proctoring, academic integrity, plagiarism, AI detection

💡 Why it matters

The science fiction author William Gibson is often credited with saying ‘The future is already here — it’s just not very evenly distributed’. Apocryphal or not, the observation certainly resonates with the AI era in universities.

When it comes to assessment, a Higher Education Policy Institute survey of more than 1,000 UK undergraduates found that 53% were using AI to generate material for work they would be marked on. One in four is using applications such as Google Gemini or ChatGPT to suggest topics, and one in eight is using them to create content. Whether or not the intention is to cheat (evidence so far suggests not), this will change everything: how assessments are designed, how student outcomes must be validated and authenticated, and even what we consider to be cheating at all.

🏈 State of play

  • Unlike primary and secondary education, universities are free to define their own approaches to assessment, with each university able to innovate as suits its circumstances and mission in the context of changing expectations from learners, assessors, providers and employers.
  • In practice, this means digital assessment is an umbrella term used to describe a range of activities, from scanning and workflow of exam scripts through to use of simulation, virtual reality and AI in the grading process.
  • In March 2020, as the impact of Covid-19 and the lockdown measures put in place to contain it became clear, universities confronted a stark challenge: how do you transform long-established assessment processes, at speed and at scale? Hundreds of thousands of students, who had expected within a matter of weeks to sit in ranks in exam halls completing the pen-and-paper exams that would decide their final grades after three or more years of study, faced an uncertain future. Universities took a variety of approaches based on their current contexts, their goals for the immediate period, their institutional values and, for some, their longer-term digital assessment trajectory. Every approach involved trade-offs.
  • Since then, it has been a rollercoaster ride. The first six months of 2023 saw a shift in attitudes towards generative AI in higher education, from total bans on its use to the publication of the Russell Group’s AI principles, committing universities to ‘the ethical and responsible use of generative AI and to preparing our staff and students to be leaders in an increasingly AI-enabled world’. The five guiding principles state that universities will support both students and staff to become AI literate; equip staff to help students use generative AI tools appropriately; adapt teaching and assessment to incorporate the ‘ethical’ use of AI and ensure equal access to it; uphold academic integrity; and share best practice as the technology evolves. Within assessment, the sector is starting to see some creative uses of AI, but as pockets of experimentation and innovation rather than widespread changes to assessment design.
  • Broadly speaking, university responses to AI fall into three categories:
  1. Revert → return to closed book exams or verbal assessments (vivas), despite extensive research showing that invigilated exams are not conducive to meaningful or inclusive assessment. The development of AI tools should push us to consider different types of examination, such as increased use of formative portfolio assessment.
  2. Outrun → design questions or tasks which cannot be answered by AI technology, although this will become more and more difficult as AI tools become ever more sophisticated.
  3. Embrace → find ways to incorporate AI tools into the assessment process, not just the assessment output, which pushes learners to develop different skills. Here, we can envisage a sliding scale of AI permissiveness:
  • a) AI is not permitted — AI tools may not be used to complete any portion of the assignment.
  • b) AI can be used in specific ways — AI tools may be used by students in certain ways but not in others; these permissions must be made clear to students.
  • c) AI is permitted — Students are permitted and/or encouraged to use AI tools to support their learning as they complete the assignment (for example, brainstorming, planning, drafting, revising).
  • Concerns over catching academic misconduct are likely to subside again as long-standing plagiarism detection tools adapt to new developments and innovative startups such as Cadmus pave the way to identifying text injections, keyboard fingerprints and the progressive development of an essay. This will leave institutions free to focus on developing programmes and pedagogy that embrace generative AI, ensuring their students know how to use the technology judiciously.
  • However, the pace of change in AI is relentless. Any faculty member in charge of assessment and regulation needs to be watching what the next phase of AI will look like. As has been pointed out before, in the case of generative AI tools like Microsoft’s Copilot, which will be seamlessly integrated into everyday workflows, asking students to declare AI use would be more like giving every student a car and then asking them to declare whether they intend to use the brakes.
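Tools in this space typically examine the process of writing rather than the final artefact. As a purely illustrative sketch (not Cadmus’s actual method; the event structure and threshold are invented for this example), a platform that records an edit log could flag events where a large block of text appears in a single action, which is more consistent with a paste or text injection than with typing:

```python
from dataclasses import dataclass

@dataclass
class EditEvent:
    seconds_since_start: float
    chars_added: int  # characters inserted by this single edit action

def flag_paste_like_events(events, burst_chars=400):
    """Return events that insert an implausibly large block of text at once.

    A keystroke adds one or two characters; an event inserting hundreds of
    characters in a single action looks like a paste, so it is flagged for
    human review rather than automatically treated as misconduct.
    """
    return [e for e in events if e.chars_added >= burst_chars]

# A toy edit log: steady typing, then 1,200 characters arriving in one event.
log = [EditEvent(1.0, 1), EditEvent(1.3, 2), EditEvent(1.6, 1),
       EditEvent(90.0, 1200)]
flags = flag_paste_like_events(log)  # → one flagged event
```

A real system would combine many such signals (typing cadence, revision history, stylometry) and route them to a reviewer, rather than issuing automatic judgements.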

🚨 Challenges

  • ChatGPT is capable of producing high-quality essays with minimal human input, leading to concerns about new forms of plagiarism, academic integrity and the feasibility of continuing to use essays as a means of assessment. Paul Taylor, professor of health informatics at UCL, tested ChatGPT on an exam question he had written for a course on using digital technology in healthcare and found that the AI’s answer was ‘coherent, comprehensive and sticks to the point, something students often fail to do’. (UCL is now reimagining assessment and feedback in response to the opportunities and challenges AI brings.)
  • While text-based assessment is seen as the primary immediate risk area, AI’s potential to disrupt media production through its ability to create convincing art, video and audio extends its reach into far more of the curriculum. It’s a rapidly moving field and these issues are only going to increase as AI becomes ever more sophisticated. Beyond academic integrity, there are also concerns around the inherent bias in AI models and the risk to diversity in the curriculum.

“As educators, AI holds us to account to deliver on our promise that we won’t just tell people what we know and ask them to regurgitate it back. AI will encourage, perhaps even require, us to deliver more active, participatory experiences and process- and behaviour-focused assessment. In the process, we innovate our pedagogies in a way that makes it possible for our students to develop the critical thinking, creative, communication and AI literacy skills that they need to participate meaningfully in the workplace and the democratic system.

But, AI is not an automatically positive force in education. Educators need to beware of a new and growing generation of AI-powered edtech which makes it faster and easier for us to deliver sub-optimal teaching and learning practices. By far the biggest risk of AI in education is that it shores up content-heavy, knowledge-check based learning experiences which we know from 30+ years of learning science are flawed and under-serve our students.”

Philippa Hardman, creator of learning design engine DOMS, affiliated scholar at University of Cambridge

  • Recognition by professional, statutory and regulatory bodies of a range of higher education programmes is critical to the career paths of many students. The QAA has been convening conversations between PSRBs and universities to ensure that there can be variety, flexibility and innovation in the way students are taught and assessed, while still meeting the required professional standards.
  • Certain subjects offer particular challenges for digital assessment. Visual and performing arts, as well as STEM subjects, present their own challenges, whether that’s the need for large amounts of storage space for some visual or media subjects or the difficulty of showing ‘working out’ online in STEM exams. (There is a real opportunity here for startups to set themselves apart by having a deeper understanding of domain diversity in order to offer solutions to these very specific challenges.)
  • Scaling up to go beyond pilots and trials, meeting the need for digital assessment across an entire institution and a wide range of subjects in one fell swoop, is a major challenge in this category, adding a new level of complexity to the situation.

🔥 Trends

  • Rather than continuing the unwinnable arms race of making essay-based assessment plagiarism-proof, the rise of AI presents educators with the opportunity to create more authentic, relevant assessment techniques that focus on critical thinking, problem-solving and reasoning skills. In fact, as pointed out in this short but excellent presentation by Dr Philippe De Wilde, if HE as a sector shifts from seeking out plagiarism at every opportunity to rewarding originality, many of these fears would be allayed. If academic integrity is viewed as an abstract ideal — something immovable and unchanging — then a student using Microsoft Copilot to help with an assessed presentation should be punished. But the opportunity to shift towards portfolio-based assessments which better prepare students for the kind of activity they will actually be carrying out in the world of work should not be underestimated; it should not be viewed as the gradual decay of academic integrity.
  • The World Economic Forum’s white paper Defining Education 4.0: A Taxonomy for the Future of Learning specifies that new technologies have changed the way in which people interact with raw information; as a result, there should be less direct emphasis on knowledge and information and more focus on skills and abilities. AI is simply the next iteration of this. (Indeed, any university still hoping that AI detection services will be the answer to assessment plagiarism concerns may be alarmed by Conch, an AI-powered, subscription-based ‘writing tool’ aimed at students, which promises to ‘run your writing through our proprietary algorithm and have us rewrite it until it becomes detection free’.)
  • Pedagogically sound AI could also support assessment processes, reducing time and workload for faculty on some currently time-consuming tasks such as generating multiple-choice questions for question banks, as well as supporting marking and grading. However, to make use of this potential, staff need the support and time to redesign their assessments (and the curriculum, as assessment design is part of the wider curriculum design process), and a greater understanding of the affordances of the systems and platforms they have in their institution. There also needs to be a broader focus on the role of assessment across the entire assessment and feedback lifecycle, rather than simply digital exams.
  • Data is beginning to trickle through about the effects of the move to digital open-book assessment on inclusivity and accessibility for different groups of students. Brunel University London, for example, uses the WISEflow digital assessment platform integrated with its student record system, which enables sophisticated analysis of outcomes for students. While there has been no grade inflation, Brunel has discovered that students who come in with BTEC qualifications — and, in particular, Black students with BTEC qualifications — benefited from the change to adaptable, digital assessment and did significantly better in terms of degree outcome than in previous years. Degree outcomes were not improved for white, Black or Asian students with A-level qualifications.
  • For disabled students, the picture is nuanced. In the UK, according to the Disabled Students’ Commission, the most notable feedback from disabled students on blended learning more generally was that the flexibility and support they had been requesting for years (and had previously been told was not possible) had actually been implemented by their provider in a short space of time as a result of the pandemic. Even so, some disabled students reported issues with blended learning. The concerns were nuanced and often differed by impairment type, highlighting the need not to treat disabled students as a homogeneous group and to recognise that support requirements differ in complexity.

🌍 Key players

We have identified the leading startup players in this category across four key dimensions: online assessments, proctoring, credentialing, and marking and feedback.

These dimensions have been identified as key areas where external providers can add the greatest value for universities. In the market map, we have highlighted standalone assessment tools that universities can procure, rather than the larger technology providers who also offer assessment modules as part of their wider ecosystem.

Digital assessment in HE market map, by Emerge Education.

This has been a busy category for M&A in recent years. In 2023 alone, Multiverse acquired Eduflow (Peergrade), and Inspera acquired Crossplag; the latter acquisition sits at the heart of a new partnership with remote proctoring provider Proctorio, adding Inspera’s similarity and AI-generated-text detection to Proctorio’s suite of solutions. A year earlier, Pearson acquired Credly at a $200M valuation.

Coursera has just announced that it will launch its own plagiarism detector, which, notably, does not use AI to identify AI. Instead, the tool’s AI bot asks students five questions about choices they made while completing an assignment. Depending on the answers, the bot may ask five more questions. It then sends the answers to the instructor.

Turnitin, among the most well known providers in this space, launched its AI-detection tool in April 2023. While some institutions use the program, several, including Vanderbilt University, said they would be turning it off, amid ongoing sector-wide doubts about the efficacy and ethics of using AI tools to detect plagiarism.

🔭 Who is getting ahead?

Online assessment platform Cadmus has had a partnership with the University of Manchester (UoM) since 2021, when the university rapidly moved to online learning and assessment and needed a solution to:

  • Assure academic integrity in an online environment.
  • Enhance opportunities for underrepresented and minority groups who were disadvantaged by non-inclusive teaching and assessment, improving the degree awarding gap.
  • Maintain teaching and learning quality in a disparate environment.
  • Support students and educators facing change fatigue to adopt and execute new learning technology with ease and efficiency.

Cadmus’s response to generative AI has been to emphasise its long-standing academic integrity capabilities, which provide proactive learning support throughout the assessment process to avoid academic integrity breaches, rather than trying to catch misconduct at the point of submission.

Hong Kong University of Science and Technology (HKUST) moved quickly to embrace AI as fully but responsibly as possible. In February 2023, faculty members were offered four options for how they would prefer to address AI in their individual courses: completely ban it from assessment tasks; limit the ways in which it could be used; limit the types of AI that could be used; or allow its use with no restrictions beyond maintaining academic integrity and honesty. Around 80% of faculty chose the fourth option.

The result has been an explosion of creativity in relation to assessment tasks. For example, in the business management school, students on one course now use AI to design, create and then deconstruct a case study rather than simply discuss a case study chosen and purchased by the school.

In addition, the Centre for Education Innovation team and HKUST’s IT department are designing an AI platform that will act as a chatbot for faculty members to use to generate lesson plans, quizzes and other course design elements. Rather than educators facing the ChatGPT blank user interface, HKUST’s bespoke AI platform will be trained by the university with relevant literature, guidelines and best practice. It will ask guiding questions and also answer questions if faculty members are struggling, for instance, to write intended learning outcomes and map those to actual learning activities and assessments.

🔮 Predictions

  • In February 2020, the Jisc report ‘The future of assessment: five principles, five targets for 2025’ set out five key goals for digital assessment: more authentic, more accessible, appropriately automated, more continuous and more secure. For the next five years, we see three crucial elements coming to the fore:
  1. Relevant — Enabling universities to go beyond traditional forms of assessment, dictated by practical limitations of analogue exams, and to build systems that are relevant to contemporary needs and reflective of the learning process.
  2. Adaptable — Effective in addressing the needs of a growing and diverse student population, a range of providers and any number of geographies.
  3. Trustworthy — Based on solid foundations of academic integrity, security, privacy and fairness.

These three components can be visualised as a pyramid, highlighting at the top the ability that fully digital assessment will give us to accomplish things that may be seen as too risky or costly to pursue at present; taking into account the practical considerations of delivery so that the system can adapt to the scale and variety of higher education in the future; and underpinned by fundamental principles of trustworthiness, reliability and validity.

What might this look like?

  • Traditional assessments, such as dissertations and exams, fall short when it comes to evaluating soft skills, are poorly aligned with the behaviour-based assessments increasingly used by employers, and impose structural constraints on developing creativity and divergent thinking. The shift to digital assessment will enable universities to re-imagine how and why students are assessed.
  • Relevant and meeting the needs of students and employers — There is growing consensus that the value of higher education is not just in knowledge imparted to students but in the skills and competencies they develop throughout their studies. As lifelong learning rises up the agenda of employers, education providers and policymakers, so does the importance of capturing whether students are building the foundations they will need to succeed in future life. Digital assessment will power the shift from memory recall to assessments that get to the heart of the new foundational skills of the future economy. How might this happen? Virtual reality can be used to assess a junior doctor’s communication skills not simply through what they say in response to a patient’s question but also how: the time it takes them to respond, whether they are looking at the patient, their tone of voice and much more. Similarly, we might envisage remote IT workplace simulations (similar to today’s Slack workspaces) populated with a mix of student users and machine learning-powered bots playing out scenarios that uncover the students’ ability to collaborate across teams in such an environment. Comparative judgement and peer grading, known today to be effective and accurate assessment methods, will become easier to implement at the scale of hundreds of thousands of students, improving the quality and depth of assessment for subjects in arts, humanities and social sciences.
  • Student-centred and personalised — Currently, assessment tends to follow a ‘one size fits all’ model. The shift to digital tools will make it possible to redesign elements of assessment from first principles, meeting students where they are and adapting to their individual circumstances, particularly those from traditionally underrepresented backgrounds. Adaptive assessment, which responds to students’ knowledge, learning gaps and abilities, is particularly useful in formative assessment where feedback can be immediate and gamified, thus underpinning the learning process. How might this happen? Assessment is a major source of stress to students, impacting their wellbeing and academic performance. A redesigned digital assessment system must be more compassionate. With advances in emotion detection and personalisation, digital assessment systems may also work to detect changes in a student’s stress levels and adapt to them, for example by changing the order of questions or offering a break (especially in formative assessment). Digital assessment will also make it easier to allow practice and preparation on the student’s own terms.
  • Anytime and anywhere — Unlike existing approaches, digital assessment is untethered to the physical infrastructure of exam halls and university buildings. While appropriate identity verification measures need to be taken, the pressure to concentrate all assessment activities within a very narrow timeframe and a particular location is significantly reduced. How might this happen? Universities need to deliver a growing range of courses and modes: residential and distance learning, full undergraduate degrees and stackable micro-credentials, apprenticeships — as well as self-directed and lifelong learning for students of different ages, backgrounds and nationalities. This will make truly global universities more feasible.
  • Efficient and manageable — By some estimates, global demand for higher education by 2030 will have increased to between 350 and 500 million students, almost doubling current student numbers and vastly increasing the administration of assessment. Current approaches to assessment at scale often involve the digitisation of analogue exam papers, effectively replicating existing assessment practices with marginal savings in effort. How might this happen? Fully digital assessment systems will allow large global institutions to mark millions of answers consistently, fairly and rapidly, providing substantial time savings and so freeing up resources for better student support, teaching and research. AI copilots create the opportunity for improved quality assurance of assessment, with second and third marking to ensure uniformity in the allocation of marks. They can also be used to check that questions are of equivalent difficulty, paving the way for more question banks with randomised on-demand assessments.
  • Academic integrity — Issues of academic integrity are a hot topic at the moment, with a widespread sense of concern over plagiarism and the proliferation of essay mills. A range of existing digital solutions make use of large databases of student-submitted work as well as online search to detect cases of plagiarism, and advances are being made in the use of machine learning to discern a student’s ‘voice’ and flag submissions inconsistent with previous pieces of work. This technology is in place and widely adopted, but we must be mindful of the barriers it can present to students from disadvantaged backgrounds. How might this happen? Moving forwards, we expect these tools not only to become a standard and invisible part of the assessment toolkit but also to shift to a more student-centric approach through co-design and the development of informal or formal codes of practice, improving trust in the system as a whole. Once we can map from clicks to constructs, tracking how students work and not only what they produce, we come much closer to making educationally meaningful claims.
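Comparative judgement, mentioned above, scales well partly because the underlying statistics are simple: markers make many pairwise ‘which script is better?’ calls, and a Bradley-Terry model turns those judgements into a ranking. A minimal sketch of the idea (illustrative only, not any vendor’s implementation), using the classic minorisation-maximisation update:

```python
def bradley_terry(n_items, comparisons, iters=200):
    """Fit Bradley-Terry quality scores from pairwise judgements.

    comparisons is a list of (winner, loser) index pairs, one per
    'script A beat script B' decision. Returns one positive score per
    item; higher means judged better more often, against stronger rivals.
    """
    wins = [0.0] * n_items
    pair_counts = {}  # (i, j) with i < j -> number of times compared
    for winner, loser in comparisons:
        wins[winner] += 1
        key = (min(winner, loser), max(winner, loser))
        pair_counts[key] = pair_counts.get(key, 0) + 1

    scores = [1.0] * n_items
    for _ in range(iters):
        denom = [0.0] * n_items
        for (i, j), n in pair_counts.items():
            shared = n / (scores[i] + scores[j])
            denom[i] += shared
            denom[j] += shared
        scores = [wins[i] / denom[i] if denom[i] else scores[i]
                  for i in range(n_items)]
        total = sum(scores)
        scores = [s * n_items / total for s in scores]  # keep scale stable
    return scores

# Six judgements over three scripts: script 0 wins every comparison it is
# in, while scripts 1 and 2 split their head-to-head one apiece.
judgements = [(0, 1), (0, 1), (0, 2), (0, 2), (1, 2), (2, 1)]
scores = bradley_terry(3, judgements)  # script 0 gets the highest score
```

With enough overlapping comparisons, the fitted scores give a stable ranking even though no marker ever assigns an absolute grade, which is what makes the method attractive at the scale of large cohorts.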

🎯 Opportunities for startups

GenAI engines of opportunity for universities.

In this category, we see particular opportunities for AI-driven solutions that offer:

  • Classroom recordings (skills assessment and feedback) → Problem: So much rich data is now captured on our performance, yet we only get feedback based on what our educators see, and at fixed points of assessment. Solution: Track all of a student’s classroom engagement and give them automated feedback against a series of assessment dimensions.
  • AI-automated academic assessment → Problem: Assessment is where the majority of educator time goes; saving time on assessment, both formative and summative, would save institutions billions of dollars. Solution: AI-supported grading.

💎Tips for founders

  • A vision of why: It is not unheard of for an edtech company to approach an institution with a pre-set menu of platforms and solutions, saying ‘pick one’, while the institution has no idea why, or what the benefits are, and simply feels grateful when the need for these ‘solutions’ is over and it can get back to business as usual. While situations are not usually as polarised as this, the description carries more than a grain of truth about many provider-university relationships. What’s needed is a genuine vision, shared by all parties, of why they are doing this, why they are talking and what the goals and outcomes should be. The first question from any edtech provider should be ‘tell me about how your learners learn’. The technology is there to support the pedagogy, not the other way around. Unless this principle is acknowledged and feeds into every aspect of digitally enhanced learning, outcomes and benefits will be limited. Finally, ethical considerations must be paramount, and ALT’s Framework for Ethical Learning Technology (FELT), designed to support individuals, organisations and industry in the ethical use of learning technology across sectors, is a good starting point.
  • Find the middle ground: Universities sometimes complain that edtech providers are too inclined to wade in with absolute visions, asserting that ‘this is how it’s going to be’ in the future of higher education and how their products and services serve that vision, while the universities do not share or even believe in such a vision. Edtech providers are predicting inevitable disruption at a time when some universities are struggling to embrace digital in a strategic way, let alone keep up with AI. There is a middle ground in which the best innovations, ideas and content to have emerged from the past few years are truly valuable to the universities that have adopted them, and would offer the same value to others. There is also a middle ground in which edtech providers have a point about how the world is changing, about the way today’s students interact with that world, and about the value of bringing their innovations and content, developed by pedagogically aware specialists, into universities.

🔗 Read on

Read more news, views and research from the only fund backed by the world’s leading education entrepreneurs, in Emerge Edtech Insights.

📣 Call to action

We are now building our list of the top emerging edtech companies in HE in 2024.

👇 If you have seen an exciting company in this space, please tell us in the comments 👇

Our list analyses hundreds of companies operating worldwide, using public and private data — it is crowdsourced, and voted on by our Higher Education edtech advisory board, led by Mary Curnock Cook.

Please share companies you think we should consider in the comments 👇 and join us on 27 June to discover who has made the final list!

🙏 Thanks

At Emerge, we are on the look-out for companies (existing and new) that will shape the future of learning in higher education over the coming decade.

If you are a founder building a business addressing any of these challenges in HE, we want to hear from you. Our mission is to invest in and support these entrepreneurs right from the early stage.

If you are looking for your first cheque of funding, do apply to us here: https://lnkd.in/eWi_9J5U. We look at everything, as we believe in democratising access to funding (just as much as we believe in democratising access to education and skills).

Emerge is a community-powered seed fund home to practical guidance for founders building the future of learning and work. Since 2014, we have invested in more than 80 companies in the space, including Unibuddy, Cadmus, Engageli and Mentor Collective.

Emerge Education welcomes inquiries from new investors and founders. For more information, visit emerge.education or email hello@emerge.education, and sign up for our newsletter here.

Thank you for reading… I would hugely appreciate some claps 👏 and shares 🙌 so that others can find it!

Nic
