The Need For an Open Assessment Platform in Higher Education

Bob Bodily, PhD
7 min read · Oct 30, 2018


There are a variety of assessment platforms used in higher education. These include publisher homework systems (e.g., Pearson Mastering, Sapling Learning, McGraw Hill Connect), standardized exams (e.g., ACT, GRE, SAT), quizzing systems (e.g., iClicker, Quizlet, Kahoot!), and Learning Management System (LMS) quiz systems (e.g., Canvas, Moodle, Desire2Learn, Blackboard, Sakai). Each system tackles a different portion of the assessment experience: formative assessments, summative assessments, exam preparation, or in-class assessments. The questions in these systems are either proprietary (copyrighted, with use restricted to a single platform), home-grown (created by an instructor out of necessity, often without an explicit license), or open (the least common category, covering questions with Creative Commons licenses). The current state of assessment in higher education is fragmented and siloed.

One potential reason the assessment space is fragmented is that creating high-quality questions, and sharing them effectively, is hard. Consider the following scenarios:

Scenario 1 - An instructor creates homework problems and exam questions for a new course they are developing. The questions are high quality and specifically tailored to their teaching style and content emphasis. They may share the questions with other faculty teaching the same course, but usually as Word documents or Excel spreadsheets sent over email. The impact of these questions is relatively low, as only a few courses use them. Furthermore, the questions can only be improved slowly, as the data from each successive course offering is collected and used to refine them. The questions may not be explicitly tagged to learning outcomes, so another instructor who happened upon the course website would have to figure out the learning objectives for the assessment items themselves and would have no historical student data for the items, essentially starting the improvement process over.

Scenario 2 - A group of instructors receives an internal grant from their university to develop an open item bank for a subject. They spend months writing, re-writing, testing, checking, and improving assessment items. Finally, they finish with hundreds of items that are placed in an item bank and hosted somewhere (likely a proprietary system or a university-sponsored website). This item bank contains peer-reviewed, high-quality questions, but its impact is again fairly low. The discoverability of the item bank is low, and even if an instructor found it, the questions are usually not tagged appropriately (tied to a learning objective or sub-objective), and no prior aggregate student data is available for the items. Or the items are summative, in which case they cannot be shared openly at all! This kind of project is helpful for the university where the questions were developed (more visibility and awareness), but likely not far beyond that context. Furthermore, if an instructor teaching a similar course found the item bank, extracting the questions they needed would not be trivial: it would likely require manually sifting through the questions and hand-picking the ones that felt appropriate for their course.

These two scenarios illustrate the difficulties that arise when trying to create and share assessment items efficiently. The problems are summarized below:

  1. Questions are not formatted in a consistent way
  2. Questions are usually not tagged to learning objectives, either within the question itself or as question metadata
  3. Aggregate student performance data for the items is not provided
  4. Questions are not easy to find (they are scattered across Word documents, Excel spreadsheets, and item banks)
  5. Questions are not easy to copy, edit, or reuse (doing so requires manual filtering and copy-paste)
  6. Questions are not easy to filter by specific criteria (e.g., learning objective)

I’m sure there are more problems that I missed. Feel free to chime in below in the comments. Next up, a potential solution.

I believe a possible solution to these problems is an Open Assessment Platform. Here are the main components of an open assessment platform:

  1. The platform must allow for open licenses on questions, which allows instructors to easily use, reuse, share, modify, and retain questions they encounter in the platform
  2. The platform must allow for filtering and sorting questions based on difficulty, learning objective, and course (a sketch of what such question metadata might look like follows this list)
  3. The platform must make it easy to import and export questions in a variety of formats (Word, Excel, the Question and Test Interoperability (QTI) format, etc.)
  4. The platform must have two distinct communities: (1) a formative, public-facing question repository and (2) a summative, teacher-only question repository. The summative repository would not be public-facing, which would let instructors with access collaborate and share resources securely. The public-facing questions could be used for formative assessments and other self-check exercises.
  5. The platform must track aggregate item-level student data. Questions could then be improved much more rapidly, because many more students would be using them (assuming multiple courses use the questions simultaneously). This data could also yield difficulty estimates for questions via Item Response Theory parameters or Classical Test Theory metrics.
  6. The platform must support a variety of item types (e.g., multiple-choice, free response, essay, multiple-select, fill-in-the-blank, etc.)
  7. The platform must provide item quality metrics so users can judge the quality of questions in the platform. This is one of the most important requirements for the platform.
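
To make these components concrete, here is one way such a platform might represent a question internally. This is a minimal sketch under my own assumptions: the field names (license, visibility, objectives, stats, and so on) are hypothetical, not taken from any existing system.

```typescript
// Hypothetical shape for an openly licensed assessment item.
// All field names here are illustrative, not drawn from any real platform.
interface AssessmentItem {
  id: string;
  license: "CC0" | "CC-BY" | "CC-BY-SA" | "proprietary"; // component 1
  visibility: "formative" | "summative"; // summative items stay teacher-only (component 4)
  course: string;                        // e.g., "Basic JavaScript"
  objectives: string[];                  // learning objectives the item is tagged to
  type:
    | "multiple-choice"
    | "multiple-select"
    | "free-response"
    | "essay"
    | "fill-in-the-blank";               // component 6
  prompt: string;
  choices?: string[];                    // only for selected-response types
  answer?: number[];                     // indices of correct choices
  stats?: {
    attempts: number;                    // aggregate, anonymized usage (component 5)
    pCorrect: number;                    // proportion of students answering correctly
  };
}

// With explicit metadata, filtering and sorting (component 2) become trivial.
function findItems(items: AssessmentItem[], objective: string): AssessmentItem[] {
  return items
    .filter((item) => item.objectives.includes(objective))
    .sort((a, b) => (a.stats?.pCorrect ?? 1) - (b.stats?.pCorrect ?? 1)); // hardest first
}
```

With question metadata structured like this, the "manually sift through the item bank" problem from Scenario 2 reduces to a one-line query.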

The Brief Prendus Story

To try to make this a reality, a few colleagues and I attempted to build this open assessment platform (yes, this was a huge undertaking). We called it Prendus, and we made a lot of progress. We supported QTI. We supported Learning Tools Interoperability (LTI). We allowed for question creation, editing, uploading, downloading, and revising. We tracked students as they interacted with questions (for item-level statistics). We built a feature so students could create and review other students' questions as part of a class. A few thousand students used the system, and a few thousand questions were created in the platform. Despite these successes, our grant funding ran out, and although we had revenue greater than $0, we did not have a sustainable business model, so we could not finish the project. Though the team has temporarily disbanded, we still very much believe in the merits and vision of an open assessment platform.
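
For anyone curious what item-level statistics involve, below is a minimal sketch of two Classical Test Theory metrics a platform like this might compute per question: difficulty (the proportion of students answering correctly) and discrimination (the point-biserial correlation between answering the item correctly and overall assessment score). This is illustrative only, not Prendus's actual implementation, and the function name itemStats is my own.

```typescript
// Classical Test Theory statistics for a single item (illustrative only).
// correct[i] is 1 if student i answered the item correctly, otherwise 0;
// totals[i] is student i's total score on the whole assessment.
function itemStats(correct: number[], totals: number[]) {
  const n = correct.length;
  const p = correct.reduce((sum, c) => sum + c, 0) / n; // difficulty: proportion correct

  const mean = totals.reduce((sum, t) => sum + t, 0) / n;
  const sd = Math.sqrt(totals.reduce((sum, t) => sum + (t - mean) ** 2, 0) / n);
  if (p === 0 || p === 1 || sd === 0) return { difficulty: p, discrimination: NaN };

  // Point-biserial discrimination: ((M1 - M0) / sd) * sqrt(p * (1 - p)),
  // where M1 and M0 are the mean total scores of students who answered
  // the item correctly and incorrectly, respectively.
  const m1 = totals.filter((_, i) => correct[i] === 1).reduce((a, b) => a + b, 0) / (n * p);
  const m0 = totals.filter((_, i) => correct[i] === 0).reduce((a, b) => a + b, 0) / (n * (1 - p));
  const discrimination = ((m1 - m0) / sd) * Math.sqrt(p * (1 - p));

  return { difficulty: p, discrimination };
}
```

A discrimination near zero (or negative) flags an item that strong students miss about as often as weak students do; those items are prime candidates for revision.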

Other Possibilities

Another group has attempted to build a similar system. Their system is called PeerWise. It allows students to create questions and then review questions generated by other students. The platform is free and has been implemented all around the world (I don't have exact usage statistics at hand, but it is in the ballpark of thousands of classes). Unfortunately, there is no community sharing aspect to the platform (questions are locked within the course in which they were created), which prevents the system from providing the sharing and discoverability an open assessment platform needs.

Quizlet is another quiz system that comes close to being an open assessment platform. Quizlet allows students to create flashcards, share flashcards, discover flashcards created by other students, and create quizzes from the flashcards they create or find. However, there is no teacher-only summative repository (teachers cannot discover or share summative assessment items securely), there is no quality control (no metrics are provided to judge question quality), and questions cannot be imported or exported easily.

The final system worth mentioning in this post is Proola. Proola’s goal is to “connect educators and measurement specialists in the development of peer-reviewed, open, online, learning assessments.” This sounds pretty close to an open assessment platform. Proola is a relatively new endeavor, so only time will tell if it can be sustainable and create a community around the platform.

Trying One More Time

Because I truly believe in the vision and benefits of an open assessment platform, I'm starting on the problem again from a different angle. One problem with my initial attempt was finding a short-term business model while the platform was still under development. Building a crowd-sourced assessment platform was an enormous project, and I could not get to a minimum sellable product before I ran out of money.

This time around, I am starting with one subject rather than many: JavaScript. There is high demand for coding skills, which makes JavaScript a good candidate for an initial content domain. We are building out an assessment platform with hundreds of questions covering the concepts that fall within the basic JavaScript domain.
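
To give a flavor of what a basic JavaScript question might look like, here is a purely illustrative example in the hypothetical item shape sketched earlier (again, an assumption of mine, not an actual item from the platform):

```typescript
// A purely illustrative CC0 item, using the hypothetical shape sketched earlier.
const sampleItem: AssessmentItem = {
  id: "js-arrays-001",
  license: "CC0",
  visibility: "formative",
  course: "Basic JavaScript",
  objectives: ["Transform arrays with Array.prototype.map"],
  type: "multiple-choice",
  prompt: "What does [1, 2, 3].map(x => x * 2) evaluate to?",
  choices: ["[1, 2, 3]", "[2, 4, 6]", "6", "undefined"],
  answer: [1], // the correct choice is "[2, 4, 6]"
};
```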

Our code is all open source; our questions are all CC0 (the least restrictive Creative Commons instrument, effectively a public-domain dedication); and we plan to provide access to anonymized data from our platform, also released under CC0. We are offering incentives for developing JavaScript questions to support the further growth of the platform. You can follow our progress at JavaScriptPractice.com. We recently released an alpha version of our product, and we'd love your feedback.

If you know of any open assessment platforms (or something relatively close) that I missed in this post, shoot me a comment because I would love to check them out.
