The goal of the system is not to place trust in any one specific assessment process (an exam, a multiple-choice test, or anything that concrete) but to trust the social mechanisms and incentives that motivate assessors to perform as appropriate an assessment as they can. This makes the system far more flexible, and lets it achieve its aim of being truly universal, since social mechanisms aren't tied to any particular domain or format.
So when a user pays tokens and initiates an assessment in some arbitrary concept, multiple assessors are drawn at random from the pool of people who have previously been assessed favorably in the same concept or related ones.
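A minimal sketch of that selection step, assuming (hypothetically) that the pool is a mapping from each assessor to the set of concepts they hold a favorable assessment in; the names, data structure, and pool size threshold here are illustrative, not part of the actual design:

```python
import random

def select_assessors(pool, concept, k=5, seed=None):
    """Pick k assessors at random from those previously assessed
    favorably in the given concept.

    pool: dict mapping assessor name -> set of concepts they hold a
    favorable assessment in (illustrative structure; a real system
    would also match related concepts and weight by reputation).
    """
    eligible = [name for name, concepts in pool.items() if concept in concepts]
    if len(eligible) < k:
        raise ValueError("not enough qualified assessors in the pool")
    rng = random.Random(seed)  # seed only for reproducible examples
    return rng.sample(eligible, k)
```

Randomness matters here: because an assessee cannot predict who will be called, colluding with assessors in advance becomes impractical.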
Each of these assessors then puts down a stake and interacts with the student/assessee individually, using whatever means are available to them: text chat, video calling, or even an in-person meeting if it's possible.
Each assessor then forms their assessment, trying to match it as closely as possible to what they think the other assessors will conclude, since that is what their reward hinges on. This also pushes them to be as rigorous as possible and to devise their own systems of invigilation so the assessee cannot cheat.
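One simple way this reward-for-agreement incentive could settle is a Schelling-point payout: assessors whose score lands near the group consensus get their stake back plus a share of the forfeited stakes of those who deviated. The consensus rule (median), the tolerance band, and the payout split below are all assumptions for illustration, not a specification:

```python
import statistics

def settle_assessment(scores, stakes, tolerance=0.1):
    """Settle one assessment round.

    scores: dict of assessor -> score in [0, 1]
    stakes: dict of assessor -> tokens staked

    Assessors within `tolerance` of the median score keep their stake
    and split the stakes forfeited by assessors who deviated.
    (Median, tolerance, and even split are illustrative choices.)
    """
    consensus = statistics.median(scores.values())
    winners = [a for a, s in scores.items() if abs(s - consensus) <= tolerance]
    losers = [a for a in scores if a not in winners]
    pot = sum(stakes[a] for a in losers)  # forfeited stakes
    payouts = {a: 0.0 for a in losers}
    for a in winners:
        payouts[a] = stakes[a] + pot / len(winners)
    return consensus, payouts
```

With scores of 0.8, 0.82, and 0.2 and equal stakes of 10, the outlier at 0.2 forfeits their stake, which the two agreeing assessors split. An assessor's best strategy is therefore to report what they honestly expect other competent assessors to conclude.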
This lets the system stay unconcerned with formal methods of assessment, which are far too varied to pin down, and instead trust the social interaction between people who are motivated to give as accurate an assessment as they possibly can.
This trust obviously won't be built all at once. The system will have to grow slowly and build up a reputation for itself as assessors within it build up their own reputations. But by making our assessment model process-agnostic, we enable the system to function for anyone and anything.