While building Quire, I found myself drawn to one component: feedback. Goals and check-ins can be powerful (check out Red Panda if you’re interested in goal setting within a learning context), but the continuous feedback component, and adaptive feedback in particular, is an area I want to explore in more detail.
The next iteration of the Quire experiment (currently in development) drops goals and check-ins and doubles down on personalized feedback. The initial focus is on courses, to see if we can provide timely, personalized feedback to students as they progress through a course.
After using services like Zapier and IFTTT (If This Then That) to automate various workflows, I’m adopting the same approach in Quire: teachers, professors, and trainers will be able to define their own IFTTT-style rules that inform the core algorithm, helping it determine the best feedback to send to individual students and the best time to send it. We’re also experimenting with Google’s machine learning services.
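To make that concrete, here’s a minimal sketch of what a teacher-defined rule might look like. Everything here, the rule shape, the field names, and the example thresholds, is an illustrative assumption on my part, not Quire’s actual schema:

```typescript
// A snapshot of the data points we might hold for a student
// (hypothetical fields for illustration).
interface StudentSnapshot {
  daysSinceLastLogin: number;
  quizAverage: number;    // 0-100
  attendanceRate: number; // 0-1
}

// A teacher-defined IFTTT-style rule: IF this event occurs,
// AND this condition holds, THEN queue this feedback.
interface FeedbackRule {
  id: string;
  // Simplified suffixes of xAPI verb/activity IRIs.
  trigger: { verb: string; activityType: string };
  condition: (student: StudentSnapshot) => boolean;
  action: { template: string; delayHours: number };
}

// Example rule: if a student fails an assessment and their
// attendance has been slipping, send an encouraging nudge a day later.
const lowQuizRule: FeedbackRule = {
  id: "low-quiz-low-attendance",
  trigger: { verb: "failed", activityType: "assessment" },
  condition: (s) => s.attendanceRate < 0.6,
  action: {
    template: "Tough quiz! Here are some extra resources on this topic.",
    delayHours: 24,
  },
};
```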
Thanks to xAPI (and Learning Locker), the service harnesses a range of data points for each student, from their participation in an LMS through to attendance, and uses this data, in conjunction with the IFTTT rules, to work out what feedback to send and when. Whenever feedback is triggered for a student, the criteria responsible are appended to the student’s persona, which builds up over time.
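Continuing the sketch above (and reusing its FeedbackRule and StudentSnapshot types), the matching step might look roughly like this. The xAPI statement shape is heavily simplified, and the persona structure and scheduleFeedback helper are hypothetical:

```typescript
// A heavily simplified xAPI statement, e.g. as delivered by Learning Locker.
interface XapiStatement {
  actor: { mbox: string };
  verb: { id: string }; // e.g. "http://adlnet.gov/expapi/verbs/failed"
  object: { definition: { type: string } };
}

// The persona accumulates every criteria set that has fired for a student.
interface Persona {
  studentId: string;
  triggeredCriteria: { ruleId: string; at: Date }[];
}

// Stub: in a real system this would queue delivery via email/SMS/etc.
function scheduleFeedback(studentId: string, action: FeedbackRule["action"]): void {
  console.log(`Queue "${action.template}" for ${studentId} in ${action.delayHours}h`);
}

// When a statement arrives, test it against every rule; if one fires,
// record the triggering criteria on the persona and schedule feedback.
function processStatement(
  stmt: XapiStatement,
  snapshot: StudentSnapshot,
  rules: FeedbackRule[],
  persona: Persona
): void {
  for (const rule of rules) {
    const verbMatches = stmt.verb.id.endsWith(rule.trigger.verb);
    const typeMatches = stmt.object.definition.type.endsWith(rule.trigger.activityType);
    if (verbMatches && typeMatches && rule.condition(snapshot)) {
      persona.triggeredCriteria.push({ ruleId: rule.id, at: new Date() });
      scheduleFeedback(persona.studentId, rule.action);
    }
  }
}
```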
Students will be able to rate the feedback they receive (was it helpful? annoying?), which will help the instructor (and Quire) build up an effective bank of feedback options.
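One simple way to turn those ratings into a ranked bank of feedback options might look like this; the rating values and scoring formula are assumptions for illustration:

```typescript
type Rating = "helpful" | "annoying" | "neutral";

interface FeedbackOption {
  template: string;
  ratings: Rating[];
}

// Prefer templates students have found helpful more often than annoying:
// score ranges from -1 (always annoying) to +1 (always helpful).
function score(option: FeedbackOption): number {
  const helpful = option.ratings.filter((r) => r === "helpful").length;
  const annoying = option.ratings.filter((r) => r === "annoying").length;
  return option.ratings.length === 0 ? 0 : (helpful - annoying) / option.ratings.length;
}

// Surface the n best-rated options for the instructor (and the algorithm).
function bestOptions(bank: FeedbackOption[], n: number): FeedbackOption[] {
  return [...bank].sort((a, b) => score(b) - score(a)).slice(0, n);
}
```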
As with the first iteration of Quire, feedback can be pushed via email, SMS, or a range of messaging apps.
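Delivery itself could then be a thin dispatch layer over each student’s preferred channel. This is just a stub; the actual provider integrations (email, SMS, and messaging APIs) are out of scope here:

```typescript
type Channel = "email" | "sms" | "messenger";

interface DeliveryPreference {
  studentId: string;
  channel: Channel;
  address: string; // email address, phone number, or messenger handle
}

// Route a feedback message to the student's preferred channel.
function deliver(pref: DeliveryPreference, message: string): void {
  switch (pref.channel) {
    case "email":
      console.log(`Email ${pref.address}: ${message}`); // hand off to an email provider
      break;
    case "sms":
      console.log(`SMS ${pref.address}: ${message}`); // hand off to an SMS gateway
      break;
    case "messenger":
      console.log(`Message ${pref.address}: ${message}`); // hand off to a messaging API
      break;
  }
}
```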
As is the nature of experiments, I’m not sure yet whether this approach will provide any real value, or even work, but I feel it’s worth exploring and would welcome anyone interested in getting involved; just reach out to @davetosh. Thanks.