Designing experiments is dangerously similar to conspiring awful things!

Today my colleagues and I were designing an experiment, and one of them made a comment that my brain picked up and screamed “gonsbiracy!”

The experiment involves users of a website participating in a real-life event. The website hosts a number of questions on certain subjects, and users—rather, teachers and students—use the site for homework, practice, and perhaps preparation for university. The questions come in three formats: multiple choice, numeric, and symbolic. There is no incentive to give the correct answer straight away, and no punishment either. This effectively turns multiple choice questions into multiple attempt questions, in that students simply try all the possible answers until they hit the right one. Instinctively, we thought to get rid of such questions altogether, but since we work at a university, we figured we’d rather not act on instinct but back our decision with some research. So we set out to design an experiment, collect some data, and see whether they corroborated our feelings.

The experiment goes like this. We take some students and divide them into two groups. We give them some homework using multiple choice and symbolic question types. Group A gets topic 1 as multiple choice and topic 2 as symbolic, and group B gets the reverse. Then we test their performance on these topics with a “test” that only uses numeric and symbolic questions. The key here is that this test has to resemble an actual test: students are not allowed more than one attempt, so it has to be done “offline”. We happen to run live, offline events for students who want to improve their knowledge and prepare for university, so we figured we’d set the homework as preparation for one such event, and run the test first thing on the day of the event. This is where it all went south.
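For concreteness, the two-group crossover design described above could be sketched as follows. This is a minimal illustration, not anything we actually ran; the function name, student IDs, and topic labels are all invented:

```python
import random

def assign_crossover(students, seed=0):
    """Randomly split students into two groups for a crossover design:
    group A sees topic 1 as multiple choice and topic 2 as symbolic,
    while group B gets the reverse pairing."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = students[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    group_a, group_b = shuffled[:half], shuffled[half:]
    formats = {
        "A": {"topic 1": "multiple choice", "topic 2": "symbolic"},
        "B": {"topic 1": "symbolic", "topic 2": "multiple choice"},
    }
    return group_a, group_b, formats

# Hypothetical roster of ten students
group_a, group_b, formats = assign_crossover([f"student-{i}" for i in range(10)])
```

Because each group sees both question formats (just on different topics), any difference in test performance can be attributed to the format rather than to one group simply being stronger.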

As we are part of a major university, and we are in fact running these events to encourage and prepare students who want to apply for university, we became concerned that students might think we were testing them as part of the university’s admission test—which we are not! In essence, we cannot disclose to students beforehand that they are taking part in an experiment to assess whether a certain type of question is good or bad preparation for a test. This resulted in a long-winded discussion of the many different ways we could deal with the problem. As we went through a checklist of ways we could get shot down by the Ethics committee — oh, the joys of getting ethical approval for human studies, I could bore you senseless with stories! — we decided that the Admissions office might take exception to our plan—remember, students might think we are testing them for admission purposes!

That’s when my colleague said “oh, [the Admissions officers] need to be in on it too!” At that point, the only people that needed to “be in on it” were the five of us in the room. Bringing “in on it” the people in Admissions made that number so very much larger. This is where I realised two things:

  1. we were plotting a bona fide conspiracy;
  2. the effort to keep the conspiracy from seeping out and becoming public knowledge is ridiculous, and the more people are involved in the conspiracy, the more ridiculous the effort becomes.

I am a big fan of conspiracy theories. I thoroughly enjoy reading about them, and learning the million ways in which people deceive themselves into thinking that several horrible humanity-culling, geo-engineering, alien-led, Zionist plans are in operation at any given time. Granted, sometimes one or two of the least ridiculous conspiracy theories turn out to be true, but that’s why they are the least ridiculous in the first place, and they mostly involve just some regular people becoming stupidly rich at the expense of others—hardly anything new or far-fetched under the sun.

But everything else…?