REWARD SECRET SAUCE
Fair rewards, satisfied contributors, happy challengers
When you ask whether a girl is pretty and the answer you get back is: “She’s really funny”, you know instantly that you’re going to have a lot of fun with her, but you’ll never think about making love with her. Which was the aim of your question…
It means that, in the “girl-assessing market” (we’re not sexist, you can say boy as well), we agreed to act as if funny equals pretty; otherwise we wouldn’t be offered funny when we asked about pretty. That’s the official position, but in truth we know that there’s an unofficial but real ranking that puts prettiness above all. The others follow.
In the crowdsourcing world, unfortunately, it’s the same. Every startup or company that provides crowdsourcing services usually claims its mission is built on the value of the crowd, a group (small or large) of people working together. That’s what they say. But the real practice is to attract people, then diminish their collective power (and individual average reward) by rewarding a single user’s proposal. Crowdsourcing sites like that are often called crowd-listing sites.
We believe in a different approach to using crowdsourcing to solve problems. We use all the diverse and independent expertise of the group during every challenge, without calling everybody at the beginning and then rewarding only one person in the end. Therefore, we have to reward more than one contributor: more than only the few who stand out with their proposals. We reward all the others who help the ideas flow by writing proposals, commenting and generally playing their role with passion within the challenge. Every one of us could be a synapse, and the role of a synapse might only be to conduct the signal from one neuron to another. This kind of conductivity is not as simple as it seems. So, how do we do that?
First, we divide the reward into two lines, fifty-fifty:
- Content based reward
- Contribution based reward
The first deals with the ranking achieved by your proposals.
The second with the indicators that track your effort in the challenge.
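The fifty-fifty split above can be sketched in a few lines. This is our illustration, not the platform's actual code; the function name and the assumption that both scores are normalized to [0, 1] are ours.

```python
def total_reward(content_score: float, contribution_score: float,
                 reward_pool: float) -> float:
    """Split a contributor's share fifty-fifty between the two lines.

    Both scores are assumed normalized to [0, 1]; the 0.5 weights are
    the fifty-fifty split described above.
    """
    return reward_pool * (0.5 * content_score + 0.5 * contribution_score)

# A contributor with a strong proposal but little participation still
# earns only part of the pool:
share = total_reward(content_score=0.9, contribution_score=0.2,
                     reward_pool=100)
```

The point of the design: a brilliant proposal alone, or frantic activity alone, each caps out at half the reward.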
1. Content
We have further split content into two assessment lines: agreement and quality.
Agreement carries the heavier weight (in terms of score), because it’s the engine that drives consensus around a proposal. But quality is relevant too, because it pushes every contributor to be clear in their exposition. Agreement and quality are combined in a fixed percentage split.
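The blend of the two assessment lines could look like the sketch below. The 70/30 split is a placeholder of ours, chosen only to show agreement outweighing quality; the real percentages are part of the secret sauce.

```python
def content_score(agreement: float, quality: float,
                  w_agreement: float = 0.7, w_quality: float = 0.3) -> float:
    """Weighted blend of the two content assessment lines.

    Agreement outweighs quality, as described above; the 0.7/0.3
    defaults are illustrative guesses, not the platform's tuning.
    """
    assert abs(w_agreement + w_quality - 1.0) < 1e-9, "weights must sum to 1"
    return w_agreement * agreement + w_quality * quality
```

Keeping the weights as parameters means the split can be retuned per challenge without touching the formula.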
2. Contribution
In order to keep the live leaderboard lit, and let everyone know his or her own position, every contributor’s effort is tracked by our algorithm. The trackers are:
a) Time spent on the platform (with some anti-gaming controls to prevent people from racking up points while sleeping on our platform).
b) Number of votes given to other contributors’ proposals. The more a contributor votes, the more she or he learns about the challenge.
c) Comments provided on other contributors’ proposals, to let the others learn and improve their own.
d) Comments provided on her or his own proposals, to improve them, feed the discussion and let the other contributors put their effort into the challenge.
e) Questions provided in the discuss phase. French philosopher Voltaire said: “Judge a man by his questions rather than his answers.” We totally agree with that, so we want to praise the priceless role of posing questions that prompt others to go on with their own thoughts and achieve more powerful outcomes.
f) … ok every secret sauce must have a secret ingredient!
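One way the trackers a) through e) could feed a single contribution score is sketched below. The per-action points, the daily time cap (our stand-in for the anti-gaming controls), and all names are illustrative assumptions, not the platform's real tuning, and the secret ingredient f) is deliberately left out.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """One contributor's tracked effort (trackers a through e above)."""
    minutes_on_platform: float
    votes_given: int
    comments_on_others: int
    comments_on_own: int
    questions_asked: int

# Anti-gaming control (hypothetical): minutes past the cap earn nothing,
# so sleeping on the platform stops paying off.
MAX_CREDITED_MINUTES = 120

def contribution_points(a: Activity) -> float:
    """Combine the trackers into one score; weights are placeholders."""
    time_pts = min(a.minutes_on_platform, MAX_CREDITED_MINUTES) * 0.1
    return (time_pts
            + a.votes_given * 2        # b) votes
            + a.comments_on_others * 3  # c) comments on others' proposals
            + a.comments_on_own * 1     # d) comments on own proposals
            + a.questions_asked * 4)    # e) questions in the discuss phase
```

Questions are weighted highest here on purpose, echoing the Voltaire point above: a good question moves the whole challenge forward.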
Our girl or boy is as pretty as she or he is funny.
So it’s difficult not to laugh while making love, or not to fall in love when laughing together.
fabrizio@oxway.co