Does a Silicon Valley VC tournament manipulate the scores to pick winners?

Loresome
5 min read · Jul 26, 2019

Pioneer paints a pretty picture of a fair tournament for ambitious people around the world. In reality, though, the organizers adjust the scores at the last minute to keep projects they don’t like from winning. Below I will show, with numbers, how this works.

What is Pioneer

Pioneer is an online tournament for projects, founded by a former Y Combinator partner. Here’s the pitch (taken from the tournament website):

Apply with any type of project you need help with. It could be a company, physics research, journalism, or art. All you need to do is convince other participants that your project is worth doing.

I believed in the vision above wholeheartedly until this week. So much so that I wrote two articles about my experience, and together with a teammate invited so many new players that at one point we held close to half of all referral points. We are currently still at #1 on the referral leaderboard.

Here’s how the tournament works, quoting the website:

Submit weekly updates · You’ll submit a progress update once a week.

Vote on updates · Get points from other players based on your progress. The more impressive your work, the higher your score will be.

Become a Pioneer · Place as a weekly finalist 3 times and you’ll receive the Pioneer offer.

Sounds fair, what’s the deal?

Let’s look at the tournament leaderboard (disregard the pioneers at positions 5 and 14 — they won in previous tournaments). This is the current state, after Week 5 was complete.

Pioneer leaderboard during Week 6

The first three levels are gained simply by submitting weekly reports. The next three levels (the trophies) are earned by placing in the top-10 at the end of a week. This was Week 5, so people with two trophies had placed in the top-10 at least in Weeks 3 and 4.

Pioneer levels

One interesting pattern: out of the 12 people who were in the top-10 in two previous weeks, only 2 remained in the top-10 this week. The discrepancy becomes more jarring if you compare the points awarded to different projects this week (datasource). Notice the position changes and the point grants in comparison to other players.

Week 5 point changes
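
As a sanity check, this retention pattern is easy to compute from weekly leaderboard snapshots. Below is a minimal sketch in Python; the player IDs and weekly sets are placeholders shaped to reproduce the 12-to-2 pattern, not the actual leaderboard data.

```python
from collections import Counter

# Hypothetical weekly top-10 sets (placeholder IDs, not the real players).
weekly_top10 = {
    1: {"p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08", "p09", "p10"},
    2: {"p01", "p02", "p03", "p04", "p05", "p11", "p12", "p13", "p14", "p15"},
    3: {"p01", "p02", "p03", "p06", "p07", "p11", "p12", "p16", "p17", "p18"},
    4: {"p01", "p04", "p05", "p08", "p09", "p11", "p13", "p19", "p20", "p21"},
    5: {"p01", "p11", "p22", "p23", "p24", "p25", "p26", "p27", "p28", "p29"},
}

# "Two trophies" = placed in the top-10 in at least two of the finished weeks.
counts = Counter(p for week in range(1, 5) for p in weekly_top10[week])
two_trophies = {p for p, n in counts.items() if n >= 2}
survivors = two_trophies & weekly_top10[5]

print(f"{len(two_trophies)} two-trophy players, {len(survivors)} still in the top-10")
# → 12 two-trophy players, 2 still in the top-10
```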

OK, so basically almost all past leaders were pushed out of the top-10. Suspicious, but it could happen; maybe they all got sick and wrote so-so reports?

Let’s look at another metric — points difference between adjacent positions. Keep in mind that the points were accumulated over 5 weeks, so it’s unlikely that many of them would cluster together. Right?

Points difference between adjacent positions
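
The metric itself is trivial to compute from a snapshot. A minimal sketch, with illustrative point totals shaped like the chart above rather than the real numbers:

```python
# Illustrative (player, points) pairs in rank order; not the real data.
leaderboard = [
    ("A", 1540), ("B", 1310), ("C", 1180), ("D", 980), ("E", 915),
    ("F", 760), ("G", 705), ("H", 640), ("I", 615), ("J", 608),
    # ...and a suspiciously tight pack right below the prize cutoff:
    ("K", 601), ("L", 598), ("M", 593), ("N", 589), ("O", 584),
]

# Gap in points between each position and the next one down.
diffs = [a - b for (_, a), (_, b) in zip(leaderboard, leaderboard[1:])]
print(diffs)
# → [230, 130, 200, 65, 155, 55, 65, 25, 7, 7, 3, 5, 4, 5]
```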

Ehm… So basically everyone just beyond the top-10 is placed within single digits of their neighbors, while people inside the top-10 show a pretty varied distribution. This pattern was not present in any of the previous weeks. Now remember that this was the first week in which it was actually possible to win.

What does this tell us? That there was likely a manual intervention to push most of the candidates who could win out of prize reach.

But players receive an email after each voting round with the number of upvotes and downvotes they received from other participants and experts. Such blatant manipulation would make the point grants look unrealistic! Let’s ask around and see.

Upvotes/downvotes to points earned

Can you think of a sane scoring and matchmaking system that would turn a 10/7 upvote-to-downvote ratio into 615 points and an 18/3 ratio into 345? I can’t.
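
For the skeptical, here is the comparison spelled out (the vote counts and points are the ones reported above; the actual scoring rule is unknown, which is rather the point):

```python
# The two reported (votes → points) outcomes.
observed = [
    {"up": 10, "down": 7, "points": 615},
    {"up": 18, "down": 3, "points": 345},
]

for o in observed:
    net = o["up"] - o["down"]
    ratio = o["up"] / o["down"]
    print(f"net {net:+d}, up/down ratio {ratio:.1f} → {o['points']} points")
# → net +3, up/down ratio 1.4 → 615 points
# → net +15, up/down ratio 6.0 → 345 points

# Any scoring rule that grows with net votes or with the up/down ratio
# must award the 18/3 player more than the 10/7 player. These numbers
# do the opposite.
```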

Well, that leaves no doubt in my heart: the points are fake (they are calculated by some algorithm, but then adjusted as needed), and the winners are chosen by hand. A project’s ambition and productivity have little to do with its placement on the leaderboard. The moment the most productive projects became eligible to win, the scores were changed to push them down.

A note on “experts”

If you look at the story’s comments on Hacker News, you will see that multiple commenters, including the Pioneer founder, refer to the FAQ section about “experts”:

The final step in the tournament has our experts providing a final review on the top applications, based on leaderboard rankings. After their votes have been applied to the leaderboard, we select the top-scoring players as Pioneers.

Let me explain why this is not relevant to what I described above. First, whenever an expert casts a vote, the player is notified of it. You can see that expert votes had little to no effect on the scores last week (a quick check follows the list):

  • 10 upvotes/7 downvotes, no expert → 615 points
  • 10 upvotes/5 downvotes, +expert vote → 608 points
  • 13 upvotes/2 downvotes, +expert vote → 325 points
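
Lining those three cases up makes the pattern obvious. A quick check (the vote counts and points are taken from the list above; everything else is illustrative):

```python
# (upvotes, downvotes, got an expert vote?, points) for the three cases.
cases = [
    (10, 7, False, 615),
    (10, 5, True, 608),
    (13, 2, True, 325),
]

# Sort by net votes: a sane system should award points in the same order.
for up, down, expert, points in sorted(cases, key=lambda c: c[0] - c[1]):
    print(f"net {up - down:+d}, expert vote: {expert} → {points} points")
# → net +3, expert vote: False → 615 points
# → net +5, expert vote: True → 608 points
# → net +11, expert vote: True → 325 points

# A better vote balance plus an expert vote costs 7 points in one case and
# 290 in the other; expert votes clearly aren't what moved these scores.
```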

Second, Pioneer demonstrated the expert voting interface during the April tournament. Experts vote on isolated project descriptions, not on the overall state of the leaderboard. Which makes sense: you wouldn’t want a particular expert manipulating the scores to boost their favorite project, nor the expert “wars” that would result from giving experts such power.

How Pioneer might actually work

I think it works roughly like this: players feed the funnel of new participants from around the world and bubble up potentially interesting projects with their votes. Then the projects are reviewed by the tournament organizers, and only the ones they subjectively deem worthy are allowed to win. The scoring system is completely opaque, and every discrepancy is explained away with “you just don’t understand how it works”.

Should you still play? It’s for you to decide.

P.S.: If you still decide to try, here’s our referral link ;)


Loresome

Loresome is a self-improvement and motivation hub with deep game mechanics and storytelling acting as the core drivers. Check it out at https://loresome.com