Reversim Summit Moderation Process

Ran Tavory
7 min read · Feb 6, 2019


Or: How sessions are selected for the conference.

In 2019 Reversim Summit takes place for the seventh time. It is the largest non-commercial developer conference in Israel, hosting 1,500–2,000 developers and software product professionals.

As the person heading the conference since its inception, one of the questions I am repeatedly asked is:

How are sessions selected to be presented at the conference?

This question is often asked out of curiosity or a desire for self-improvement, and sometimes with frustration by those who were not selected.

Let me try to answer as openly and as fully as I can.

Background about the conference

Reversim Summit is a yearly conference for the Israeli software developer community. The conference is organized by volunteer developers and is not-for-profit.

The number of participants grows consistently, and so does the number of submissions. This is both flattering and challenging. We have a wonderful community here, and this is one more proof of that.

This post is mostly about how we deal with the (admittedly good) challenge.

First some numbers

In recent years we have seen around 400 submissions (of various types), of which we can typically fit around 50–60 sessions into our schedule (full sessions, ignite talks, etc.).

Quick math: 400/50 = 8, so at a high level only 1 out of every 8 sessions gets selected. This is tough because we're not talking about garbage submissions; the submissions are all of very high quality (and closely monitored, in real time).
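The same arithmetic as a trivial sketch in Python (the figures are the round numbers above, not exact counts):

    # Back-of-the-envelope acceptance rate, using the round numbers above.
    submissions = 400
    accepted = 50
    print(f"{accepted / submissions:.1%} acceptance, i.e. 1 in {submissions // accepted}")
    # -> 12.5% acceptance, i.e. 1 in 8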

The history

The conference started in 2013 and has gained local popularity. Over time we developed and improved the moderation process. A previous (now out-of-date) post about how the moderation process was performed in 2016 can be found here.

This year we host a team of 14 moderators from various backgrounds, including software developers with various areas of expertise, ops and security engineers, and HR professionals.

We make sure we have a good mixture of disciplines, genders, employers and experience.

What type of content are we looking for?

We are looking for inspirational content. We want you to walk out of a session saying "wow, this really blew my mind, I didn't know X could be done", where X could be a technology, a challenge, a cultural approach, a managerial approach, or something else. We focus on software developers and content that interests software developers; this content is at times highly technical, but sometimes highly cultural, managerial, or other. We're looking for a mixture of content.

How is the team assembled?

When we start planning the conference we create the leading quartet, which consists of the head of the conference (myself, in this case), the head of content (Shlomi Noach, this year as well as the previous year), the head of operations (Gilli from Outbrain), and the head of website (Neta this year).

From there we recruit the team of 12–14 moderators. We accept volunteers from the community and set expectations (making sure they can commit to this non-trivial effort), and if we're lucky enough not to have scared everyone off, we then filter based on experience in mentoring and public speaking and on professional background, making sure we have a good mixture of veteran and new moderators on board (40–60% veterans).

What is the technique/process behind the scenes?

While we do not expose the data, e.g. how many votes each session received or how close a runner-up came, we are in fact completely open and transparent about the process itself.

Submission

Submission is open to anyone. We allow up to three submissions per person, and we allow submitting in pairs.

We sometimes approach specific people and ask them to submit, but not before making it clear that soliciting a submission does not imply acceptance (and, as reality shows, it really does not).

We may sometimes encourage certain speakers who we think have an interesting story to tell, or an under-represented population which we think did not send enough submissions; but, as noted before, this is where it stops: we ask them to submit but clarify that there is no guarantee of acceptance whatsoever. We've done that enough times to know that it works.

Public votes

When the submission period ends, we open up all submissions for public voting.

The purpose of the public voting is twofold: to give us a feel for what's interesting to the community, and, no less important, to let the community take part in the process and get involved in the content early on. The conference is all about the content, and getting the community involved early is a true gain.

There are a few important things to keep in mind with regard to the votes:

  • Votes are confidential. We do not expose the results.
  • Votes are used by us to measure interest.
  • We realize there are many biases in the votes and we try to take them into consideration. To name a few: speakers from large companies often get more votes simply because their colleagues voted for them, and clickbait titles are good indicators of copywriting talent but not necessarily of good content.
  • Beyond the known biases, we also have our own content agenda, so, for example, even if the five most-voted sessions are about GraphQL, that doesn't mean we want to present five different sessions about GraphQL. It is of course likely that at least one of them will be presented.

Coupling up to review teams

When the submission period ends, we couple up moderators to form moderation teams. We try to couple veteran moderators with newer ones when possible, but this of course also depends on expertise.

Initial filter phase

Each couple typically gets about 50 sessions to review. They read the abstracts, look into each speaker's previous speaking experience (we look for both experienced and inexperienced speakers), in some cases reach out to the submitters for clarification, and eventually come up with a list of 12–14 sessions that passed this initial filter.

Each moderation team then surfaces three sessions of interest to the greater moderation crew. For example, they'd discuss their deliberation in choosing between two sessions with similar content, or whether a specific session might not be able to deliver on its objectives. This way we are able to provide guidance to each other and agree on a common baseline.

Cross-team review

Next, the rest of the team reviews: first they look into the sessions that were disqualified, to see whether any were overlooked; and second, they look at those that passed the first phase, to weigh in on which they think are the better fit.

Internal voting

We then run an internal vote on all the screened sessions.

Last filter phase

Eventually each session-owning couple looks at the feedback and the votes from the rest of the team and comes up with the short list of accepted sessions, in most cases about 5–8 sessions per track.

The head of content then reviews the lists to make sure there are no conflicts between the different teams/tracks (same speaker? repeating content?).

From that we build our schedule.
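For the curious, here is a back-of-the-envelope sketch of the whole funnel in Python. The stage sizes are the approximate figures quoted throughout this post (assuming all 14 moderators pair up into 7 couples); exact numbers vary from year to year:

    # Approximate selection funnel, stage by stage (round figures from this post).
    submissions = 400
    moderators = 14
    couples = moderators // 2            # 7 review couples
    per_couple = submissions / couples   # ~57 submissions for each couple to review
    after_initial_filter = couples * 13  # each couple keeps 12-14, say ~13 -> ~91
    final_schedule = (50, 60)            # what fits the schedule, i.e. ~5-8 per track
    print(per_couple, after_initial_filter, final_schedule)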

Guiding principles and rules of thumb

We have developed a few guiding principles, or in some cases rules of thumb, that we share with all moderators.

  • No sponsored content. This is very different from many other similar conferences. We do not accept sponsored content. It sometimes comes as a surprise to new sponsors, but this is something we absolutely insist on.
  • Public votes are useful for measuring interest, but they also suffer from a few biases. We listed some above.
  • No nepotism. A few examples:
    • If a moderator happens to get assigned a session submitted by a friend, family member, or co-worker, then the session is reassigned to the partnering moderator in the same track.
    • We don't allow submissions by moderators (we did in the past, but not in the past three years).
  • No submissions accepted after the deadline. We want to be fair to all submitters.
  • In case of a last-minute cancellation, we choose replacement sessions only from the existing pool of submissions.
  • Co-review. Just like code review, for each and every decision there would be at least one more person reviewing. For all selected sessions there would be more than two reviewers.
  • Speaker only once. We allow a speaker to speak only once at the conference, or at least we prefer it that way (this is a rule of thumb). In the past we allowed speakers more than one session, but we learned our lesson.
  • Employer once per track. Within any specific track we do not allow two or more speakers from the same employer.
  • We look for:
    • Innovative — A completely new take on a known problem, something that screams out-of-the-box.
    • Promote new speakers — Our agenda is to promote new and inexperienced speakers.
    • Mind-blowing — A sufficiently complex and interesting subject that, when described verbally, provides added value.
    • Inspiring — Something different we haven't seen before. A do-good project, or something that stands out as remarkable.
    • In-depth personal experience — An item that the speaker has deep-dived into for practical use, not for the sake of the session, and can provide meaningful input on.
  • We don’t look for:
  • Intro to X — If a Google search will provide more results than you have time to read, it’s not interesting
  • Sufficiently battled subject — “Why Should I TDD” kind of sessions
  • Purely Academic
  • Way Too Broad — example: “Comparison of JVM based Languages”
  • We try to balance between content we think will appeal to a broader audience and content that will appeal only to a niche group (e.g. an in-depth technical discussion of a specific topic).

Written by Shlomi Noach, head of content, and myself.
