Reversim Summit 2016 Selection Process
This is the fourth year Reversim Summit is taking place and this time it’s bigger than ever!
In this post we describe how sessions are selected to be presented at the conference.
Reversim Summit is a free yearly conference for the Israeli software developer community. The conference is organized by developers and is non-profit.
2016 is the fourth year that this conference is taking place, and over the years it has gotten quite big and popular. This popularity is exhibited by the impressive number (297!) of proposals submitted during the Call For Papers. Of these, only about 30 full sessions, 12 ignites, and 8 open source sessions were selected.
The selection ratio is somewhere between one in ten and one in three, depending on the topic and the session type.
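As a quick sanity check on those numbers (a sketch using only the totals quoted above; per-track ratios varied):

```python
# Overall acceptance rate, computed from the totals published in this post.
submitted = 297
accepted = 30 + 12 + 8  # full sessions + ignites + open source sessions

rate = accepted / submitted
print(f"{accepted}/{submitted} = {rate:.0%}")  # overall, roughly 1 in 6
```

The overall rate lands comfortably inside the one-in-ten to one-in-three range, since some tracks were far more competitive than others.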
The moderation team is made up of community volunteers who replied to the call for moderators I put out a few months ago. Led by Adam, Victor and Lidan, the entire team and their bios may be viewed here.
Everyone on the team comes from a technical background, with many years of experience working everywhere from tiny startups to full-size enterprises, and with diverse technologies spanning from C++ all the way to iOS, Python and Ruby, web development, ops, management and more. It is a team of highly talented developers bringing to the table knowledge and experience, in addition to passion for what we all do.
Following is a description of the selection process.
Track categorization and initial selection phase
Once the CFP was over, team members reviewed all sessions, tagged them, and classified them into tracks in order to divide the work. We intentionally did this after the submission phase, so that the submissions themselves would create the tracks rather than the other way around.
Once classification was complete, moderators matched themselves to head each of the tracks, with the task of diving into the details of each session and selecting the best 2–5 as potential candidates for acceptance (full tracks) or 8–12 (OSS and ignites).
This was by far the hardest part of the process, as there were very few, if any, bad submissions. A moderator of a full-length track had to reject roughly eight sessions for every one selected.
The tracks and moderators are:
- Frontend — Adir
- Culture — Karen
- Big Data — Shlomi
- Architecture — Yoav
- Devops — Nati
- Product — Tal
- Programming — Dina
- Testing — Gil
- Other — Lital
- OSSIL — Eran
- Ignite — Avner and Shay
A moderator assigned to a track is an experienced developer, and their personal taste is an important factor. However, as a team, we came up with detailed guidelines to help in this selection phase. The following are soft rules, but they act as a good starting point.
Qualities we looked for:
- Innovative — a completely new take on a known problem, something that screams out-of-the-box
- Promotes new speakers — our agenda is to promote new and inexperienced speakers
- Mind-blowing — a sufficiently complex and interesting subject that provides added value when described verbally
- Inspiring — something different we haven’t seen before: a do-good project, or something that stands out as remarkable
- In-depth personal experience — a subject the speaker has dived deep into for practical use, not just for the sake of the session, and can provide meaningful insights on
Warning signs that counted against a session:
- Intro to X — if a Google search provides more results than you have time to read, it’s not interesting
- Sufficiently debated subject — “Why Should I TDD” kinds of sessions
- Extremely esoteric (or unrelated to our main agenda)
- Purely academic — we should promote practicality
- Way too broad — “Comparison of JVM-based Languages”
Finally, we tried not to have too many sessions from the same speaker: preferably, just one session from each speaker.
Asking for more info from speakers
In many cases (but not all) we asked submitters to provide additional details for their sessions. Examples include:
- A bulleted list of topics
- A draft or a skeleton of the slides
- Previous speaking experience (track record)
- A five minute video
The purpose of this was to better understand the topic and/or the speaker’s level of experience and how well-versed they are in it. It was up to the moderator to decide what information was needed to get a better picture; not all speakers were asked for all of the items above.
Each track’s moderator has the superpower of selecting the sessions in their track. That said, a few other inputs were considered:
- Public (anonymous) votes on the website
- Internal team votes
- All input from the speaker as mentioned above
- Cross-validation and a second opinion from other moderators where required (e.g. if I’m moderating a track my co-worker submitted to, I need a second opinion)
- Internal team debates about specific speakers and sessions
Finally, as a team, we reviewed all selected sessions and created the agenda.
Acceptance emails were sent to accepted speakers, and rejection emails to those who were not accepted. In many cases a session was rejected not because it was lacking, but simply because there were too many proposals to choose from and we wanted to make sure the content was varied.
Thank you to all submitters
I want to take this opportunity to thank everyone who submitted a session: those who will be speaking this year and those who’ll be speaking next year ;)
We realize it takes a lot of effort and we hope you are happy with the selected content. We hope to see you at the event.
We want to hear from you!
We are working hard on making the conference better year over year by learning from our mistakes and getting feedback from the community. Getting your feedback will help us improve the process even further.
We are very much open to any feedback, please let us know what we can do better next year!