Image © Norman Posselt

How we review proposals

Gerry Leonidas
Published in ATypI notes
Oct 11, 2018


In my introduction to this year’s conference I wrote that ATypI strives to develop a diverse programme that combines established expertise and inclusive mentorship. These words represent both claims and objectives. In the context of our conference, these values are reflected in the first instance in two things: the content of the final programme, and the manner in which we arrive at it.
Up to now, we have provided only occasional explanations, with some information embedded in the instructions for proposers. We now want to be as open about our processes as possible, starting by describing the process here, independently of the annual submission and review cycle. So, here is the “two-and-a-half stage process” we use.

Stage One

Every year we get between 120 and 200 proposals. These are submitted through a system (currently Dryfta) that allows proposals to be reviewed anonymously. We aim to assign four to five reviewers to each proposal, according to the areas of expertise of each reviewer. Reviewers grade each proposal numerically, and provide short texts of commentary or feedback to the programme committee. The system calculates the average scores for each proposal, and the results together with the reviewers’ comments are compiled in a spreadsheet by our executive team.
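
To make the aggregation step concrete, here is a minimal sketch of it in Python. The data structure and field names are illustrative assumptions on my part, not Dryfta’s actual export format.

```python
from statistics import mean

# Illustrative stand-in for the review data exported from the submission
# system; the field names are hypothetical, not Dryfta's actual schema.
reviews = [
    {"proposal": "P-017", "score": 8, "comment": "Clear scope, strong examples."},
    {"proposal": "P-017", "score": 6, "comment": "Relevant, but broad for the slot."},
    {"proposal": "P-042", "score": 4, "comment": "Overlaps with an earlier talk."},
]

def summarise(reviews):
    """Average each proposal's scores and gather the reviewers' comments."""
    by_proposal = {}
    for review in reviews:
        by_proposal.setdefault(review["proposal"], []).append(review)
    return {
        pid: {
            "average": mean(r["score"] for r in rs),
            "comments": [r["comment"] for r in rs],
        }
        for pid, rs in by_proposal.items()
    }

print(summarise(reviews))
```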

This first stage involves many tens of people, putting in hundreds of person-hours to read and comment on their assigned proposals, and it is critical to the success of the programme planning. To be confident in the results of this first stage, in the spring of 2018 we set aside our old list of reviewers and compiled a new one from scratch: we now have a new College of Reviewers of around 70 people, who represent areas of expertise across the subject areas of the proposals. The members of the CoR are also people with experience of the Association, and at least some experience of attending our annual conference. The membership of the CoR is now more diverse, reflecting the evolving composition of our programmes. Above all, we now have enough trust in this important group’s evaluations to be confident about making it a public list.

Stage Two

The second stage involves a very small team, comprising members with experience in programme compilation and substantial conference attendance, plus the executive team. (In the last few years this team has included José Scaglione and myself, a member of the local team, and our executive team, currently Tamye Riggs and Liron Lavi Turkenich.) The proposals are ranked according to the reviewers’ ratings, and assembled in three groups: strong positives, strong negatives, and mixed or marginal review ratings; this last group attracts most of our attention. Since the team can now see the names of proposers, we can check whether someone has spoken at too many recent conferences, and whether a proposal corresponds to a presentation already delivered at another event. We can also compile some metrics on the balance of new vs. experienced speakers, location of speaker, gender, and so on. This is a “vertical” review of each proposal individually, and it allows us to ensure that people who may have less experience in conference participation (and therefore in writing proposals) are given the opportunity to present. Similarly, we can moderate the feedback from reviewers, who cannot be aware of a proposer’s experience and familiarity with the conventions of conference proposals.
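
As a sketch of the initial triage, the following groups the averaged scores into the three buckets described above. The cut-off values are purely illustrative, not ATypI’s actual thresholds.

```python
# Hypothetical triage of averaged scores into the three groups described
# above; the cut-off values are illustrative, not ATypI's real thresholds.
def triage(averages, low=4.0, high=7.5):
    groups = {"strong positives": [], "strong negatives": [], "mixed": []}
    for proposal, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
        if avg >= high:
            groups["strong positives"].append(proposal)
        elif avg <= low:
            groups["strong negatives"].append(proposal)
        else:
            groups["mixed"].append(proposal)  # gets the closest attention
    return groups

print(triage({"P-017": 7.0, "P-042": 4.0, "P-063": 8.6}))
```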

We also review “horizontally”, across proposals in related themes. This ensures that there is a range of topics covered, and there is parity across related themes. A typical example might be an over-supply of good proposals on a specific script; in that case we may prioritise proposals by originality, and whether a speaker has presented before (fewer times is better). We will also review proposals that appear to be by collaborators or colleagues: in such cases we will aim to select just the best of the group.
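
To illustrate that kind of tie-breaking within an over-supplied theme, here is one way the prioritisation could be expressed as a sort key. Both annotations are hypothetical committee judgements, not anything the submission system records.

```python
# Illustrative tie-breaking within an over-supplied theme: prefer more
# original proposals, then speakers who have presented fewer times.
# Both fields are hypothetical committee annotations, not system data.
candidates = [
    {"proposal": "P-003", "originality": 4, "times_presented": 2},
    {"proposal": "P-009", "originality": 4, "times_presented": 0},
    {"proposal": "P-005", "originality": 2, "times_presented": 1},
]
shortlist = sorted(candidates,
                   key=lambda p: (-p["originality"], p["times_presented"]))
print([p["proposal"] for p in shortlist])  # P-009, P-003, P-005
```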

This second stage involves a lot of discussion, checking of available work by proposers, and review of other events, presentations, and the profile of our speakers. It is at this stage that the conference programme begins to take shape, and we can identify the strands that will be covered. (For example, we can see whether we have enough good proposals for a topic to bundle them together to form a distinct track.) The work carried out in this stage is a key element of our curation of the programme to support a balanced and diverse representation, and to give a voice to new speakers. But it’s not enough — this is where the “half” stage comes in.

Stage Two-and-a-half

This stage does not apply to all speakers (hence the “half”), but it is a critical extension of our support for new speakers. We go back to proposals that reviewers may have flagged with issues, but which we have good reasons to include in the programme, and write directly to the proposer. We provide specific points of feedback, and give advice for revising the proposal. In most cases this has to do with the scale of the subject: many proposals have too much content for the time allocated, so we suggest what will make for sufficient depth without hurrying the speaker. In other cases, proposals suggest that some work will be carried out in the time between the Call for Papers and the conference, so we suggest ways to refocus the proposal on the process.

An example of a new initiative coming out of this stage was the “Type insights” strand we piloted in Montreal: we identified three proposals by young professionals on specific projects, which should be particularly helpful to those starting out. So we bundled them into a strand aimed specifically at students and beginners, and gave the presenters enough time for long Q&A sessions.

If you’re wondering about the workload: reviewers in the first stage see only a small number of proposals each. The second stage requires that all proposals are read and discussed in a series of conference calls; this takes about 56 hours for each person on the team. For the rest, it depends. But this is essential in order to produce a programme that is rich, balanced, and diverse.

Not there yet

Our system is still not where we want it to be. Firstly, the tools we use are not optimal. The information provided at the submission stage could be improved, saving frustration for proposers and additional communication for the team. There’s far too much collating and recompiling of data on spreadsheets by the executive team throughout the review and curation process. This slows things down, and keeps the team from doing more productive things.

Dryfta gives reviewers a scale of 1–10, which we have realised allows too much divergence, depending on each reviewer’s background. We are looking to move to a five-point scale, with verbal descriptors for each rating, so that there is a shared understanding of what we expect at each level.
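
For the sake of illustration, a five-point scale with verbal descriptors might look like the following. The wording is a sketch of the kind of anchors we mean, not any agreed text.

```python
# One possible five-point scale with verbal descriptors; the wording is
# a sketch of the kind of anchors meant, not the committee's agreed text.
SCALE = {
    5: "Accept: strong, well-scoped, clearly relevant",
    4: "Accept with minor reservations",
    3: "Borderline: needs committee discussion",
    2: "Weak: reject unless substantially revised",
    1: "Reject: out of scope or underdeveloped",
}
```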

Another area where we are looking to improve is the possibility of giving feedback on all proposals. At the moment reviewers write comments only for the committee. This is not a novel idea: it is common practice in peer-reviewed conferences for reviewers to write in two separate fields, one with feedback for the proposer and one for the committee.

We are also looking to improve our support for proposers before they submit a proposal, by publishing some anonymised examples and pointers to good practice. Although a good amount of information is already included in the Call for Papers, it does not seem to reach people at the right time or in the right manner. We are also discussing the possibility of “proposal workshops”, probably in some online environment. (We are aware that some proposers informally seek feedback from peers before submission, but we want to provide a framework that gives anybody access to such a resource.)

So, there you have it: this is what it takes to put an ATypI programme together. (Once we have the list of talks, there is more to do before we can publish the programme online or in print, but that is the subject of another note.)
