Theory of Model UN — Awarding Delegates

Foraging the Social Sciences — “The Optimum MUN Award Criteria”

Yuji Develle
MUN Theory
10 min read · Feb 18, 2017


Woodblock print depicting ‘blind men and an elephant’

This article is divided into four parts:

  1. It will discuss and critically analyse the results of a survey conducted among prominent chairs on the MUN circuit, answering the question: “How do you evaluate delegates for an MUN award (best, outstanding, honourable mention, or diplomacy awards)?”
  2. These results will be used to highlight significant flaws in our current appraisal methods.
  3. A new appraisal method will be presented in detail and made available for viewing and download.
  4. After field testing, critical feedback from the trials will be analysed.

This article would not have been possible without the analysis of Eugenia Caracciolo-Drudis and the participation of the following chairs in the survey:
Alexia Sideris, George Mullens, Dorota Saitzova, Joseph Carroll, Carl Giesecke, Isabel Victoria Barker, Daniel Gindis, Fahmida Faiza, Fozan Ghalib Butt, Heather Pickerell, Ibrahim El Kazaz, David-Jan Bosshaert and Peter Tse.

As a delegate, I always wanted to know what exactly was going on in my Chair’s head when they were tasked with choosing that “Best Delegate”. Initially, I thought MUN conferences might have had strict guidelines or metrics for Chairs to follow concerning awards. Later, when I came to love the wonderful ambiguity that surrounds this obscure (and taboo) topic, I believed that the lack of metrics was a way of protecting the discretion and better judgement of delegates and Chairs in MUNs.

When I began Chairing at large conferences such as LIMUN (London International Model UN) or OxIMUN (Oxford International Model UN), I came to realise that perhaps the reason behind this ambiguity was grounded in a lack of resources.

In conversations with friends with dozens of conferences’ worth of Chairing experience, we often agreed that choosing one, two or three “Best” delegates in competitive committees could be difficult, if not nerve-racking. We often disagreed on the type of delegates we were looking for: Was he/she a power-delegate, a mediator, or a diplomat? While we all had access to the same tools for analysis — our eyes, our ears and a few measly metrics — we all seemed to reach different conclusions, like the blind men in John Godfrey Saxe’s “The Blind Men and the Elephant”.

What if we found a way to give Chairs the tools they need to better decide on delegate awards?

What if delegates could finally know exactly how their Chairs define success?

What if Chairs could finally know more about the dynamics of their committee room, beyond speeches and policy?


After collecting responses from a sample of twelve prominent chairs from across the British, EU, American and South Asian circuits (listed in order of representation), we grouped the various policies into keywords. These keywords captured the different ways respondents determined awards:

  • Synonymous words were grouped: e.g. “Foreign-Policy” includes mentions of country policy, policy accuracy and foreign policy;
  • Differing approaches were separated: e.g. qualitative versus quantitative approaches to the same performance indicator. “Speeches” refers to qualitative or non-descript policies regarding speeches and rhetorical ability, whereas “Times-Spoken” refers to Chairs counting the number of times delegates have spoken in a given timeframe;
  • Differing levels of analysis were also separated: e.g. “Writing” refers to Chair impressions of a delegate’s general writing performance, whereas “DRs” or “Amendments” refer to specific elements of writing that Chairs pay special attention to when determining awards (a minimal sketch of this grouping step follows below).
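
To illustrate the grouping step, a few lines of Python can collapse raw survey mentions into canonical keywords before counting them. The synonym map here is a hypothetical fragment for illustration, not the authors’ full coding scheme.

```python
from collections import Counter

# Hypothetical fragment of a synonym map: raw survey mentions on the left,
# canonical keywords on the right.
SYNONYMS = {
    "country policy": "Foreign-Policy",
    "policy accuracy": "Foreign-Policy",
    "foreign-policy": "Foreign-Policy",
    "rhetorical ability": "Speeches",
    "times spoken": "Times-Spoken",
}

mentions = ["country policy", "rhetorical ability", "policy accuracy"]
counts = Counter(SYNONYMS.get(m.lower(), m) for m in mentions)
print(counts)  # Counter({'Foreign-Policy': 2, 'Speeches': 1})
```
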
Word Cloud based on Survey Results

Survey results showed high variance in the approaches Chairs took to evaluating delegates but, surprisingly, much more similarity in what Chairs were evaluating in the first place. The word cloud above shows the relative importance the sample placed on each keyword, with Draft-Resolutions, Speeches and Foreign-Policy leading the pack. Delegate behaviour, knowledge and times-spoken were also very popular criteria for award policy.

Surprising, given the importance conferences place on ‘diplomatic demeanour’ and ‘the spirit of compromise’, is how secondary those elements (Compromise, Atmosphere, Honesty) turned out to be (albeit still important to Chairs). That said, ‘Behaviour’ often encompasses the personality elements of “diplomatic demeanour”.

Of course, when examining Draft Resolutions, Chairs are not solely focusing on the content of the writing; they are also using the policy papers as systematic “proof” of other delegate qualities: leadership (is the delegate a sponsor?), foreign policy (are the clauses aligned with country policy?), diplomacy/lobbying (did the delegate manage a merge to fashion this DR?). Similarly, “Foreign Policy” can mean an individual delegate’s consistency with country policy, but it can also mean “the extent to which the delegate managed to keep his/her bloc’s agenda on his/her country’s terms throughout the conference”.

Furthermore, there is a noticeable divide in the ontological approaches taken by MUN Chairs (split 50/50). One side prioritises an “effects-based” approach, placing importance on the way a delegate performs and/or behaves over the course of the conference. The other side takes an “object-based” approach, focusing on what a delegate has achieved by the end of it. Some Chairs also prefer sticking to one set of criteria which they can apply systematically at every conference, while others adopt a contextual approach, citing the uniqueness of every conference’s and committee’s award policies. Finally, two Chairs mentioned sometimes offering awards to delegates who showed significant improvement, moving into the realm of meta-contextualism.

While each approach has its costs and benefits, the vast majority of the award policies sampled share a set of significant flaws that this article hopes to address by introducing an award policy founded in theory and aligned with the award policies of leading MUN Chairs. We will justify every criterion in the next parts of the article and give you an opportunity to test our criteria at your next conference (should you want to try it out).

Why do we not have the tools?

By “tools” we principally refer to objective and preferably systematic ways to evaluate delegate performance. Elements outside this definition will have an asterisk beside them.

The tools Chairs often use to appraise their delegates

In addition to the flaws identified in the table, our lack of tools contributes to a set of serious errors related to bias and blindness. Temporal bias holds an important place here: delegates often criticise Chairs whose award decisions rely too heavily on delegate performance in a single session, most often the 4th session near the end of the conference. Others will point to the fact that Chairs have, after the 1st session, subconsciously chosen their power-delegates and favoured those delegates throughout the conference. These errors are principally due to a lack of structure.

If Chairs were incentivised to treat each stage of MUN debate as distinct, they would be less inclined to make such cognitive mistakes. Of course, any system would have to apply a greater weight to performance in Sessions 3, 4 and 5, since debate is usually more difficult in the latter stages of a conference.
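
One possible way to encode that structure is to score each session separately and combine the scores with heavier weights on later sessions. A minimal sketch, with purely illustrative weights (no conference prescribes these):

```python
# Illustrative session weights: later sessions count more, so no single
# session (first or last) can dominate the overall impression.
SESSION_WEIGHTS = [0.5, 0.75, 1.0, 1.25, 1.5]  # Sessions 1-5 (hypothetical)

def weighted_session_score(session_scores: list[float]) -> float:
    """Weighted average of per-session scores."""
    weights = SESSION_WEIGHTS[:len(session_scores)]
    return sum(w * s for w, s in zip(weights, session_scores)) / sum(weights)

print(weighted_session_score([6, 7, 7, 8, 9]))  # late surge rewarded -> 7.75
```
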

Linked to the previous point, existing metrics can usually be grouped into inputs and outputs. Apart from qualitative “impressions”, Chairs have no way to evaluate delegates on their work besides whether they managed to produce great writing, speeches, or get their name on a Sponsor’s list. There is currently no way, apart from the subjective practice of ethnographic observation, of judging the process of coalition building, mergers, bloc dynamics, lobbying and other elements underlying the practice of this political melee. A delegate should be rewarded for his/her ability to brave the tormented waves of committee.

We should value speaking more, especially in more competitive environments. If we have a writing-based or output-based appraisal system, delegates will focus their efforts on Unmod and backroom negotiations, using speeches simply as megaphones for what is happening outside committee. Placing importance on the quality of speeches will incentivise delegates to put more content into their speeches, improve their use of rhetorical devices and keep things fact-based. This will, in turn, bring power-delegates back into committee and move the centre of attention back to where Chairs hold observatory power.

Our System

To avoid the pitfalls above, Eugenia Caracciolo-Drudis and I sat down together to devise an optimal delegate appraisal system able to cover the elements Chairs are looking for in delegates in a systematic, repeatable and equitable manner. The system covers, inter alia, the elements scored in the Metascore breakdown below.

In addition to these elements, Speech Quality, a key factor in maintaining quality control and oversight in the committee room, acts as a multiplier (hence taking the most important place in the Metascore). After long hours of discussion between Gigi and me concerning the essence of speech quality, we boiled it down to one important criterion: originality.

We believe Chairs should look for originality in delegate speeches for two reasons: the instrumentality of speeches and the nature of originality. Firstly, speeches are principally used instrumentally, to achieve the benchmarks mentioned above. They can help a delegate promote a Working Paper to committee, or call out the points of an opposing bloc. However, facts and originality are not prerequisites to successful speeches in the instrumental sense of the term.

It is thus important for Chairs to account for speech originality as well as instrumentality. Secondly, “originality” refers to the delegate’s ability to use speeches to push debate forward, to suggest innovative solutions aligned with foreign policy, and to find novel ways to compromise. Originality also refers to a delegate’s MUN acumen when it comes to motions and points of information. All in all, the question is: To what extent is the delegate using his/her speaking time to heighten the quality of debate in the committee room?

The Delegate Metascore

“The Delegate Metascore” = (PS + RS + CS + MS + WS + AS) • SQ • OC

where the highest score wins a Diplomacy Award or Best Delegate Award. The Metascore is made up of the following elements:

  • Preparation Score = (Position Paper Grades) • (Correlation Score)
  • Reception Score = (Number of Positive & Neutral Substantive Citations)
  • Coalition Score = (Analyse Movement of Power Dels into DR Blocs) • 1.25
  • Merger Score = (Relative Bloc Strengths Use Decision-Tree Paths to Score) • 1.5
  • Writing Score = (5x5 DR Quality Table Score Out of 25) • 1.5
  • Amendment Score = (+/- 5pt. scale based on Amendment’s Impact) • 1.5
  • Speech Quality Score = (Notable Speeches in each Session)
  • + Optional Coefficient = (if you want to reward improvement or other circumstances, you can do so using this optional coefficient)
Note* Mentions have to be substantive (i.e. they must refer to actual points made by the referent)
Note* Sometimes Coalition Scores cover early-stage Mergers
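
As a sanity check on the arithmetic, here is a minimal sketch of the Metascore in Python. It reads the formula literally, treating the Amendment Score as a single additive term; the “Lowest/Highest Possible Score” examples below split it into AS1 and AS2 instead, so treat the component layout here as one possible reading rather than a fixed spec.

```python
from dataclasses import dataclass

@dataclass
class DelegateScores:
    preparation: float       # PS: position-paper grade x correlation score
    reception: float         # RS: positive & neutral substantive citations
    coalition: float         # CS: power-delegate movement into DR blocs, x1.25
    merger: float            # MS: decision-tree path score, x1.5
    writing: float           # WS: 5x5 DR quality table (out of 25), x1.5
    amendment: float         # AS: +/- 5 pt. amendment impact, x1.5
    speech_quality: float    # SQ: multiplier based on notable speeches
    optional_coeff: float = 1.0  # OC: optional improvement coefficient

def metascore(s: DelegateScores) -> float:
    """(PS + RS + CS + MS + WS + AS) * SQ * OC"""
    base = (s.preparation + s.reception + s.coalition
            + s.merger + s.writing + s.amendment)
    return base * s.speech_quality * s.optional_coeff
```
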

What do the scores look like on a scorecard?

Example scorecard

Above is a scorecard measuring the performance of a hypothetical delegate at an MUN conference. The delegate did relatively well on his/her position papers (B+ → 7 pts.) and managed to bag 9 citations while presenting positions in the Working Paper phase (hence 9 pts. in Reception Score). Perhaps aided by the previous phase, he/she went on to become a Sponsor in a Draft-Resolution-forming Bloc (25 pts.) but only succeeded in merging with one of the two Blocs his/her Bloc wanted to merge with (15 pts.). The Draft Resolution was decent overall and incorporated many of the points raised in committee. Furthermore, it was well aligned with the delegate’s foreign policy goals, which warrants a high correlation score (hence 9.8 pts. in Preparation Score). The amendment phase was rough for the delegate, however: despite some important friendly amendments, he/she succumbed to a minor unfriendly amendment that passed. The delegate’s speeches were exceptional during two sessions and of quality throughout, earning a Speech Quality coefficient of 1.7x. Overall, a very decent score of 140.76 points.
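
The arithmetic checks out against the sketch above. Note that the text fixes only the base sum (82.8) and the 1.7x coefficient; the split between Writing and Amendment below is a hypothetical one consistent with the stated total.

```python
# Reproducing the example scorecard. The Writing (22.5) and Amendment (1.5)
# values are a hypothetical split; only their sum (24) is implied by the text.
base = 9.8 + 9 + 25 + 15 + 22.5 + 1.5   # PS + RS + CS + MS + WS + AS = 82.8
total = base * 1.7                       # SQ = 1.7x; OC not taken
print(round(total, 2))                   # -> 140.76
```
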

Lowest Possible Score? (for a delegate on the Chair’s radar)

PS (1) + RS (0) + CS (0) + MS (0) + [WS (7.5) / AS2 (5)] = 2.5
2.5 • SQ (1) • OC (not taken) = 2.5 points in total

Highest Possible Score? (for a delegate on the Chair’s radar)

PS (15) + RS (10) + CS (25) + MS (30) + [WS (37.5) / AS2 (1)] + AS1 (12.5) = 130
130 • SQ (2) • OC (not taken) = 260 points in total
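
Read literally, the bounds above split the Amendment Score into an additive bonus (AS1) and a divisor applied to the Writing Score (AS2). A quick check of both bounds under that reading:

```python
# Lowest bound: minimal preparation, no coalition or merger activity,
# weak writing divided by the worst amendment penalty, SQ floor of 1.
lowest = (1 + 0 + 0 + 0 + 7.5 / 5) * 1               # -> 2.5
# Highest bound: every component maxed, no amendment penalty (AS2 = 1),
# full amendment bonus, SQ ceiling of 2.
highest = (15 + 10 + 25 + 30 + 37.5 / 1 + 12.5) * 2  # -> 260.0
print(lowest, highest)
```
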

If you are interested in testing out this methodology, we have an Excel spreadsheet that we will be using during LIMUN 2017 to field-test it.

If you wish to take part in the field trials, please contact me. We would love to have more testing and more opinions on the conduct of the trials.

Yuji Develle

Founder of @WonkBridge | Follow me on Twitter: @YDevelle