Game of Grants

Dingle
11 min read · Aug 24, 2022


Photo by K. Mitch Hodge on Unsplash

“Your application has been unsuccessful”: five words no academic (or job seeker) ever wants to hear, especially after giving their all to the application process. Yet unlike a job seeker, who can request direct feedback on their application and interview from the employer, academics receive no such support from grant agencies. They are met with the same message every time: “due to the large number of applications, individual feedback is unavailable.” In a system where open knowledge is being widely encouraged, this form of knowledge gatekeeping is unacceptable.

When discussing open knowledge (OK) in higher education, there has been a great deal of focus on teaching, also covered in OKHE Topic 2, and on research outputs, covered well in Nick Turnbull’s blog post. However, there has been far less discussion of the grant-writing process: the feedback applicants receive, and what determines success or failure.

Photo by Finn Hackshaw on Unsplash

In order to understand this in more detail, it is first important to ask: what is OK? OK is a wide-ranging concept with many different definitions. That of the Open Knowledge Foundation is relatively broad: “Open data and content can be freely used, modified, and shared by anyone for any purpose”. This definition encompasses three key features of openness:

  • Availability and access — data should be wholly available and easily accessible, and in a modifiable format
  • Reuse and redistribution — permission should be given with no or little restrictions
  • Universal participation — everyone should be able to access and use the data, without any discrimination

When considering the grant process from an OK viewpoint, it is important to keep these features in mind and to reflect on whether grant agencies stick to the very concepts they encourage others to adopt.

At this point it is important to note that sharing specific grant applications submitted to research agencies is not feasible under current sharing policies: their contents are quite rightly confidential and contain information that, if shared, would be detrimental to the author's future research. The focus of the discussion here is on the information generated from and about a submitted grant application (whether successful or not), and how that information (which is currently not shared) could improve the grant-writing process for everyone, but especially for those new to academia.

UK Research and Innovation (UKRI) is pushing for open research and recognises that “transparency, openness, verification and reproducibility are important features of research and innovation.” Yet when it comes to grants submitted to UKRI, openness seems to be all but forgotten. If we consider the above three features, we quickly falter at the first hurdle: data associated with a submitted grant is not wholly available and easily accessible to the author (or anyone else, for that matter). Akin to a job seeker repeatedly failing with applications without guidance or support on how to improve, how can academics be expected to improve without informed and direct feedback from the grant agencies themselves?

Grant success is directly linked to promotion criteria, academic performance, and national and international standing, much like publishing academic research. Obtaining grants creates opportunities to develop and grow (through direct supervision of new hires, etc.), allows new equipment to be purchased that will benefit a research group over long periods, and leads to new and exciting research outputs, patents and innovations. All of these benefit not only the direct recipients (or new hires) of a successful grant but also the university, the wider scientific community, government and the economy. Many grants now also include funds to host meetings, conferences and other collaborative events to share and disseminate findings, further supporting an open research environment. So shouldn’t we be doing everything we can to make the grant process as open and transparent as possible, so that all applications can be judged equally and fairly? In such a system of openness, grants would be judged purely on their scientific content and merit, not on superficial issues such as formatting or style, which can only benefit everyone.

I’ve come to realise that the grant process is more like a game, where you don’t know the rules — so sometimes you win and sometimes you lose.

Photo by Andrey Metelev on Unsplash

When you start out on the grant-writing journey, full of optimism, you will often be guided to more senior academics who seem able to write successful grants in their sleep. Often their feedback involves phrases such as “I’m not sure what made that grant a success/failure”, “you should try a shotgun approach” or my all-time favourite, “it’s a lottery”. I’ve come to realise that the grant process is more like a game where you don’t know the rules, so sometimes you win and sometimes you lose. Playing the game more frequently can improve your chances of winning, but you still don’t know the rules or exactly what it takes to win. Experienced players (senior colleagues), who have both played the game more in the past and can try more times per year, are more successful. The only person who knows how to win is the ‘gamemaster’; in this instance, the grant agency, e.g. UKRI. If the gamemaster lets you play but offers no guidance or support when you fail, they are gatekeeping that knowledge, in direct contrast to an open research approach.

What this means is that you rely on other ‘players’ to guide you and pass on their own self-discovered tips and tricks. Of course, you take any and all feedback offered at this stage, and all of it comes from a desire to help, but it relies on the kindness of senior colleagues (the more successful players of the grant game). The support they offer is not recorded in their own academic performance models, nor does it come with any direct reward. In an academic environment that is already highly demanding of an academic's time, senior colleagues often simply don’t have the time to offer the best long-term support to new grant writers. This means that those starting to write grants are frequently left to their own devices and, because they don’t know the rules of the Game of Grants, are often faced with failure and disappointment. This can be directly linked to why many leave academia for greener, and more supportive, pastures elsewhere.

Photo by the blowup on Unsplash

“Failing is learning” is a basic philosophy that rests on the principle that failing is not bad; it is an opportunity for personal growth and learning. This is something colleagues will often tell you when you receive an unsuccessful grant email, perhaps best summarised by Thomas Edison: “I have not failed. I’ve just found 10,000 ways that won’t work”. However, a key part of the failing-is-learning concept is feedback. Studies have shown that when positive feedback is provided, people learn better for the next attempt. The importance of feedback has already been recognised by large international organisations; Netflix, for example, has a strong culture of feedback and has developed its own ‘4A guidelines’. Within higher education, feedback has also been shown to be important for student learning. Providing formative feedback during courses gives students opportunities to learn from their mistakes before summative assessments, and there is evidence that letting students view their exam scripts after an exam helps them improve for the next one. If we can clearly see the benefits of feedback in a range of situations, including within academia, why are we still not implementing it for successful and unsuccessful grant applications?

Photo by Jacqueline Munguía on Unsplash

Not every grant can be successful, so failure is of course to be expected, but when that failure is accompanied by direct, useful feedback on how to improve next time, it is significantly less soul-destroying. It gives grant writers something to focus on, something to improve in the next application, and direct comments to discuss with colleagues, who can then offer more guided support with the limited time they have. Even when a grant is successful, feedback is important to reinforce good practice for future applications and still provides an opportunity to learn.

The support grant agencies currently provide focuses on the preparation of a grant, i.e. page limits and formatting, plus feedback on whether the research focus matches their call. During and after the grant, academics are told to support an open research approach when publishing and disseminating new findings, all for the betterment of science and innovation. However, that middle ground of trying to obtain a grant is not currently supported by an OK approach.

One way to try to learn more about the Game of Grants is to join grant review panels. The grant review process relies on these panels, typically composed of senior academics from a wide range of disciplines, who often already know how to play the game. Each grant is championed by an academic, and its merits and shortcomings are discussed before all are ranked. This is a case of the “rich get richer”, here in terms of both knowledge and grant income: senior academics who can write many successful grants a year gain further understanding of the game through these panels and are thus rewarded with more grants. It’s a self-supporting cycle that discriminates against the inexperienced academic. Such panels, which take place behind closed doors for only the privileged few, also do not fit with the idea of OK or open research practices. Do they need to be conducted behind closed doors? There is an argument that this protects the confidential ideas and information contained within the grants; however, sharing information and knowledge from such events is achievable without sharing any of that confidential material.

It is often thought that making successful (and potentially unsuccessful, although this is trickier) grants available, either to everyone or via a university’s intranet, supports those looking to write grants. This is well explained by Alick Deacon in his blog post. While general concepts and approaches can be gleaned from these shared documents, they are often unhelpful. Can a successful grant in Egyptology help someone writing a grant on nuclear fusion? Even within two closely related (or the same) fields, the difference between a successful and an unsuccessful grant is never clear, because the gamemaster hasn’t provided their own feedback on each application. It’s just a case of experienced players leading inexperienced players while everyone hopes for the best; a guessing game as to what worked, what didn’t, and whether that was really what made the difference between success and failure.

Photo by Olesya Grichina on Unsplash

So what can be done? Grant agencies need to start providing direct feedback from grant panels to applicants. They cannot continue to hide behind the “too many applications” statement. Real resources need to be shifted towards making this a reality and bringing such information into an open research format. Learning from others, e.g. Netflix, and implementing their own version of the 4A’s of feedback would be a great start. Feedback is of great use not only to the author of the specific grant being discussed but also to their colleagues, universities and the wider academic community. This in turn benefits the wider economy and society.

Even if feedback were provided only for grants that went to panel, it should be relatively simple to convert the notes taken during these sessions into feedback. Each grant is discussed and then ranked, meaning that what made it good or bad is discussed and recorded somewhere (you would hope!). Sharing this would demystify the process for those unfamiliar with it. It would also mean that comments made during the panels would need to be well considered and justified, as they would be provided to the applicant rather than disappearing into the ether. Providing the right feedback is just as important as providing some feedback, so effort would be needed to ensure feedback was useful and constructive rather than merely generic.

These sorts of comments could, over time, be collated by institutions into a resource forming the basis of a Game of Grants walkthrough of sorts. This resource, in keeping with the ideals of OK, could be widely shared and made available to anyone interested in the typical comments made during grant panels and how to avoid common pitfalls. It would need to fit with David Wiley’s 5 Rs of openness:

1. Retain — the right to make, own, and control copies of the content

2. Reuse — the right to use the content in a wide range of ways

3. Revise — the right to adapt, adjust, modify, or alter the content itself

4. Remix — the right to combine the original or revised content with other open content to create something new

5. Redistribute — the right to share copies of the original content, your revisions, or your remixes with others

Another option would be to include more junior academics on grant review panels, allowing many to gain experience of the inner workings of panels and so write more successful grants in the future. Seeing how things work internally, as well as seeing a relatively large number of grants in a short time frame, would allow them to develop rapidly and improve their own grant-writing process. This learning-by-doing approach would unfortunately be limited to a handful of candidates per year, but with rotations it could be mutually beneficial to both the academics and the grant agency: the academics gain an accelerated learning process, and the grant agencies gain panels populated by a more diverse range of academics who can offer a more rounded viewpoint on each grant.

Photo by Kenny Eliason on Unsplash

Concluding thoughts. Obtaining research income from successful grants is a key metric for most academics and is directly linked to career advancement. However, learning to write successful grants is a challenge faced by everyone who starts on the academic track. The process is currently poorly supported at all levels and is significantly restricted by the “rules of the game” being kept secret: feedback and support on specific grants are withheld by the grant agencies, and an individual's development is restricted in turn. If the key concepts of OK and open research can be applied to feedback on grant applications, there is potential for a wide range of benefits. No one is expecting a cheat sheet for the Game of Grants, but a widely shared walkthrough would certainly help others on their own paths of self-development and learning, and that would benefit everyone.
