An AI startup’s experiences of applying for government funding
In this post, I’ll share lessons learned from an unsuccessful bid for innovation funding to develop some of the core features of Scholarcy, the AI research tool that reads papers and reports for you
In the previous post, I wrote about how I built and deployed the initial version of Scholarcy for very little financial outlay. While this was enough to get a useful product out there that addressed a real problem, I knew that in future I would need to invest R&D funding to improve on this, particularly in the areas of summarisation and automatic table identification and extraction.
With that in mind, a few months ago I responded to an InnovateUK open call for proposals for emerging and enabling technologies with a bid to build on the existing Scholarcy service and improve its large-document summarisation engine.
The application process
The online application form is divided into 10 sections. You can write a maximum of 400 words in each, although some sections allow you to upload supporting material as an appendix. Each section has guidance notes with a number of points that must be addressed — which can be difficult to do in 400 words!
Remember, the assessors only have what you have written to judge the application. They can’t know what is going on in your head. Keep this in mind when reviewing the feedback: there were some responses where I thought ‘they don’t get what I’m trying to do’ — but ultimately that’s my fault for not communicating clearly enough.
In the sections below I’ll briefly describe what I wrote for each section, along with the reviewer responses.
1. Need or challenge
While there are many tools to help you find and organise research papers, there are currently few tools that help people actually read, understand and contextualise the full text of the document in front of them. Most reviewers agreed with the need to make it easier to read research papers and long reports in general, both for academics and lay audiences. There were a couple of unexpected comments though, such as ‘this application only looks at paper or paper equivalent content whereas most content is video’ and ‘Scopus already provides a summary of the article’ (perhaps they meant the abstract?).
2. Approach and innovation
I wrote that there are currently no tools that will read and analyse the full text of any research report in real time. However, most reviewers thought the project was not sufficiently innovative. One said that using a corpus to learn a summarisation model ‘has been around since the Sixties’. Others wanted a better description of which machine learning approaches and which datasets would be used to build a summarisation model for research papers. Another objected on the grounds that the summaries currently generated by Scholarcy ‘were not genuine summaries’. But that was exactly why I was seeking funding: to improve them.
3. Team and resources
As a sole applicant and founder, I fell down here. Even though I stated how I would use the money to build a team, a common reviewer objection was that ‘this project is essentially based on one individual’. They were concerned that my need to hire a data scientist to help build the models was a risk.
4. Market awareness
I detailed three key markets: edtech, publishing, and industry. A couple of reviewers suggested there may be other markets. Also, I only addressed the market size for two of the three; more detail was needed here.
5. Outcomes and route to market
I really fell down here. I was going for a freemium model but did not describe in enough detail how this would work and how much revenue this would raise. I did not adequately discuss channels or pricing, or provide a clear route to profitability, or timelines for ROI. In hindsight, this section really needs to be written in conjunction with a marketing professional.
6. Wider impacts
I suggested that the project would improve people’s productivity, reducing the time taken to read and understand complex reports, and also open up complex research to a wider audience and so increase understanding of science by the public and media. However, reviewers wanted to see some quantification of these benefits, which I did not provide.
7. Project management
I produced a Kanban plan here but this was not sufficient. Reviewers thought the plan was well structured, but task interdependencies, milestones and performance measures were not well defined.
8. Risks
I classed all the risks identified as low or medium risk. These included things such as loss of expertise due to departure of key staff, or that the project fails to deliver expected benefits. Reviewers suggested that I should have included items that were high risk, and also considered commercial risks such as data loss due to theft or hacking.
9. Additionality
This section requires you to describe how an injection of public funding will impact the project. I explained how public funding would reduce the risk of failure and make us faster to market. However, reviewers were looking for how public funding would impact future commitment to R&D. Again, they were also looking for quantification of the difference that public funding would make.
10. Costs and value for money
Because I was not clear on the commercial potential of the project in section 5, some reviewers flagged that the value-for-money proposition was unclear.
Lessons learned
Overall, my application scored 66%, with an average reviewer score of 7/10 for eight of the sections, and a 5 and a 6 for the other two. So not a complete disaster for a first attempt. The reviewer responses were mostly very helpful, and from those responses, here are some lessons that may help towards a successful bid in future:
Have a team. I was a sole founder looking for funding to build a solution, which would mean hiring people to help. This did not fly with the reviewers. You need to have people in place already or identify who you will be working with. This goes for writing the proposal too: ideally, it should be written by a team that includes technical, project management and marketing expertise. Additionally, consider hiring a professional funding application agency to help.
Be clear about where the innovation is. You can’t know in advance what assumptions the reviewers might have, so you need to detail how your project differentiates itself from your competitors. Don’t be tempted to cover too much ground; focus on one or two areas of clear innovation.
Already have funds in place to cover 30% of the project costs. This was a bit of a Catch 22 for me. InnovateUK will fund up to 70% of the project costs and will pay quarterly in arrears. You need to say how you will meet your share of the costs. So I bid for an amount relative to the 30% that I knew I could afford. But, you also have to bid for enough money to make the project a success. One reviewer suggested the budget was underestimated — the implication being that I should have bid for more.
To me, it feels that with the way this is assessed, only companies that can already afford to fund much of the research themselves will meet the criteria. I really think that there should be another option for early-stage startups without match funding.
Have a laser-like focus on your market. Be clear about who your market is and how your project solves a clear problem. I realised I was too vague in places and was not clear about who the tool was aimed at for the purposes of this funding proposal. In one section I was talking about the utility of the tool for researchers, in the next, for lay audiences to help in the public understanding of science, and in other places, about compliance and regulatory documents.
Be clear about the market opportunity addressed by your product. Don’t just consider the overall market size. For example, will your product be used by 5% of the market? Are there other markets you haven’t considered?
Have a clear, quantified route to market with timelines. What are your distribution channels and how much revenue will each bring? How will you access the market?
Quantify, quantify, quantify. Any statement, claim or benefit must be quantified. I thought the ‘Wider impacts’ section was more about qualitative benefits, but the reviewers wanted figures. It wasn’t clear which figures they wanted, but probably they were looking for numbers on how many people would benefit or have their productivity improved. The same goes for the ‘Additionality’ section: you need to quantify the difference that public funding would make to the project.
Have a clear risk management strategy. Reviewers want to see projects that involve a high degree of uncertainty, so be clear about the risks and how they will be mitigated or controlled.
So, would I apply again to future funding calls? Yes, I would, but as part of a larger team, with help from a professional bid writer, and with match funding already in place. In the meantime, other funding options for science startups are available.
If I were to choose just one lesson, it would be this: have a team who can help. With this in mind, I’m absolutely thrilled now to have Emma Warren-Jones of Edible Content on board as Scholarcy’s content marketing partner. Emma brings many years’ experience in the edtech and publishing industries and will be key to our future success.
For those looking to apply to future funding calls, I hope you find this post useful, and I also suggest reading the excellent article by Sense Media on lessons learned from their unsuccessful bids for funding.