I work in the area of eHealth. In our department, we develop digital interventions (websites, apps) that encourage behaviour change, with a view to improving health. I absolutely love it: technology is so ubiquitous these days that it seems the ideal way to reach people with health promotion. Your doctor can advise you during a 10-minute appointment, but your smartphone is in your pocket almost 24/7.
The problem is, academia isn’t the only profession that’s picked up on this. Plenty of others want a share of the market: ‘health and fitness’ apps make up a huge proportion of the apps in the iTunes App Store. For quite basic tools (e.g. tracking or monitoring devices), the user’s own judgement is probably sufficient to decide whether an app is worth using. However, when people invest in these apps and expect a result (e.g. quitting smoking), it seems imperative that there is at least some evidence of efficacy. Otherwise, users may be wasting their time, and potentially their money.
This is exacerbated by the fact that such apps aren’t currently regulated. The FDA recently began to regulate medical device apps (e.g. for checking blood pressure) to ensure accuracy; however, health promotion apps do not fall within this remit.
Take, for example, smoking cessation apps. A quick search for ‘quit smoking’ on the App Store yields 363 results. While some of these apps are based on evidence-based methods (e.g. NHS Smokefree), a content analysis of stop smoking apps (Abroms et al., 2011) concluded that:
‘iPhone apps for smoking cessation rarely adhere to established guidelines for smoking cessation’
It’s not just guidelines that apps fail to adhere to; they often don’t fully reflect or encompass widely supported behaviour change theory either. A 2012 review of health and fitness apps on the App Store (West et al., 2012) concluded that while apps contained certain theoretical factors that are important for changing behaviour (e.g. providing information), they were lacking in others (e.g. reinforcing positive behaviours). Interventions that incorporate behaviour change theory more strongly are more likely to be effective, so it is important that app developers take such theory into account.
While things may have improved since then, this exemplifies the need to ensure apps are evidence-based. One solution is for academics and researchers working in the area of behaviour change and health promotion to develop the apps themselves. As I stated earlier, this is exactly the area in which I currently work. It means that extensive knowledge about health and behaviour can be channelled into the intervention, and that experience with research methods can be applied both to developing an evidence-based product and to evaluating it to ensure it is effective.
My issue is that, as it stands, this is not an ideal solution. While the academic approach is incredibly rigorous (grounded in evidence, theory, and fieldwork), it has a major downfall: being rigorous, and gathering that evidence and data, takes time.
Within academia, the journey from idea to a product deemed ready for market takes around 7 years, due to the time needed to assemble a strong team, apply for and obtain funding, develop the product, and recruit sufficient participants to evaluate it in a randomised controlled trial. When competing with the fast-paced world of technology and software development, this is simply too long. It has two consequences: (a) there will most likely be others who can get their competing products onto the market faster; and (b) by the time your product does reach the market, it will probably be somewhat dated, and may no longer work well on current systems and devices.
I am far from the only (and by no means the first) person to come to this conclusion. In my own academic circles, we have been discussing these issues for quite a while. Timothy Baker and colleagues recently published a paper outlining these very problems and offering potential solutions (Baker et al., 2014). To name a few examples:
- Use other methods of evaluation besides randomised controlled trials (e.g., factorial designs, small sample studies, experimental studies)
- Measure more proximal outcomes (e.g. mediators of behaviour change, such as self-efficacy)
- Evaluate the impact of general eHealth approaches that can then be applied across interventions (e.g. a standardised method of increasing motivation)
One recommendation that particularly caught my eye was helping consumers to identify quality. This is essential: it is impossible to stop non-evidence-based interventions from being released, so we have to educate consumers and assist them in making good choices when selecting a health promotion app. Baker et al. point out that informing consumers of the findings of formal evaluations would be incredibly difficult, and I agree. Not only are such findings often not straightforward, but the evaluations themselves often aren’t carried out (as they take a lot of time). An alternative is suggested, whereby apps are instead assessed against ‘transparency criteria’, such as who developed and funded the app, the evidence base behind it, and the methods used to ensure its information is accurate. The authors recommend that research organisations conduct these reviews with government funding, using a standardised rating tool. Reviewed apps and websites could then display their approval rating, allowing consumers to make informed judgements.
I think this model is fantastic, and there are already organisations attempting something like it. For example, Mindapps (disclaimer: I act as an unpaid consultant for this organisation) publishes professional and public reviews of mental health apps on its website, providing a resource for consumers to get some idea of quality. I am not currently aware of a similar initiative for health promotion apps, but if one is out there, someone please enlighten me!
The problem is that reviewing all health apps (or even just the most popular ones) is a huge undertaking. Having worked with Mindapps, I am fully aware of how much work they put into wading through the ever-growing list of apps. At present, reviews are quite qualitative in nature; the use of a standardised rating tool (as recommended by Baker et al.) could streamline the process, although I imagine it’s still easier said than done.
Essentially, there are four take-home messages here:
- Researchers and academics should find ways to evaluate interventions more swiftly, so they can be put into use more quickly
- Software developers working outside of academia should seek consultancy from experts in the area, to ensure their ideas are evidence-based (even if they don’t adopt the lengthy development and evaluation processes of academia)
- Consumers should be critical when selecting an intervention, and use websites like Mindapps to gain information regarding the quality of apps
- Researchers should consider undertaking brief evaluations of apps and making these publicly available, seeking to expand on the work already being conducted by organisations such as Mindapps.
Abroms, Lorien C., et al. “iPhone apps for smoking cessation: a content analysis.” American Journal of Preventive Medicine 40.3 (2011): 279-285. http://www.ncbi.nlm.nih.gov/pubmed/21335258
West, Joshua H., et al. “There’s an app for that: content analysis of paid health and fitness apps.” Journal of Medical Internet Research 14.3 (2012): e72. http://europepmc.org/articles/PMC3799565
Baker, Timothy B., David H. Gustafson, and Dhavan Shah. “How Can Research Keep Up With eHealth? Ten Strategies for Increasing the Timeliness and Usefulness of eHealth Research.” Journal of Medical Internet Research (2014): e36. http://www.jmir.org/2014/2/e36/