Thanks so much for the thoughtful feedback. You’ve raised some truly excellent points.
The good news is that we are aligned in many ways. Omada Health is just as interested in, and determinedly working toward, the ‘gold standard’ VDPP RCT as you are. Our 1- and 2-year papers, as well as our health economic work and low-resource outreach, are stepping stones toward that goal. I think the fact that we put so much effort into ‘traditional’ peer-reviewed research and publications should allay any fears of this article downplaying their importance.
With regard to your question of whether this article suggests peer-reviewed clinical research is no longer needed once payment-based outcomes exist: absolutely not.
I’m highlighting payment-based outcomes as a fantastic mechanism, one that creates complete internal alignment around developing the systems and practices that fully utilize outcomes-based data to optimize a digital intervention.
The ideas presented above seek to highlight the power, and the potential new pace, of behavioral science discovery facilitated by the tools of data science — a complement to the long history and established methodology of RCTs, one that may also mitigate some of the well-known ‘cons’ of RCT methods (without in any way claiming those cons outweigh the pros!).
We also agree that the differential efficacy of virtual vs. in-person DPP is yet to be fully understood. In fact, I can imagine a future where the virtual and in-person programs work hand in hand: the in-person DPP offered either as a preference for those who want face-to-face interaction, or for key phases of a program that have the largest impact in that environment.
What I am suggesting is that, by aptly applying the data-science toolkit of analytics, machine learning, and experimentation to this data, we have the opportunity to create a new kind of DPP built on the virtual platform: a program designed by following the trail of outcome-based breadcrumbs left by quick, iterative experimentation — a “super-charged” flavor of experimentation only possible with a firehose of behavioral data and a system built to capture, process, and learn from it.
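To make the idea of outcome-driven iteration concrete, here is a toy sketch of one common approach, an epsilon-greedy experiment loop that shifts participants toward whichever intervention variant is producing the best observed outcomes. The variant names and response rates below are entirely invented for illustration; this is not Omada’s actual system, just a minimal example of the general technique.

```python
import random

def epsilon_greedy(variants, outcome_fn, rounds=1000, epsilon=0.1, seed=0):
    """Repeatedly assign a variant, observe an outcome, and favor
    the variant with the best average outcome so far, while still
    exploring alternatives with probability `epsilon`."""
    rng = random.Random(seed)
    counts = {v: 0 for v in variants}    # times each variant was assigned
    totals = {v: 0.0 for v in variants}  # summed outcomes per variant
    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts.values()):
            choice = rng.choice(variants)  # explore
        else:
            # exploit: pick the variant with the highest observed mean
            choice = max(
                variants,
                key=lambda v: totals[v] / counts[v] if counts[v] else 0.0,
            )
        counts[choice] += 1
        totals[choice] += outcome_fn(choice, rng)
    return counts, totals

# Hypothetical engagement probabilities for three message variants.
def simulated_outcome(variant, rng):
    p = {"A": 0.30, "B": 0.35, "C": 0.25}[variant]
    return 1.0 if rng.random() < p else 0.0

counts, totals = epsilon_greedy(["A", "B", "C"], simulated_outcome)
```

In a real program the simulated outcome would be replaced by an actual behavioral signal (a weigh-in, a lesson completed), and the loop would surface promising variants as hypotheses for more rigorous study rather than as final answers.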
If you’ll indulge me a bit, I do believe the effects of an individual’s social ecology/network can at least partially be measured at the individual level. In fact, so much data science work these days focuses on network interactions and influences that it sometimes seems hard to pull data scientists away from the Facebooks and dating apps to use their graph/network-analysis powers for good! Of course, practically incorporating this data in a medical context comes with its own stickiness (e.g. “Log in with your Facebook account to view your test results!” … cringe). But there is growing evidence that patients are willing to share data when the benefits are made clear.
You astutely point out that Omada’s population is a self-selected data set subject to loss to follow-up. I’d first note that behavioral modification will, for the most part, always be (and should be) a self-selected intervention. That said, our commercial population stretches from the young (18) to the very old (90+), with 40k+ participants coming from all walks of life, geographies, socioeconomic statuses, and ethnicities. These participants are not incentivized to participate, and the program is provided at no cost through their health plans or employers. If our most ‘general’ population is people with some diabetes or heart disease risk who are at least curious about lifestyle change, I think we’re getting pretty close :)
Additionally, in traditional studies, lost to follow-up means exactly that — participants vanish, never to be heard from again. A digital approach gives us ways to stay in contact with participants whether or not they are actively engaging with the program, and lets us react and be available if we receive signs that they might be ready to engage again. Our scales are the most obvious example, becoming part of our participants’ lives and passively collecting longitudinal weight data each time they step on. Often we don’t have to follow up with our participants; they follow up with us.
Overall, I think we’re very much aligned. I see “super-charged RCTs” as an ally to traditional methods — a tool for data science and digital health product teams to quickly get at the core of what works, and what doesn’t, when deploying their intervention. Properly collecting, structuring, and analyzing these unprecedented levels of data can bypass some (emphasis on some) of the traditional challenges in generalizing trial interventions to broader populations. Rather than being limited to theory-based behavioral science to guide our discovery process, we have the ability to quickly test and iterate, using the results as hypotheses to build upon at unprecedented velocity.
Or, to put it more simply, I believe Omada can be both “based in the best of behavioral medicine” and, simultaneously, use digital tools to boldly push into new frontiers of behavioral medicine discovery.
Thanks for engaging with us so astutely on this topic. Let’s hope we bump into one another in a (near-term) future where our mutual concerns have been emphatically addressed — and our common hope for the potential of digital behavioral interventions, exceeded.