Top 10 Principles for Advancing the Science of Digital Health

Brennan Spiegel, MD, MSHS, MPH.

There are pervasive claims that digital health innovations are positively transforming healthcare (a notable example). Yet despite this enthusiasm among technophiles, there are very few peer-reviewed, methodologically rigorous studies addressing the accuracy and impact of digital devices in clinical care (with notable exceptions; example). The NIH recently convened a workshop to examine the role of mobile and personal technologies and to develop a blueprint for future research. Our research laboratory at Cedars-Sinai also studies whether and how digital health improves patient outcomes. We develop and test wearable biosensors, mobile health (mHealth) applications, and other digital devices to understand whether and how they provide value in the clinical trenches – both in everyday practice and in clinical trials. We routinely struggle with how best to validate these devices. I have previously written about how difficult this work can be, what pitfalls we routinely encounter, and why it is harder than we ever thought it would be (see here and here).

At the UCLA School of Public Health I teach a class called “Health Analytics,” in which we discuss the science of digital health. We teach our students how to scientifically evaluate digital health innovations and determine their value – an important skillset for future healthcare leaders. Here I describe the “Top 10” guiding principles we cover in the class, with a focus on how to advance the science of digital health.

  1. Approach digital health like any other biomedical advance – with rigorous research that respects the null hypothesis.

    We need rigorous, hard-fought, meticulous, sufficiently powered, controlled trials to figure out whether digital interventions work. This is no different than for any other biomedical advance, whether cancer chemotherapy, biologic therapy, invasive procedures, or anything else in medicine. We need lots and lots of data, of sufficient quality and quantity, to determine whether a digital intervention is worth our time and money. Our lab’s approach at Cedars-Sinai is to subject our homegrown sensors and apps to rigorous research, and then publish the results in peer-reviewed journals (examples).

    For example, when I first developed the idea for AbStats, the only FDA-cleared wearable to date for digestion monitoring, I wasn't thinking about FDA clearance. I knew virtually nothing about the regulatory requirements of biomedical devices. I just wanted the thing built, as fast as possible, and rigorously tested in patients. As a research scientist this is all I know. I need to see proof – evidence – that something works. That requirement should apply to my own research as much as anyone else’s work in digital health.

    In light of this background, I was thrilled to see the broadly publicized negative trial of remote monitoring with smartphone-enabled biosensors vs. usual care published by Eric Topol and his group. I wasn’t thrilled because remote monitoring failed; I was thrilled because we can learn from the failure and try, try again.

    When I teach my students at UCLA, I always emphasize the profound nature of the null hypothesis – it’s a bedrock principle of clinical research and biostatistics. Under the null hypothesis, we enter an experiment assuming the intervention will be no better than the control. We expect failure, and are surprised when something works.

    We need to celebrate negative findings in digital health studies because they can be just as important as positive findings – if not more so. In digital health, we want to know which apps don’t work, which e-messages miss the mark, which sensors are irrelevant, which digital diagnostics are unrevealing, unhelpful, or even harmful, and anything else that may be terrifically non-contributory. Rigorous, peer-reviewed, controlled research is the time-tested way to determine what works, and what doesn’t.
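
    The phrase “sufficiently powered” has concrete arithmetic behind it. As a minimal sketch (in Python), here is how one might estimate the sample size for a two-arm digital health trial; the effect size, alpha, and power below are illustrative assumptions, not values from any particular study.

    ```python
    # Minimal sample-size sketch for a two-arm trial, using statsmodels.
    # All parameters are illustrative assumptions.
    from statsmodels.stats.power import TTestIndPower

    effect_size = 0.3  # assumed standardized difference (Cohen's d) -- hypothetical
    alpha = 0.05       # two-sided significance level
    power = 0.80       # chance of rejecting the null if the effect is real

    n_per_arm = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=alpha, power=power,
        ratio=1.0, alternative="two-sided",
    )
    print(f"Participants needed per arm: {n_per_arm:.0f}")  # roughly 175
    ```

    Note how quickly the required sample grows as the assumed effect shrinks – one reason underpowered digital health pilots so often produce ambiguous results.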

  2. Move the needle on outcomes that really matter.

    For digital health to truly impact healthcare, we need it to move the needle on outcomes that really matter. Examples include patient-reported health-related quality of life (HRQOL), symptom severity, satisfaction with care, resource utilization, hospitalizations, readmissions, and survival. For diabetes, it means reducing glucose levels over time. For rheumatoid arthritis, it means reducing fatigue and morning stiffness. For irritable bowel syndrome, it means reducing abdominal pain and bloating. And so on. In all cases, digital devices must make a difference that makes a difference; they must improve clinical outcomes that matter.

    Here’s the litmus test I teach my students: When you read a study about a digital health intervention, look at the primary outcome measure and ask yourself if it’s clinically important. If you’re not sure, then ask patients with the condition under study and see if they agree that it’s important. If it is not important, then you probably don’t need to read any further.

  3. View digital health through the eyes of patients and providers in the clinical trenches.

    This is a corollary of #2. When I learn about a new digital health solution, particularly apps and devices, I ask whether patients and providers were involved in its development. Our digital health team has repeatedly learned this lesson: unless a digital solution passes the patient and provider litmus test, it won’t get very far. We need input from everyone: patients, doctors, nurses, pharmacists… anyone and everyone who will be touched by or actually touch a digital innovation.

    When we built AbStats, for example, patients were crucial to our design process. We went through a series of form factors before settling on the current version of the sensor. At first, the system resembled a belt with embedded sensors. Patients told us they hated the belt. We tweaked and tweaked, and eventually developed two small sensors that adhere to the abdomen with Tegaderm. The input from nurses was also vital to get things right; they were quick to tell us when the devices were too hard to use or too difficult to clean. We needed all that input to make headway. Look for evidence that a digital device incorporates systematic and formal input from end users.

  4. Focus on digital solutions that provide actionable data.

    Digital interventions should pass the “so what” test. How will the results be employed in the clinic? How can we act on the results? The data should guide specific clinical decisions based on valid and reliable indicators.

    For example, when we developed our mHealth app for patients with gastrointestinal ailments, called My Gi Health, we learned that doctors needed the app to help them in the clinic. They wanted it to “interview” the patients, collect their data, and auto-compose a physician-grade history that could be presented within the electronic health record. The resulting history of present illness, or HPI, would have immediate relevance in the clinic, and would allow doctors to make better decisions in partnership with their patients. So that’s what we sought to achieve. The result was a pair of peer-reviewed articles showing how the app outperforms doctors at taking an actionable history (here and here). But publishing articles isn’t enough; we need to see whether this is reproducible in other healthcare environments and whether it improves diagnostic decision-making, so we’re working on that now.

  5. Emphasize the context and hyper-personalization of digital health.

    Digital health is more of a social and behavioral science than a technical one, so building an app or device is just the beginning. To make inroads against chronic diseases like diabetes, heart failure, or obesity, we need to change behavior. We already have billions of sensors in our bodies; the issue is whether we heed their clarion call to action. Often we don’t, even though we know better.

    I am heavily influenced by Joseph Kvedar’s work at Partners HealthCare. Dr. Kvedar’s team not only builds and tests digital interventions, but also determines how to optimize apps and sensors within a biopsychosocial framework. His recent book, The Internet of Healthy Things, is a must-read for understanding why digital health is essentially a behavioral science. Kvedar’s team doesn’t just personalize its apps; it *hyper*-personalizes them. By integrating everything from time of day, to step counts, to the local weather, to levels of depression or anxiety, Kvedar’s apps send pinpoint messages to patients at the right time and right place. As a result, they are making headway on some of medicine’s hardest behavioral challenges. We should all take note of this and look for it in research studies.
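
    To make the idea concrete, here is a toy sketch (in Python) of the kind of context-aware triggering described above. The inputs, thresholds, and messages are entirely hypothetical illustrations of the pattern, not Kvedar’s actual logic.

    ```python
    # Toy sketch of "hyper-personalized" nudging: combine context signals
    # (time of day, activity, weather, mood) to decide whether -- and what --
    # to message. All thresholds and messages are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Context:
        hour: int           # local time of day (0-23)
        steps_today: int    # step count from a wearable
        raining: bool       # local weather feed
        anxiety_score: int  # 0-21 self-report, GAD-7 style

    def pick_message(ctx: Context) -> Optional[str]:
        """Return a tailored nudge, or None if no message is warranted."""
        if ctx.anxiety_score >= 15:
            # High distress: don't nag about activity; offer support instead.
            return "Rough day? A two-minute breathing exercise may help."
        if 17 <= ctx.hour <= 19 and ctx.steps_today < 4000:
            if ctx.raining:
                return "Rainy evening -- how about 10 minutes of indoor stretching?"
            return f"You're at {ctx.steps_today:,} steps. A short evening walk could close the gap."
        return None  # silence is also a design choice

    print(pick_message(Context(hour=18, steps_today=2500, raining=False, anxiety_score=4)))
    ```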

  6. Target solutions that deliver health economic value to the healthcare system.

    Digital health solutions should provide economic value to health systems; they should be cost-effective compared to usual care. This is the tallest, yet most important, research hurdle to clear. For a hospital or insurer to pay for a digital solution, no matter how inexpensive, it should not only improve health outcomes but also reduce resource utilization. It’s all about “juice for squeeze”: the upfront cost of the intervention should be offset by downstream savings. As more and more digital health solutions roll off the assembly line, we need to see them subjected to formal health-economic analyses.
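
    The “juice for squeeze” arithmetic can be written down directly. Below is a minimal sketch (in Python) of an incremental cost-effectiveness ratio (ICER) calculation; every figure is a hypothetical placeholder, not a result from any actual evaluation.

    ```python
    # Minimal cost-effectiveness sketch: digital intervention vs. usual care.
    # All figures are hypothetical placeholders.
    cost_usual, cost_digital = 12_000.0, 12_800.0  # mean cost per patient ($)
    qaly_usual, qaly_digital = 0.70, 0.74          # mean quality-adjusted life-years

    delta_cost = cost_digital - cost_usual         # $800 incremental cost
    delta_qaly = qaly_digital - qaly_usual         # 0.04 incremental QALYs
    icer = delta_cost / delta_qaly                 # dollars per QALY gained

    WTP = 100_000.0  # a common (and debated) US willingness-to-pay per QALY
    verdict = "cost-effective" if icer <= WTP else "not cost-effective"
    print(f"ICER: ${icer:,.0f}/QALY -> {verdict} at ${WTP:,.0f}/QALY")
    ```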

  7. Digital health research should acknowledge the “messiness” of the clinical environment by employing more pragmatic trial designs.

    The clinical trenches are messy, gray, indistinct, dynamic, and emotional; injecting technology into that environment is exceptionally difficult. Digital health is a hands-on science, so researchers should turn to the clinical trenches and test their devices within the context of everyday care, not only in highly controlled explanatory trials.

    In contrast to traditional randomized controlled trials (RCTs), which are principally designed to explain whether and how a treatment works (sometimes by setting up a straw-man competitor in a highly selected and non-generalizable patient population), pragmatic clinical trials (PCTs) are designed to answer practical clinical questions within “real-life” environments. A typical PCT focuses on the risks, benefits, and costs of competing therapies within the context of usual practice settings. Whereas explanatory RCTs tightly restrict nearly all aspects of a trial, PCTs are more relaxed in their approach and include a broad range of patients from diverse settings. PCTs may even allow for different patterns of care within a study arm. This laxity makes the scientist in us a little jittery, yet PCTs are designed to emulate clinical reality. And clinical reality is usually messy, non-linear, and erratic. Said another way, the only thing that really resembles a traditional RCT is the RCT itself – reality is something different altogether. Digital health research should include more PCTs.
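
    One way to see why explanatory and pragmatic estimates can diverge is a quick simulation: the same underlying treatment effect, diluted by the imperfect adherence and heterogeneous patients a PCT deliberately includes. The sketch below (in Python) uses made-up parameters purely for illustration.

    ```python
    # Illustrative simulation: one true treatment effect, estimated in an
    # explanatory RCT (near-perfect adherence, homogeneous outcomes) vs. a
    # pragmatic trial (realistic adherence, noisier outcomes).
    # All parameters are made-up illustrations.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    true_effect = 5.0  # symptom-score improvement in fully adherent patients

    def estimate_effect(adherence_rate: float, outcome_sd: float) -> float:
        """Treated-arm mean minus control-arm mean."""
        adherent = rng.random(n) < adherence_rate
        treated = true_effect * adherent + rng.normal(0, outcome_sd, n)
        control = rng.normal(0, outcome_sd, n)
        return treated.mean() - control.mean()

    print(f"Explanatory RCT estimate: {estimate_effect(0.95, outcome_sd=5):.2f}")
    print(f"Pragmatic trial estimate: {estimate_effect(0.55, outcome_sd=10):.2f}")
    # The pragmatic estimate is smaller and noisier -- closer to what the
    # intervention will actually deliver in everyday care.
    ```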

  8. Digital devices should issue timely data with readily interpretable visualizations.

    Apps and biosensors will only be clinically effective if they provide timely and easily interpretable data. Data should be delivered at the right time, in the right place, and with the right visualizations. For example, we spent months trying to figure out how best to visualize the data from AbStats. I'm still not sure we've got it right. This stuff takes so much work. In my class at UCLA we extensively review the science of data visualization and discuss effective vs. ineffective data displays. In the realm of digital health science, there should be more research into which visualizations work and for whom. The data needs of a provider often differ from those of a patient; focus groups and cognitive interviews can suss out what’s best.
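
    As a small illustration of tailoring the same data stream to different audiences, the sketch below (in Python, with simulated data standing in for a real sensor feed – not actual AbStats output) renders a clinician-facing time series alongside a patient-facing summary.

    ```python
    # Sketch: one simulated sensor stream, two visualizations for two audiences.
    # The data and reference ranges are placeholders, not real AbStats output.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    hours = np.arange(0, 24, 0.25)  # one reading every 15 minutes
    signal = 10 + 3 * np.sin(hours / 3) + rng.normal(0, 1, hours.size)

    fig, (ax_md, ax_pt) = plt.subplots(1, 2, figsize=(9, 3))

    # Clinician view: the dense time series, with a reference band for context.
    ax_md.plot(hours, signal, lw=0.8)
    ax_md.axhspan(8, 12, alpha=0.2, label="reference range")
    ax_md.set(title="Clinician view", xlabel="Hour of day", ylabel="Events/min")
    ax_md.legend()

    # Patient view: a single summary number with a plain threshold line.
    ax_pt.bar(["Today"], [signal.mean()])
    ax_pt.axhline(12, ls="--")
    ax_pt.set(title="Patient view", ylabel="Average events/min")

    plt.tight_layout()
    plt.show()
    ```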

  9. Question extreme claims about digital health – in both directions.

    Both the digital health enthusiasts (i.e. the “Technophiliacs”) and naysayers (i.e. the “Technoskeptics”) are prone to making extreme claims. Look for these extremes and question them. The truth is usually somewhere in the middle. I’ve made extreme claims myself. When MobiHealthNews published its list of “77 of 2015’s most interesting digital health quotes,” I occupied the pole position with this sentiment:

    "The clinical trenches bear almost no resemblance to what’s being talked about. There are almost no examples of mobile health apps or wearable sensors that are being used at scale. I do not want all that information, and I’m a technophile."

    This statement might be perplexing when taken out of context. If I’m such a technophile, then why don’t I want all the information provided by wearable biosensors or mHealth apps? If the clinical trenches bear no resemblance to what’s being voiced in the digital health echo chambers, then just what is the disconnect?

    I need to remind myself to avoid using absolute language.

  10. Stay positive.

    These are terrific times for digital health. We should stay positive and assume the future is bright if we do things right. For outstanding examples of positive-leaning yet balanced digital health influencers, I suggest checking out these opinion leaders: David Albert, Maneesh Juneja, Berci Meskó, Sherry Reynolds, David Shaywitz, Lisa Suennen, Bob Wachter, John Torous, and Arshya Vahabzadeh, among many others. The recent list of “Top 100 Influencers in Digital Health,” published by Onalytica, is another good resource to consider.
