Are today’s wearables showing us the forest or just the trees?
Limitations of the current generation of quantified self apps and devices
At the end of 2015 I decided to look back on the year to see if I could learn anything of value. I am not a prolific poster on social media, so reviewing the year that way didn’t tell me much. I do, however, track a fair bit of data from various sources: fitness trackers, sleep trackers, driving behavior, and the like. I also have an autistic son, and we record a lot of information about his behavior, sleep, eating habits, medications, and therapies in an effort to better understand and help him.
So I went and looked at the totals for the year.
For me, it looked like this:
- 4 million steps (wow)
- 9000 flights of stairs
- 7.1 avg. hrs sleep
- 773 miles run
- 16,478 miles driven
- 14 lbs weight change (down, up, down again)
For my son it looked like this:
- 618 meltdowns/tantrums (ouch)
- 262 acts of aggression
- 9.3 avg. hours of sleep (min 3.5, max 12)
- 5 medications used/tried
- 5 lbs weight change (up, as you might expect)
This is interesting, and many of the apps I use have nice graphs and tools to slice the data. But what struck me was the limited use of this data. I already know I don’t get enough sleep (my body tells me all too often). I also know I have a rubbish commute and spend too long in the car.
As for my son, my wife and I are well aware of the frequency of his meltdowns and aggression — we live our lives in a high state of alert, anticipating the next instance.
So while it’s great that I have this info collected by wearables, connected devices, dongles, and manual entries into Google Docs, and it’s helpful to see trends over time, we could be making better use of this data. I’m not knocking the value of what we have now: it’s great to see how my runs this year compare to last, or to get motivation to hit 10K steps every day. But that is just the first step.
We seem to be at the point now where we should be able to use this data to drive deeper understanding, and more importantly, action. How do these data points interact and what conclusions can I draw? Is there a relationship between my sleep and my running? More compellingly, is there a relationship between my son’s medications and his behavior? If I can answer those questions, then I can change my behavior or my son’s intervention plan to actually improve our lives.
In the market today, we have a diverse range of sensors and apps with some ability to talk to each other and share data, but no good way to blend this data to create information that is more than the sum of the parts.
Last year I bought a promising-sounding sleep tracker that advertised tracking both sleep and the sleeping environment: the levels of noise, light, and temperature in the room. I was excited, only to find that the device and its software told me about my sleep, and separately what my room was like when I went to bed. For me this totally missed the mark. What I had hoped for was the ability to correlate the environmental data with the sleep data: when I woke in the night, was it because it got loud, or the heat kicked on, or someone turned on a light? That level of blended data might help me make actual changes to improve my sleep, rather than just thinking “uh huh, I slept badly again” and rarely opening the app after that.
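Blending those two streams wouldn’t take much. Even a simple rule that scans the environment log in the minutes before each recorded wake-up would go a long way. Here is a minimal sketch of that idea — every timestamp, threshold, and sensor reading below is made up for illustration, not from any real device:

```python
from datetime import datetime, timedelta

# Hypothetical wake-up events detected by a sleep tracker.
wake_events = [datetime(2016, 1, 10, 2, 15), datetime(2016, 1, 10, 4, 40)]

# Hypothetical environment log: (timestamp, noise in dB, room temp in F).
env_readings = [
    (datetime(2016, 1, 10, 2, 13), 62, 68),  # a noise spike
    (datetime(2016, 1, 10, 3, 30), 35, 68),  # quiet, normal temp
    (datetime(2016, 1, 10, 4, 38), 38, 74),  # warm -- heat kicked on?
]

NOISE_THRESHOLD_DB = 50   # illustrative cutoffs, not calibrated values
TEMP_THRESHOLD_F = 72
WINDOW = timedelta(minutes=5)

def likely_causes(wake, readings):
    """Return environmental anomalies logged shortly before a wake event."""
    causes = []
    for ts, noise, temp in readings:
        if wake - WINDOW <= ts <= wake:
            if noise > NOISE_THRESHOLD_DB:
                causes.append("noise")
            if temp > TEMP_THRESHOLD_F:
                causes.append("temperature")
    return causes

for wake in wake_events:
    print(wake.strftime("%H:%M"), likely_causes(wake, env_readings))
```

With real data the thresholds and window would need tuning, but even this crude join turns two disconnected reports into something you can act on.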
In traditional healthcare, it’s even worse (although slowly improving). For years I have tracked my cholesterol numbers in a spreadsheet and graphed them so I can see at a glance whether they are getting better or worse. Recently my doctor finally got a website where I can see my results after each six-month test, but sadly (and kind of amazingly), each report is a separate file, so there is no way to see the results over time. What would make a lot of sense would be for me to merge my collected fitness activity, weight, and diet data with my cholesterol test results, so that I could see the impact of my lifestyle on my cholesterol.
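The merge itself is not hard once the data is in one place. As a sketch, here is a toy example pairing two lab results with the average daily step count between them — all of the dates and numbers below are invented for illustration, not my actual results:

```python
from datetime import date

# Hypothetical lab results: (test date, LDL cholesterol in mg/dL).
lab_results = [
    (date(2015, 6, 1), 131),
    (date(2015, 12, 1), 118),
]

# Hypothetical daily step counts (just a few sample days).
daily_steps = {
    date(2015, 7, 15): 12000,
    date(2015, 9, 1): 9000,
    date(2015, 11, 20): 11500,
}

def avg_steps_between(start, end, steps):
    """Average daily steps over the interval between two lab tests."""
    counts = [n for d, n in steps.items() if start < d <= end]
    return sum(counts) / len(counts) if counts else None

for (d0, ldl0), (d1, ldl1) in zip(lab_results, lab_results[1:]):
    avg = avg_steps_between(d0, d1, daily_steps)
    # avg is known to be non-None here because sample days fall in range.
    print(f"{d0} to {d1}: LDL change {ldl1 - ldl0:+d} mg/dL, "
          f"avg steps {avg:.0f}")
```

A real version would pull each stream from its own app or portal, which is exactly the integration that doesn’t exist today.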
The next generation of activity-tracking apps and devices should take this next step and begin evaluating the information being gathered. I’d like to see apps encourage the creation of plans or hypotheses, and then measure the collected, consolidated data against those hypotheses to show progress or evaluate success.
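The kind of hypothesis check I have in mind could start very simply. This sketch tests a made-up hypothesis — “I sleep better on days I run” — against a week of invented data (none of these numbers are real measurements):

```python
# Hypothetical week of data: whether I ran that day, and hours slept
# that night. All values are made up for illustration.
days = [
    {"ran": True,  "sleep_hrs": 7.8},
    {"ran": False, "sleep_hrs": 6.9},
    {"ran": True,  "sleep_hrs": 7.5},
    {"ran": False, "sleep_hrs": 6.6},
    {"ran": True,  "sleep_hrs": 7.2},
]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Hypothesis: "I sleep better on days I run."
run_day_avg = mean(d["sleep_hrs"] for d in days if d["ran"])
rest_day_avg = mean(d["sleep_hrs"] for d in days if not d["ran"])
supported = run_day_avg > rest_day_avg

print(f"run days: {run_day_avg:.2f} h, rest days: {rest_day_avg:.2f} h")
print("hypothesis supported" if supported else "hypothesis not supported")
```

A shipping app would want far more data and a real statistical test before drawing conclusions, but framing the feature as hypothesis-and-evidence is the point.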
As we collect more and more data through sensors and apps, and get better access to existing data like our medical records, the opportunity for insights will continue to grow. But we need better tools to blend data from multiple sources into shared platforms where it can be analyzed together. Clearly the bigger players are moving in this direction, with Apple and Google both offering health platforms. And Apple’s announcement a few months ago of CareKit seems like a good first step on this road in the healthcare space. It has an activities tab (where you might record medication taken, or therapy, or treatments) and a symptom tracker where you can record symptoms and how you are feeling. An analytics dashboard then blends this data for insights into how the treatment impacts the symptoms.
It feels like we are very close to the next phase, where all this data that is becoming easier and easier to collect moves from mild interest, bragging rights, or personal goals into real, actionable insights. But software and hardware developers are going to need to change their mindsets about the goals of the data they collect, and provide tools and data-management abilities that let regular people (who are not data scientists) make use of it. They will need to think about the benefits of letting users merge their data from multiple apps and devices easily, so that we can start to build a holistic picture rather than leaving this data on its own island, serving only one purpose. And the data needs to be usable and meaningful, not just a flood that will overwhelm users and caregivers. I, for one, am eagerly anticipating this next phase.