A year ago I gained 20 pounds in a few short weeks.
My friends noticed. My parents noticed. My 8-year-old brother noticed.
You know who didn’t notice?
Fitbit.
Why should Fitbit notice?
I paid them to. $129.95 plus shipping for the Fitbit Aria scale. I used it almost every day for two years. It knew my weight, BMI, height, age, and more. When I gained 20 pounds in ~6 weeks, my Fitbit should have been in panic mode.
“Adil, what are you doing? Hello? Stop it!!”
But no—Fitbit sat on my data and lazily stuffed it into a dashboard.
It’s hard to pin down exactly what the problem is here. Is it Fitbit’s job to prevent me from gaining weight? No. Did Fitbit make a promise it didn’t keep? No—in fact, Fitbit’s marketing page for the Aria promises exactly what the product delivered: it recorded my weight and showed it to me in a dashboard.
What Fitbit could have done was notify me of a worrisome trend and help me reverse it.
This is a huge missed opportunity.
Companies that are sitting on mountains of user data—like Fitbit—have the chance to go above and beyond in serving their users.
Wearable health companies aren’t the only ones who can do this. The same applies to companies in education, finance, and more. Their products can use data to help their users make better decisions that improve their lives.
Let’s call this concept a positive intervention. A positive intervention is when a product uses data to guide its users to better decisions.
There’s a similar school of thought called anticipatory design. Anticipatory design encourages eliminating decisions entirely, thus reducing decision fatigue.
Designing positive interventions is different from anticipatory design. The goal isn’t to use data to eliminate decisions—the goal is to use data to help users make better-informed decisions. Automating decisions can be a plus, but there are things software can’t automate for the user—like making healthy lifestyle choices. That’s where a positive intervention comes in: it uses data to raise a red flag and push users in the right direction.
I’ll illustrate with some examples.
Say you work at a wearable company, like Fitbit, Misfit, or Withings. At a bare minimum, you can access how much your users move every day. You may also know your users’ heart rate, sleep duration, sleep quality, bedtime, wake time, height, weight, BMI, age, gender, and location.
What positive interventions can you design with this data?
If a user’s weight (or their daily activity, sleep, etc.) starts trending in the wrong direction, first send them a notification and confirm that it’s a bad trend. This can be worded in a way that isn’t insulting or embarrassing to the user—like so:
“Hey Adil, I noticed your weight has been increasing the last few weeks. Would you say this is a healthy, desired increase, or an unhealthy change you’d like to reverse?”
Or:
“Hey, Adil. I noticed your weight has increased recently, but your heart rate and step count seem OK. Would you say this is increased muscle mass or is this unhealthy weight?”
If the user says the weight is healthy or desirable, then the interaction ends there. If not, then the scale can begin giving them tips on how to reverse that trend.
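Here’s a minimal sketch of what that trend check could look like: a simple least-squares fit over recent scale readings, with a threshold for “worrisome.” The data shapes, the 1-pound-per-week cutoff, and the wording are my assumptions, not anything Fitbit actually does.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class Reading:
    day: date          # date of the weigh-in
    weight_lbs: float  # weight recorded that day

def weekly_trend(readings: list[Reading]) -> float:
    """Least-squares slope of weight over time, in pounds per week."""
    xs = [(r.day - readings[0].day).days for r in readings]
    ys = [r.weight_lbs for r in readings]
    x_bar, y_bar = mean(xs), mean(ys)
    slope_per_day = (
        sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
        / sum((x - x_bar) ** 2 for x in xs)
    )
    return slope_per_day * 7

def maybe_check_in(readings: list[Reading]) -> str | None:
    """Return a gentle check-in message if weight is climbing fast."""
    if len(readings) < 14:
        return None  # too few points to call it a trend
    if weekly_trend(readings[-42:]) > 1.0:  # more than ~1 lb/week, recently
        return ("Hey, I noticed your weight has been increasing the last "
                "few weeks. Would you say this is a healthy, desired "
                "increase, or an unhealthy change you'd like to reverse?")
    return None
```

A pound a week for six weeks is exactly the kind of drift a dashboard hides and a notification surfaces.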
If the product has a companion smartphone app, it should use the user’s location to recommend a local park or gym where they can exercise. Go even further by integrating with their calendar and showing them times they’re free to go. (Google Calendar does this now! It’s almost embarrassing that activity-tracking apps didn’t do it first.)
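Once you have the day’s busy blocks, finding open workout slots is plain interval arithmetic. A sketch (in a real app the busy intervals would come from a calendar API; here they’re passed in directly):

```python
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end, min_length=timedelta(hours=1)):
    """Return gaps of at least min_length between busy (start, end) intervals."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start - cursor >= min_length:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_length:
        slots.append((cursor, day_end))
    return slots

# Two meetings leave gaps big enough for a workout:
day = datetime(2016, 5, 2)
busy = [(day.replace(hour=9), day.replace(hour=12)),
        (day.replace(hour=14), day.replace(hour=17))]
print(free_slots(busy, day.replace(hour=8), day.replace(hour=20)))
# -> gaps at 8:00-9:00, 12:00-14:00, and 17:00-20:00
```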
Don’t worry about coming off too strong. Users bought your product for a reason!
If a user is consistently beating their goals (such as activity goals, sleep goals, or weight goals), suggest intelligent and realistic adjustments that make their goals more challenging. If they’re consistently failing their goals, first make them more attainable, and then incrementally make them more challenging.
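The adjustment policy can start embarrassingly simple. One hypothetical version for step goals, where the 80% thresholds and 10% nudges are illustrative rather than anything a real product ships:

```python
def adjust_goal(goal: int, daily_steps: list[int]) -> int:
    """Nudge a daily step goal based on the last two weeks of results.

    Hypothetical policy: raise the goal 10% if the user beat it almost
    every day, lower it 10% if they missed it almost every day.
    """
    recent = daily_steps[-14:]
    hits = sum(1 for steps in recent if steps >= goal)
    if hits >= 11:               # beating the goal ~80% of days
        return int(goal * 1.10)
    if hits <= 3:                # missing the goal ~80% of days
        return int(goal * 0.90)
    return goal                  # close enough; leave it alone
```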
Don’t just reward users for all-time records (“most steps taken in a day!” or “10,000 lifetime miles!”); reward them for trending in the right direction (“you’ve been more active every week for the last three weeks, keep it up!”). Improvement is not all about record-setting. In my experience with fitness trackers, I was more likely to set an activity record on a random day without meaning to than to set one through healthy activity habits.
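Detecting that kind of streak takes only a few lines, assuming activity is already rolled up into weekly totals:

```python
def improving_weeks(weekly_totals: list[int]) -> int:
    """Count consecutive weeks, ending with the latest, that beat the week before."""
    streak = 0
    # walk backwards through (earlier week, later week) pairs
    for earlier, later in zip(weekly_totals[-2::-1], weekly_totals[::-1]):
        if later > earlier:
            streak += 1
        else:
            break
    return streak

if improving_weeks([20000, 21500, 23000, 24800]) >= 3:
    print("You've been more active every week for the last three weeks, keep it up!")
```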
Most health companies write educational material to bolster their content marketing efforts. Reuse that content by sending relevant articles to your users when their data indicates they would benefit from it.
A user isn’t sleeping well? Don’t just show them a sleep score. That doesn’t mean anything to anyone. Automatically send them your blog post filled with tips on improving sleep quality instead.
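The matching logic can be as simple as a list of rules. A sketch with made-up field names, thresholds, and article URLs:

```python
# Hypothetical rules mapping a user's recent averages to existing blog posts.
ARTICLE_RULES = [
    (lambda u: u["avg_sleep_hours"] < 6.5,  "/blog/better-sleep-tips"),
    (lambda u: u["avg_daily_steps"] < 5000, "/blog/easy-ways-to-move-more"),
    (lambda u: u["avg_resting_hr"] > 80,    "/blog/resting-heart-rate-101"),
]

def relevant_articles(user_stats: dict) -> list[str]:
    """Pick already-written content that matches what the user's data shows."""
    return [url for matches, url in ARTICLE_RULES if matches(user_stats)]

print(relevant_articles({"avg_sleep_hours": 5.9,
                         "avg_daily_steps": 8200,
                         "avg_resting_hr": 72}))
# -> ['/blog/better-sleep-tips']
```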
As an undergraduate at Carnegie Mellon I used an (aptly named) web app called Student Information Online (SIO). SIO knew all of the classes I was taking and my grades in each. It also had access to the directory of classes offered at my school, which classes each professor taught, and so on. Despite all this information, SIO was a total letdown. It did nothing but show my courses and the school course directory as a list of links.
Say you’re working on SIO. What positive interventions could you include?
Based on the student’s major, GPA, past courses, and the courses available at the school that semester, SIO should suggest classes it thinks the student may enjoy. I don’t know whether SIO’s grade data is currently organized by class, professor, and/or student, but implementing this would pose relatively few technical challenges—nothing a CMU computer science wizard couldn’t do. Bonus points if Carnegie Mellon incorporates results from Faculty Course Evaluations into SIO; past ratings would make its class suggestions even more powerful.
With access to students’ grades, SIO can predict how they might perform in a class—before the student even signs up for it. This could be done by comparing the student’s past grades in similar classes with how similar students performed in the class in question. Netflix already does this with movies and TV: it predicts how I’ll rate a movie based on my previous ratings and what other users think. While Netflix has far more user data than Carnegie Mellon does, even a loose grade prediction, like “expect between an A and a B,” would be helpful. It would also help prevent students from overloading on multiple extremely difficult classes at once.
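Even a crude version of that prediction fits in a few lines. Here’s a sketch using a similarity-weighted average; it assumes transcripts are available as a mapping from student to course grades, which may not match how SIO actually stores them:

```python
from statistics import mean

def predict_grade(transcripts: dict, student: str, course: str) -> float | None:
    """Rough prediction of `student`'s grade in `course` (in GPA points, A=4.0).

    Averages the course grades of students who already took it, weighted
    by how similarly they scored in classes shared with this student.
    """
    me = transcripts[student]
    weighted_sum, total_weight = 0.0, 0.0
    for other_id, other in transcripts.items():
        if other_id == student or course not in other:
            continue
        shared = (set(me) & set(other)) - {course}
        if not shared:
            continue
        # similarity is 1.0 when grades in shared classes match exactly,
        # 0.0 when they differ by the full 4-point scale on average
        sim = 1.0 - mean(abs(me[c] - other[c]) for c in shared) / 4.0
        weighted_sum += sim * other[course]
        total_weight += sim
    return weighted_sum / total_weight if total_weight else None
```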
We’ve all signed up for “easy-A” classes. And the teachers (usually) know when they’re teaching one. Both students and teachers lose when someone signs up for an “easy-A” class about a subject they don’t care about. Instead, SIO should suggest “easy-A” classes that the student will actually be interested in, based on the student’s major, their past grades, and historical grade data for the class.
A reasonable response here would be to encourage students to improve their study tactics and flourish in challenging courses, not to sign up for easy ones. But students are going to sign up for easy-A classes no matter what, so why not embrace it? Taking an “easy-A” class a student is actually interested in is 10x better than one they don’t care about.
Universities pitch themselves as teaching students how to become lifelong learners, but for most students, a degree is primarily a means to a job. SIO should recommend jobs students may be interested in based on their past classes and performance. It should recommend everything from full-time jobs and summer internships to part-time research assistant or teacher’s assistant jobs.
So why don’t more products do this already? There are a few possible reasons:
First: for interventions that require data from multiple signals, drawing conclusions is easier said than done.
For example: if my weight is increasing, my step count and active minutes are going down, and my resting heart rate is increasing, I’m probably gaining unhealthy weight. But what if my weight is increasing and my step count is down, but my resting heart rate is also down? Am I gaining unhealthy weight? Or am I gaining muscle mass because I’m lifting weights more and running less often?
When more signals are involved, 1) the signals may not all support the same conclusion, and 2) there can be interactions between the signals that make conclusions difficult to draw.
But that’s no excuse for wearable companies to provide no positive interventions whatsoever. If I’m gaining weight rapidly and the other signals can’t confirm whether or not it’s unhealthy weight, I should at least get a notification checking in on how I’m doing.
“Hey, Adil. I noticed your weight has increased recently, but your heart rate and step count seem OK. Would you say this is increased muscle mass or unhealthy weight?”
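In code, the fallback is just another branch: when the signals agree, intervene; when they conflict, ask. A toy decision over three boolean signals (illustrative only, not a real model):

```python
def respond_to_weight_gain(weight_up: bool, steps_down: bool,
                           resting_hr_up: bool) -> str:
    """Toy decision over three signals; names and logic are illustrative."""
    if not weight_up:
        return "do_nothing"
    if steps_down and resting_hr_up:
        # all three signals point the same way: flag the trend directly
        return "warn_unhealthy_trend"
    # signals conflict (e.g. weight up but resting heart rate down,
    # which could mean new muscle mass): don't guess; ask the user
    return "check_in_with_user"
```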
Second: fear of coming off too strong. But it’s OK to come off a little strong. If the user wanted a pedometer, they would have bought a twenty-dollar one and called it a day. Instead, they paid $100+ for a wearable fitness coach. Give them what they paid for!
There will always be someone who doesn’t like the heavy-handed approach. Improving their experience with your product is as simple as giving them the option to opt out of positive interventions.
Alternatively, interventions can be off by default, and users can opt in to turn them on.
When designing the positive interventions themselves, use common sense. Consider how you would approach the topic in person with a friend, and try to translate that experience into your product.
Don’t forget—it’s OK to come off a little strong. There’s a reason why people paid for your product.
My money is on fear being the main reason these interventions aren’t ubiquitous yet. And that’s good news for startups—if you can be the first to be brave and provide 10x the value of existing companies in your space, you could make a lot of users happy—and make yourself a lot of money.
There are relatively few companies that design positive interventions, especially for sensitive issues. (If you’re working at a company designing such interventions, I’d love to hear about it in a comment or response.)
Wearables and education are only two examples of industries where companies can be brave with their data. There are many more.
We’re already using data to design positive interventions through personal assistants and social media. Let’s do the same in health and education. And financial planning, and energy usage, and travel, and public safety…
Don’t shy away from sensitive issues. They are usually the most important ones.
Be brave with your data!