Smart Apps: How to build them.

Ryan Redmann
Pandera Labs
May 25, 2017 · 6 min read

Our apps are smart. They can predict what our users want. They can automate tasks and decision-making. They can distill unfathomably complex data into simple insights. Some can even hold a conversation. Simply put, our apps have all found elegant ways to use analytics to create better, more intelligent experiences for our users, and that gives them an unmistakable edge. While today it may look like we’ve mastered the smart app, that wasn’t always the case: we’ve had to think long and hard (mostly learning the hard way) about how to establish sophisticated analytics as an evolving, often invisible, user experience paradigm within our applications. In a recent discussion that I hosted at Chicago’s 1871, we shared our team’s experience in that pursuit and gave some strategic pointers. Here are some of the highlights:

1. Start with your user (always).

“I want to build a neural net!”

— Someone stupid.

No, no you don’t. You really just want to leverage analytics to create a better user experience for your user. Maybe, in some distant future, that requirement will manifest as machine learning, but today just focus on your user first: what does the user need to do? Then: how can we use analytics to drive that?

For example, we have a product called Traena that uses an “AI-driven” feed to deliver relevant learning experiences to a user, via a mobile app. Just like in any other application, we focused on the user story first. More specifically, we knew the user needs to learn (duh), and learning would happen best by feeding them learning content that aligns with their needs and interests (double duh). Understanding that user story, we decomposed it into a feature (the feed), technical requirements (React Native, Elasticsearch, Kinesis, a data pipeline, etc.), and analytics requirements (collaborative filtering techniques). Simply stated, we start with the user and treat analytics requirements just like technical requirements. Not that hard or different, right?
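Traena’s real pipeline (Kinesis, Elasticsearch, and so on) is obviously more involved, but the collaborative-filtering idea behind a feed like that can be sketched in a few lines of plain Python. Everything below is hypothetical for illustration: the content names, engagement scores, and the `recommend` helper are made up, not Traena’s actual implementation.

```python
from math import sqrt

# Toy user -> {content_id: engagement_score} interactions (hypothetical data).
interactions = {
    "alice": {"intro_sales": 5, "negotiation": 4, "excel_basics": 1},
    "bob":   {"intro_sales": 4, "negotiation": 5},
    "carol": {"excel_basics": 5, "pivot_tables": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user, k=2):
    """Score items the user hasn't seen by similarity-weighted neighbor scores."""
    me = interactions[user]
    scores = {}
    for other, ratings in interactions.items():
        if other == user:
            continue
        sim = cosine(me, ratings)
        for item, rating in ratings.items():
            if item not in me:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['pivot_tables']
```

The point isn’t the algorithm; it’s that “collaborative filtering” decomposes into a concrete requirement (collect per-user engagement events) the same way “the feed” decomposes into React Native screens.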

[See this cliché product dev diagram]

2. Be deliberate in collaboration / education / training.

Analytics is, for the most part, a foreign concept to product owners, UX / UI designers, and even engineers. This creates some obvious issues at multiple stages within the product development process. During ideation, product owners and creatives miss opportunities to incorporate analytics to drive a certain interaction or function; during architecture or development, engineers miss the opportunity to collect valuable data points that can fuel the app’s intelligence later on. On the other end, the data scientists are often left out, because early on there isn’t any data to look at or use.

So, my advice is simple: first, always have a data scientist involved in solutioning and be deliberate about collecting data and analytics requirements early on; second, engage in continuous education across teams to highlight use cases and basic techniques. Having everyone understand “the world of the possible” will create better cross-functional synchronization and enable better ideas to flow between teams and into products. Be deliberate about incorporating these practices into all stages of your product development process; most of the time, this just means having an analytics person in the room and asking their opinion. Easy.

3. Make your UX / UI failure tolerant (invite failure even).

[Remember this guy? Cringe.]

Your AI / machine learning / rec engine / featureX is going to suck at first (maybe always); it will be wrong a lot. Teams must actively embrace the UX implications of analytics failure. If the use case is personalization, how accurate will it be? What’s the damage done by an incorrect recommendation (e.g. Netflix, stop recommending Zootopia)? If the use case is decision or task automation, how does the user stay engaged? How do you keep them feeling in control and avoid mistakes? If analytics is being used to personify an app (e.g. Siri, Alexa, Cortana), how does the user feel about the ‘character’ (I’m kind of an Alexa guy myself)? We’re well past the age of Clippy, but we shouldn’t forget easily; to avoid the mistakes of our ancestors, we must be aggressive in studying how our use of analytics affects the experience and psychology of our users. Ask those questions, especially when talking to your users; they’ll thank you later. More tactically, start by incorporating user overrides or alternate interaction paradigms beyond just the AI.
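One cheap, failure-tolerant pattern is a confidence floor with a curated fallback: if the model isn’t sure, show something a human picked rather than a bad guess. A minimal sketch, where the threshold value, function name, and “editorial picks” concept are all assumptions for illustration:

```python
CONFIDENCE_FLOOR = 0.6  # hypothetical threshold; tune per product and use case

def pick_feed_items(predictions, editorial_picks):
    """Fall back to a safe, human-curated list when the model is unsure.

    predictions: list of (item, confidence) pairs from the rec engine.
    editorial_picks: curated default content the UX can always show.
    """
    confident = [item for item, conf in predictions if conf >= CONFIDENCE_FLOOR]
    # If the model has nothing it is sure about, don't force a weak guess on
    # the user; show the curated fallback and keep collecting signal.
    return confident if confident else editorial_picks

print(pick_feed_items([("sales_101", 0.91), ("excel", 0.35)], ["onboarding"]))
# -> ['sales_101']
```

The same shape works for automation: below the floor, drop back to asking the user instead of acting for them, which keeps them in control when the model fails.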

4. Let analytics evolve naturally (don’t rush).

There’s no such thing as “zero to deep learning”. You need data, and you need discovery, before you can really commit to building out sophisticated analytics services. Think of the growth of analytics services as analogous to human cognitive development and education. Early on (childhood), you mature via broad discovery and observation; similarly, you begin to develop your analytics services through exploratory data analysis and user research, broadly exploring the realm of possibilities. Once you identify a value opportunity, you can develop your model or service further via more structured experimentation and hypothesis testing, still testing and probing many avenues; this second evolutionary stage is similar to our development via structured primary education, still broad in topic and keeping many options open. Once you validate a specific opportunity, it’s time to evolve, hone, and deepen its understanding and value through application in a production system, and to automate its maturation; this is the equivalent of learning within your profession, “real world” learning if you will. In short, analytics services mature like people: start broad and structured, and capitalize on opportunity as it presents itself.
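To make the “childhood” stage concrete: early discovery usually means no model at all, just exploratory summaries over raw event data to see where the value might be. The event log and field names below are invented for the sketch:

```python
from collections import Counter

# Hypothetical raw event log from an app. The "childhood" stage is just
# looking broadly at what users actually do, before committing to a model.
events = [
    {"user": "alice", "action": "view", "content": "negotiation"},
    {"user": "alice", "action": "complete", "content": "negotiation"},
    {"user": "bob", "action": "view", "content": "excel_basics"},
    {"user": "bob", "action": "abandon", "content": "excel_basics"},
    {"user": "carol", "action": "view", "content": "negotiation"},
]

# Broad discovery: which content draws views, and what fraction completes?
views = Counter(e["content"] for e in events if e["action"] == "view")
completes = Counter(e["content"] for e in events if e["action"] == "complete")
for content, n in views.most_common():
    print(content, "-", n, "views,", completes[content] / n, "completion rate")
```

A gap between views and completions is the kind of signal that graduates you to the next stage: a structured hypothesis you can actually test.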

[Diagram by Anthony Carmanati @ Pandera Labs]

5. Fail with purpose.

“Failure is simply the opportunity to begin again, this time more intelligently.” — Henry Ford

This may seem obvious (and repetitive of #3), but failure is a critical element of learning; I mean that in the context of both machines and humans. A machine learning algorithm, in essence, uses mass-scale failure to teach itself. Humans, the same; we must touch the stove to learn that it hurts. In the evolution of analytics (and science in general), we find the same failure-based learning paradigm. You have to try a ton of different analytical techniques and methods to figure out exactly what works and what doesn’t. Sure, you can make some educated guesses as to what might be effective, but you never really know until you formulate that hypothesis, test it, and (most of the time) fail. Seeking out failure fast will help you iterate through opportunities and eventually find the right route. Further, once you think you’ve found opportunity and value, keep challenging your approach for something better; that continued failure will keep you moving forward fast.
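That formulate-test-fail loop can be sketched with a textbook two-proportion z-score: did the new variant really beat the old one, or did we just fail again? The experiment numbers below are hypothetical, and the test itself is a standard statistical approximation, not anything Pandera-specific:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Crude two-proportion z-score: is variant B's rate really above A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that A and B are identical.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: old feed vs. new rec-engine feed, clicks per session.
z = two_proportion_z(success_a=120, n_a=1000, success_b=150, n_b=1000)
print(round(z, 2))  # |z| above ~1.96 suggests a real difference at ~95% confidence
```

Most runs of this loop end with an unconvincing z-score; that is the failure that tells you to kill the idea fast and move to the next one.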

[A cartoon I didn’t create]

Conclusion

In summary, this stuff is tough. It takes an evolved knowledge base, a disciplined approach, a different team structure, and a comfort with repeated (intentional, planned) failure. Our team has found that a simple, principled approach leads us to complex, high-value results. My best advice for you: start experimenting with your own approach (think, test, iterate, fail, evolve) and keep doing it.



Trying to figure out the future of tech and how it impacts us. Based in Chicago. Founder @ PanderaLabs.com and Traena.io