There’s an old T. E. Lawrence quote about how the “messy and slow” act of fighting insurgents is “like eating soup with a knife.” Regardless of company size, stage of life, or industry, this quote should never describe an Analytics team.
After nine months of working at a company the Bay Area would consider “traditional,” I’ve seen the importance of building an Analytics team that is focused on searching for and developing “step-function” improvements. We’ve iterated on the data, systems, and business processes that we’re analyzing and developing, and in doing so we’ve also re-examined our entire approach. Our practices are still a work in progress, but we’ve found our biggest successes by focusing on a few critical concepts for how our Analytics team should be doing business.
This is a “How-To” guide for doing Analytics — focused on the big picture team philosophies and not diving too deeply into the technical side (except for a few shout-outs). Hopefully these five key tenets can serve as a friendly passing of spoons to help other teams fuel their company’s growth.
Take a Timeout
One of the most important steps in our adoption of better practices was to first “stop” what the team was doing and think critically about our actions. By doing this, we were able to shift from a “ticket” approach that maximized throughput of ad-hoc requests to a “product” approach, where we build meaningful tools for the rest of the business over the course of bigger, longer-term projects. Yes, the switch was painful and created some short-term organizational friction. However, it has allowed the whole Analytics team to really step up our game. By re-evaluating all of the output from the group, we were able to figure out what truly mattered to business users around the company and put those products into their hands.
We need to be able to iterate on new ideas quickly. All that glitters is definitely not gold, and a good Analytics team needs to be able to move through sparkling ideas rapidly if we’re going to work our way to real insights. For the team to provide deep, long-term value, we need to be doing something akin to primary research. Only ideas that are workable and have a meaningful ROI are worth further time and energy. To determine which ideas have these qualities, we take the same data-driven approach that we bring to bear on the work itself.
Of course, prototyping isn’t just whiteboarding. We need a tech stack that matches this philosophy. By using flexible scripting languages like Python and R, as well as deploying new tools and databases using Docker, our team has been able to significantly reduce the time it takes to work an idea from conception, to proof-of-concept, to completed business tool.
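To make that concrete, here’s a toy sketch of the kind of throwaway proof-of-concept that a scripting language makes cheap (this is purely illustrative, not our actual stack or data): an in-memory SQLite database lets you pressure-test a query idea in minutes before anyone commits to a real pipeline. All table names, column names, and numbers below are invented.

```python
import sqlite3

# Throwaway proof-of-concept: try a segmentation idea against toy data
# before building anything real. Everything here is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("a", 120.0), ("a", 80.0), ("b", 15.0), ("c", 300.0)],
)

# Idea to test: which customers cross a $100 lifetime-spend threshold?
high_value = conn.execute(
    "SELECT customer_id, SUM(amount) AS total "
    "FROM orders GROUP BY customer_id "
    "HAVING total >= 100 ORDER BY total DESC"
).fetchall()
print(high_value)  # [('c', 300.0), ('a', 200.0)]
```

If the idea holds up on sample data, it earns a spot in a real sprint; if not, we’ve lost an hour instead of a week.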
Ability to Pivot
“Pivot” is so overused. But no business plan survives first contact with the customer, and we need to make sure our solutions and insights evolve along with the company. What we learn each day needs to — has to — inform our technology choices. The tools and software must be tailored to the data and questions at hand.
For example, our transaction and customer databases are built on SQL Server. While this has served the business well for years, the relational model heavily influences the way users think about data. Recently our team found that we needed to break away from this form of thinking as we dealt with some tough customer targeting questions. With the help of Docker, it was easy to spin up instances of Neo4j and OrientDB and put them through their paces with some proof-of-concept datasets. It broke us out of a traditional way of thinking about old data, and laid the groundwork for a better customer targeting system.
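For readers who haven’t made that mental shift, here’s a deliberately tiny Python sketch of graph-shaped thinking (our real prototypes ran on Neo4j and OrientDB; the customer names and referral edges below are made up): a targeting question like “who is within two referrals of this customer?” is an awkward self-join in a relational model, but a simple traversal on a graph.

```python
from collections import deque

# Toy referral graph: customer -> customers they referred.
# Names and edges are invented for illustration.
referrals = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": [],
    "dave": ["erin"],
    "erin": [],
}

def within_hops(graph, seed, max_hops):
    """Return customers reachable from `seed` in at most `max_hops` referrals."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    seen.discard(seed)
    return sorted(seen)

print(within_hops(referrals, "alice", 2))  # ['bob', 'carol', 'dave']
```

In SQL the same question means a self-join per hop; in a graph store (or even this dictionary-of-lists toy), the number of hops is just a parameter.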
By keeping an eye toward a possible future pivot, we’re forced to be modular in our code and database design. We’re incentivized to develop clean, well-tested functions and queries that we can quickly plug into new situations. We’ve also found that prototyping on databases and software with Docker leads us away from building anything resembling a monolith. If our sunk cost in a particular solution is low (a couple of Docker containers deployed via Google Container Engine), we never feel the need to bolt something ugly onto it. It’s easier to tear down and start over — like wiping the whiteboard clean.
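As a sketch of what “modular” means for us in practice (an illustrative helper, not code from our repo): we favor small, pure functions with no hidden state, because they’re trivial to unit-test and just as easy to drop into the next pipeline.

```python
def pct_change(series):
    """Period-over-period percent change for a sequence of numbers.

    Pure function, no I/O or global state, so it's easy to test in
    isolation and to reuse across pipelines. (Illustrative example.)
    """
    if len(series) < 2:
        return []
    return [(curr - prev) / prev for prev, curr in zip(series, series[1:])]

print(pct_change([100, 110, 99]))  # [0.1, -0.1]
print(pct_change([42]))            # [] -- too short to compare
```

When a prototype gets torn down, functions like this survive the wipe; only the glue around them gets rewritten.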
The only way to keep pace with the rapidly changing needs of our business is to ship new ideas and insights daily. This is a great goal, until you stop to think that there’s generally an inverse relationship between how quickly an idea arrives and how profound it is (not accounting for Wolfgangian prodigy¹).
How have we tried to deal with this? First, we adopted a two-week sprint cycle for our projects. This has kept us focused on our “product” approach to Analytics, and the short cycle keeps us moving and iterating. Second, our team has pushed ourselves to make at least one small improvement to our processes, code, and documentation each day. We realize that our goal of continuous delivery is a journey. As we continue to add new members to our team and tools to our kit, we get closer to being able to deliver substantive ideas on a daily basis².
Hopefully this has been a helpful rundown of the keys that have kept our team moving. Perhaps it can help keep the soup off your shirt in your own Analytics adventures. If it does (or doesn’t), I’d love to hear your comments.
2. Note that the products from an Analytics team should be distinguishable from “reporting” tasks. I’ve also found that there’s a sort of “negative entropy” between these two concepts: left unchecked, they seem to converge on the same product over time, something that is probably too detailed and long-winded to be good “Reporting,” and not insightful enough to be good “Analytics.” This is a different conversation, and probably deserves an article of its own.