Achieving autonomy in the business analytics team: aligning people and technology
There are many challenges when scaling a business analytics team at a fast-growing company. Here is what we are doing to overcome them :)
The company challenge
The business is changing constantly: objectives shift every month, processes lag behind needs, and the organization restructures every six months.
New people join the team every month, rushing to learn how the company operates and how the tools work. They don’t yet have an internal network, so communication can break down.
The business analytics team challenge
At Liv Up, our business analytics team has the goal of fostering data-informed decisions across the organization.
People on the business analytics team don’t make the final business decisions; we empower decision-makers through data, analyses, and recommendations. To successfully impact decision-making, we have to overcome two issues that are aggravated by hypergrowth.
- Understand the decision-maker’s context to deliver meaningful analyses. As the business is changing all the time, we tend to lose context and deliver out-of-date analyses or projects that aren’t aligned with the decision-maker’s objectives.
- Have access to high-quality data and tools to develop analyses. As the business changes at a fast pace, data changes as well: database schemas change every day. Besides, as the analytics team grows every week and there is always a fire burning somewhere, systems must be simple enough to be learned in days, not months.
To create a high-performance team in the middle of such change and uncertainty, we have to guarantee that analysts have real autonomy.
Communication is expensive and often inefficient. Business analysts sit between decision-makers and technical staff such as data engineers. If they can’t understand the business context by themselves and can’t develop their analyses without filing an issue in the engineering backlog, they will face paralysis and, as time goes by, get frustrated.
In hypergrowth, we have to trust people to make fast, well-informed decisions. If we need to sync the entire company to make each decision or pair data engineers and business analysts for each project, we are losing precious time.
To achieve autonomy, we have to guarantee that analysts have proper business context and are independent to consume data and develop their analyses.
Capturing business context
Liv Up is a vertically integrated company. This means that we control and operate the entire value chain: raw materials sourcing, production, distribution, sales, product development, customer support, and many other business facets.
This model has several advantages, such as the agility to develop new products (we own development and production) and higher margins (we remove intermediaries).
However, this also means more complexity. We have dozens of processes to control and nobody has the bandwidth to understand what is happening at the entire company all the time.
Business analysts in squads
To solve the complexity problem, we have to specialize. Our business analysts work inside multidisciplinary squads: they live the squad’s processes, face its challenges, and build relationships. They sit side by side with chefs, product managers, and logistics specialists to gather business context and develop better analyses.
This model radically reduces communication overhead, as many things simply don’t need to be said or written. If the analyst lives the day-to-day of that squad, she knows what is happening and what is important.
Of course, there are several trade-offs with this model. We have to align processes and tools inside the business analytics chapter, career progression is much blurrier in this system, and the analyst needs to be independent enough to discuss ideas with decision-makers without the support of a technical leader. Nevertheless, the pros outweigh the cons in our case.
Transparency with Objectives and Key Results (OKRs)
OKRs are another powerful tool to give analysts business context.
Well-defined OKRs focus the team on what matters. If a decision-maker demands analyses of projects or metrics disconnected from the OKRs, the analyst is much more inclined to challenge the decision-maker and suggest analyses focused on the squad’s objectives.
It is common in the analytics space to end a dashboard project or a presentation with a “so what?”. Analyses should be actionable, and OKRs help with this focus.
Our business analytics team has a mix of technically oriented and domain-oriented people. It is much easier to explain food product development to a food engineer than to a software engineer.
People with domain knowledge, mainly in specialized areas such as logistics or food product development, accelerate business context gathering and generally produce good, deep analyses. However, it is important to master the technical skills as well, and a mix of technical and domain experts enables knowledge to flow inside the chapter.
Our challenge is to empower analysts with high-quality data and good analysis tools without creating a data engineering dependency.
To overcome this challenge, we leverage our modern data stack, combining powerful technologies that guarantee scalability, productivity, and efficiency, without compromising autonomy.
In addition to good technology, we have clearly defined responsibilities among software engineers, data engineers, and business analysts.
Our data architecture
Our architecture is heavily based on Google BigQuery and other Google Cloud Platform services. It has four components.
Ingestion Layer. This is where all data is ingested, following an Extract, Load, Transform (ELT) process: we first load all our data into Google BigQuery before processing anything. Besides MongoDB (our main transactional database), we ingest data from 10+ sources using Stitch, BigQuery Data Transfer Service, and custom Python scripts.
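The ELT order (load raw data first, transform later in SQL) can be sketched in miniature. This is a toy illustration, not our production pipeline: sqlite3 stands in for BigQuery, and the table and field names are invented.

```python
# Minimal ELT sketch: load raw records untouched, then transform with SQL.
# sqlite3 stands in for BigQuery here; all names are illustrative.
import json
import sqlite3

# Extract: raw records as they might arrive from a transactional source.
raw_orders = [
    {"_id": "a1", "customer": "c1", "total": 59.9, "status": "delivered"},
    {"_id": "a2", "customer": "c2", "total": 120.0, "status": "canceled"},
]

conn = sqlite3.connect(":memory:")

# Load: dump each record as-is into a landing table (one JSON blob per row).
conn.execute("CREATE TABLE raw_orders (payload TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?)",
    [(json.dumps(r),) for r in raw_orders],
)

# Transform: only now do we parse, clean, and filter, entirely in SQL.
conn.execute("""
    CREATE TABLE orders AS
    SELECT
        json_extract(payload, '$._id')      AS order_id,
        json_extract(payload, '$.customer') AS customer_id,
        json_extract(payload, '$.total')    AS total
    FROM raw_orders
    WHERE json_extract(payload, '$.status') != 'canceled'
""")

print(conn.execute("SELECT order_id, total FROM orders").fetchall())
```

Because the raw payloads stay in the warehouse, a bad transformation can always be rewritten and replayed without touching the source systems.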
Transformation Layer. Here the data is cleaned, integrated across sources, and transformed to create our master dataset (we have one table per entity). Leveraging serverless BigQuery with simple SQL SELECT statements, under the control of Dataform, our analysts own this process.
This model has several advantages. Business analysts are free to create new tables and columns; they don’t need a data engineer to adapt our data model to changes in the business. Besides, these new tables and columns can be used by other analysts (and data scientists), improving productivity. If something awkward happens in the visualizations or analyses, they know the entire pipeline and are empowered to fix the problem.
There are potential challenges with such freedom: circular dependencies, wrong transformations leading to duplicated rows, and out-of-date data. To address these problems, we use Dataform, which provides scheduling, testing, and version control for SQL transformations.
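The circular-dependency problem has a classic solution: topologically sort the table-dependency graph before running anything. A minimal sketch of the idea, using Python’s standard graphlib and invented table names (this illustrates the concept, not Dataform’s internals):

```python
# Sketch: catch circular dependencies in a transformation DAG up front
# by topologically sorting the table-dependency graph.
from graphlib import CycleError, TopologicalSorter

# table -> set of tables it selects from (names are illustrative)
deps = {
    "orders_clean": {"raw_orders"},
    "customers_clean": {"raw_customers"},
    "orders_enriched": {"orders_clean", "customers_clean"},
}

# A valid graph yields a safe execution order: sources before consumers.
order = list(TopologicalSorter(deps).static_order())
print(order)

# Introduce a cycle: orders_clean now also reads from orders_enriched.
deps["orders_clean"].add("orders_enriched")
try:
    list(TopologicalSorter(deps).static_order())
except CycleError as e:
    print("cycle detected:", e.args[1])
```

The same ordering is what lets a scheduler run independent transformations in parallel while still refreshing downstream tables last.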
Modelling Layer. After the data is transformed into our master dataset, we use LookML to model dimensions and metrics and to create “explores” to consume data. Business analysts own this process too: they can modify business metrics as we evolve and create derived tables if they need specific data.
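What a modeling layer buys you can be shown in a few lines: dimensions and measures are declared once, and queries are generated from them. This Python sketch mimics the idea behind LookML, not its actual syntax; the model and all names are invented.

```python
# Toy modeling layer: declare dimensions and measures once, generate SQL
# from them on demand. Conceptually similar to LookML; names are invented.
model = {
    "table": "orders",
    "dimensions": {"category": "category", "order_date": "DATE(created_at)"},
    "measures": {"order_count": "COUNT(*)", "total_revenue": "SUM(revenue)"},
}

def build_query(model, dimensions, measures):
    """Generate an aggregate query from declared dimensions and measures."""
    select = [f"{model['dimensions'][d]} AS {d}" for d in dimensions]
    select += [f"{model['measures'][m]} AS {m}" for m in measures]
    sql = f"SELECT {', '.join(select)} FROM {model['table']}"
    if dimensions:
        group = ", ".join(model["dimensions"][d] for d in dimensions)
        sql += f" GROUP BY {group}"
    return sql

print(build_query(model, ["category"], ["order_count", "total_revenue"]))
```

The payoff is that a metric definition such as `total_revenue` lives in exactly one place, so every dashboard built on top of it stays consistent when the definition changes.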
Consumption Layer. The final step is to extract value from the data. Leveraging the previous transformation and modeling, analysts can use Looker’s point-and-click interface to create dashboards, analyses, and presentations with maximum productivity. We also use Jupyter Notebooks for specific use cases, but most of our analyses live in Looker.
To achieve autonomy, the responsibilities over the above architecture must be clear. At Liv Up, business analysts are responsible for a large part of the pipeline.
Software engineers. They are responsible for data capture. Engineers set up trackers to send events to our Snowplow pipeline, guarantee the accuracy of production collections, and support business analysts in understanding the schemas.
Data engineers. In the business analytics context, they are responsible for the technologies used in the architecture: maintaining cloud infrastructure and developing custom extractions or transformations for specific use cases that demand Python programming. They provide guidelines for SQL and LookML development. Besides that, they provide great support for data scientists (content for another post).
Business analysts (BAs). As you can see, BAs have two important responsibilities: to transform and model data, and to generate business analyses. They are responsible for all SQL code inside Dataform and all LookML code inside Looker. Once the data is in the data warehouse, they have the autonomy to use it in the best possible way, following the squad’s needs.
Knowledge and learning
One challenge presented in the first paragraph is that business analysts have to ramp up fast, especially during hypergrowth.
Looking at our architecture, it seems that they need to learn a lot of things to become productive. However, it is simpler than that!
SQL. Analysts must master SQL. They write SQL code every day, and it is the main interface between analysis and data. There are plenty of good online resources on SQL, and since it is the industry default, there are many people in the market who know it.
Git. Both Dataform and Looker provide version control. Still, analysts only need to know git conceptually: these platforms automate the process in the graphical user interface, so analysts don’t need to interact with the command line.
LookML. The Looker modeling language is also mandatory in our stack. The language is pretty simple, there are good online courses, and the documentation is awesome.
Dimensional modeling and denormalization. Although there are entire books written about these topics, analysts only need the basic concepts to start modeling data: facts, dimensions, metrics, and the star schema. They can also learn by example from the modeled tables in Looker / BigQuery.
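Those basics fit in a toy example: a fact table of events joined to a dimension table of descriptive attributes. Again sqlite3 stands in for BigQuery, and the schema and product names are made up for illustration.

```python
# Toy star schema: one fact table joined to a dimension table.
# sqlite3 stands in for BigQuery; schema and names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: descriptive attributes of each product.
    CREATE TABLE dim_product (product_id TEXT PRIMARY KEY, name TEXT, category TEXT);
    INSERT INTO dim_product VALUES ('p1', 'Gnocchi', 'frozen'), ('p2', 'Granola', 'snacks');

    -- Fact: one row per sale event, with a foreign key into the dimension.
    CREATE TABLE fact_sales (product_id TEXT, quantity INTEGER, revenue REAL);
    INSERT INTO fact_sales VALUES ('p1', 2, 39.8), ('p2', 1, 15.0), ('p1', 1, 19.9);
""")

# A typical metric query: aggregate facts, slice by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.quantity) AS units, ROUND(SUM(f.revenue), 2) AS revenue
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)
```

Every metric in the consumption layer reduces to some variation of this pattern: aggregate the facts, group by the dimensions.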
In our experience, new analysts take one to two months to become highly productive. After that, they can focus on the squad and on building domain knowledge to become experts ;)
Things that help
Here are some things that help to scale the team:
- Have clear SQL and LookML guidelines. Everybody should follow the same standards so the team can learn and evolve together.
- Use a tool to manage transformations, such as Dataform or dbt. Managing dependencies, testing, and scheduling are essential features, and these tools do a great job. (Airflow isn’t as simple or efficient for this.)
- Have an event dictionary if you capture events in your application (using Snowplow or any other system). The dictionary helps new people ramp up faster.
- Use BigQuery. Its serverless architecture enables analysts to develop transformations based on business needs, not infrastructure constraints.
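An event dictionary gets even more useful when it doubles as an executable check. A minimal sketch, where the event names, properties, and the `validate_event` helper are all invented for illustration:

```python
# Sketch: treat the event dictionary as data and validate incoming events
# against it. Event names and properties are invented for illustration.
event_dictionary = {
    "order_completed": {"order_id", "revenue"},
    "item_added_to_cart": {"item_id", "quantity"},
}

def validate_event(name, properties):
    """Return a list of problems; an empty list means the event matches."""
    if name not in event_dictionary:
        return [f"unknown event: {name}"]
    expected = event_dictionary[name]
    missing = expected - properties.keys()
    extra = properties.keys() - expected
    problems = [f"missing property: {p}" for p in sorted(missing)]
    problems += [f"unexpected property: {p}" for p in sorted(extra)]
    return problems

print(validate_event("order_completed", {"order_id": "a1", "revenue": 59.9}))  # []
print(validate_event("order_completed", {"order_id": "a1"}))
```

Running a check like this in the ingestion pipeline turns silent tracking drift into a visible error.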
Autonomy is mandatory for any fast-growing company that values independent thinkers and high performers. It is everyone’s job to design systems, processes, and the organization so that team members can collaborate in the best way possible.
At Liv Up, we have several initiatives to empower people with autonomy: from microservices in the engineering team and data pipeline ownership in the business analytics team, to end-to-end food development projects in the product team.
If you want to build the food company of our time and share our company style, follow us on https://www.linkedin.com/company/liv-up/.
In the next post, we will talk more about the data products team (responsible for data science). Follow us and stay tuned!