6. Product Analytics: Growth, Engagement, Retention

christian crumlish
Published in PM4UX
Aug 2, 2021

Of all the things product managers do that UX people tend not to do, data analysis is perhaps the most alien to the design orientation. Even the business aspects can be framed and understood as solving problems, designing solutions that address competing needs, and so on. And of course many savvy UX researchers, strategists, and designers consume data intelligently, use it wisely to inform their processes, and recognize the value of combining quantitative insights with qualitative ones.

But… almost uniformly, even among the UX practitioners who embrace data analysis fully as a tool of their trade, there is very little appetite for spending a majority of their time staring at columns of numbers, data tables, or analytical models (let alone building, testing, and deploying analytical models). Very few people got into design or UX out of a love of number crunching. Not saying it’s not possible, but it’s vanishingly rare.

Towards the end of this year Rosenfeld Media will be publishing my book Product Management for UX People: From Designing to Thriving in a Product World (you may sign up there to be notified when it is available for order), the culmination of a multiple-year project that has unfolded with the input and support of the Design in Product community.

During the editorial and production process I am sharing early drafts of chapters, and I welcome any feedback here that will strengthen the material.

It’s no exaggeration to say that product managers may spend a majority of their time analyzing data. There are definitely roles and even periods of time in most product roles when understanding the data is almost a full-time job. For a certain kind of geek, this is actually super fun!

Living in the data

I call this “living in the data.” You might call it swimming in the data. Diving deeply and exploring widely. This leads eventually to something like a feel for the data, for its grain, its granularity, its cycles and other “little ways.”

From the Trenches

Matt LeMay has a more humanistic way of describing this same sort of immersion, writing, “I call it ‘living in your user’s reality.’ It’s important to remember that data is a proxy for other things — I’ve seen a lot of product managers spend forever on dashboards but never actually learn directly from their customers.”

An obsessed product person will wake up wondering what the daily metrics look like, will peek at a north star metric frequently, will set up alerts to notice when key data points suddenly jump to values far outside of the usual parameters, will not be satisfied passively consuming a so-called dashboard and will instead be engaged constantly in an interactive practice of interrogating, massaging, pivoting, and slicing the data to tease out meaning wherever possible.

Or at the very least to tease out some clues, some hints worth researching qualitatively to go from a curious what to a satisfying explanatory why and from that to an actionable how.

Get to love SQL

A common concern for folks getting into software product management, especially those with humanities degrees or art skills, is “how technical do I have to be?” As discussed in Chapter 4, it depends on the role, but generally it has more to do with conversancy with technical concepts and constraints than with being able to craft production-ready code yourself or single-handedly design the architecture of a technical system.

But one technical skill all product managers should embrace is the ability to query databases and manipulate data. If you’re fortunate, you’ll work with product analytics software that produces charts and results and even some analyses at the press of a button, but there are times when there is no substitute for “getting under the hood” yourself to interact with and sift through the data directly.

You may even want to take a class in MySQL or any other flavor of SQL (Structured Query Language) to learn the basics of command-line database querying. This empowers you to ask your own questions of the data without having to ask an engineer to pull you a data set or a data analyst to set up a special view for you in an app such as Tableau.
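To give a flavor of what this unlocks, here is a minimal sketch of the kind of question you could answer yourself. It assumes a hypothetical events table with one row per tracked event and columns user_id, event_name, and occurred_at (a timestamp), written in PostgreSQL flavor; your real schema and event names will differ:

```sql
-- Hypothetical events table: one row per tracked event, with
-- user_id, event_name, and occurred_at (timestamp) columns.
-- How many distinct users completed signup on each of the last 30 days?
SELECT
  DATE(occurred_at)       AS signup_date,
  COUNT(DISTINCT user_id) AS signups
FROM events
WHERE event_name = 'signup_completed'
  AND occurred_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY DATE(occurred_at)
ORDER BY signup_date;
```

Being able to type something like this yourself turns a request that might otherwise sit in an analyst’s queue into a two-minute answer.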

Or at least Airtable…

With the advent of tools such as Airtable that make database creation, import, manipulation, and querying much easier by providing a complete tabbed, spreadsheet-type user interface, you may find you can get by without typing any SQL code at all into a command line. But even low-code or no-code solutions such as this will only take you so far sometimes, so it empowers product managers if they can fend for themselves when it comes to data.

Make instrumentation part of every feature

A common learning cycle for PMs is to ship a product or feature or fix and then realize afterward that it needs “instrumentation” (product hooks added to the code to capture user and system-driven events for analysis), so you do that in the next sprint and meanwhile you are flying blind for the first few weeks of your new product.

So then you get serious about instrumentation and add it to your spec template as an item to address, connecting it back to the goals you are trying to accomplish with the project and some thinking about how to measure success. Then you work with your developers to include this instrumentation with the feature at launch, which is a huge victory.

After this you tend to realize you are still doing this ad hoc with each new spec and that you need to work with your engineering peers to define conventions or adopt a system that enables instrumentation to be applied to any event and to follow naming conventions defined in a taxonomy somewhere (oh look, a taxonomy, something information architects know about!).

Eventually it becomes second nature that you never build and ship anything without fully instrumenting it in a way consistent with the rest of your product.

UX Superpower Alert

With your background in user experience, you are uniquely positioned to drive conversations from the “what” appearing in the data to the “why” that you can only really discover through qualitative research. If you have fully embraced the need for data, you are also that much more credible when you explain the need to close the loop and investigate the signals by talking to people.

Funnel optimization

One of the most common ways to study data and gain insight into how part of your product is working is called funnel optimization. The funnel metaphor is a strange one if you think about it. It’s primarily visual, rather than logical. The idea is something like this: In most online processes (or really any sort of task a lot of people attempt), there is a sequence of steps, and it is rare for everyone who embarks on step one to make it all the way to the final step.

In fact there tends to be drop-off at every step along the way. This varies greatly, of course: some steps are trivial or so pleasant that 100% of the people who complete the previous step complete that one too, but as a very rough rule of thumb you’re likely to lose 10% right off the top every time you add another step to a process.
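As a sketch of what counting a funnel can look like under the hood, this hypothetical query tallies how many distinct users reached each step in a month. The step event names are placeholders, and note that unlike a dedicated funnel tool it doesn’t enforce that the steps happened in order:

```sql
-- Rough funnel: distinct users who triggered each step event in July.
-- Event names are hypothetical placeholders for your own taxonomy.
SELECT
  event_name,
  COUNT(DISTINCT user_id) AS users_reaching_step
FROM events
WHERE event_name IN ('step_1_viewed', 'step_2_completed',
                     'step_3_completed', 'trial_started')
  AND occurred_at >= DATE '2021-07-01'
  AND occurred_at <  DATE '2021-08-01'
GROUP BY event_name
ORDER BY users_reaching_step DESC;
```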

Why a Funnel?

OK, but why is it called a funnel? Well, imagine it as shown in Figure 6.1, with a bunch of people starting on the first step of a process and far fewer making it to the end.

Figure 6.1: Imagine a five-step process where nine people complete step 1 and only two complete the final step. If you look at how many were lost at each step along the way (and maybe also squint a little bit) you can see why people started calling this pattern a funnel.

So it got called a funnel because of the superficial visual resemblance to something wide at the top and narrow at the bottom. Never mind that a real funnel captures everything at the top and rushes it through the increasingly narrow hole at the bottom, which is definitely not what an analytics “funnel” does.

Author’s aside: The apt metaphor is really a sieve, but this term has yet to catch on.

How to analyze a funnel

There are several ways to look at a funnel to try to understand what is going on, why people drop off at various stages, and whether any of those patterns are changing over time or being affected by other factors.

One of the first things to do is to look for anomalous dropoffs at particular steps. Figure 6.2 shows the complete signup process for an online therapy service that involves a dialog with a chatbot, getting matched with a therapist, providing a credit card, and starting a free three-day trial.

Figure 6.2: You can see that some people drop off at nearly every step in this funnel, but almost all drop off at one particular step.

In this chart, generated with Amplitude, the dark blue bars represent the percentage of people in a given time period who completed that step. The lighter blue striped parts show the portion that dropped off from the previous step.

You can see a bit of drop-off at the first few steps, with bigger drop-off heading into step 4 (just after the chatbot is introduced), and then a few more steps that carry nearly everyone along (they are easy-to-complete responses to chat questions).

Then there are a few steps where a quarter to a third of the people are lost. These steps bear closer scrutiny to find out what is happening. Are the questions more challenging? Is the tone wrong? Is there something actually difficult or partly broken preventing some people from completing or understanding the step?

But there is one step that stands out in particular because the funnel loses nearly everyone at that point. Can you guess what the user is asked to do at that step? (If you guessed “reach into their wallet and pull out their credit card” you’d be right.)

Investigating dropoff

Now, bear in mind that the moment of truth is always going to show larger-than-average dropoff. This is to be expected. Some people are just window-shopping. Many people do a final gut-check before committing to anything, let alone something that may cost money, whether one time or as a recurring subscription.

Who among us has not abandoned an online shopping cart with something in it we decided at the last minute that we didn’t really need? In fact, e-commerce shopping cart analysis is basically the origin of the kind of funnel analysis we are talking about, although it works equally well with non-transactional processes and tasks.

So, at this stage it’s time to investigate those steps with unusually large dropoffs and come up with some hypotheses (as discussed in Chapter 6) about what is causing the problem and how this cause might be addressed, mitigated, or worked around.

Can expectations be set better up front to avoid “sticker shock” when the price is revealed? Would quotations from happy customers persuade some reluctant free-trial signups to complete the process? And so on.

Mind you, this is just one way to investigate a funnel, and the most obvious. Other questions to ask include “Are all these steps really necessary?” In other words, would more people complete the process if we streamlined it a bit and removed some steps?

UX Superpower Alert

The problem-solving techniques in the design toolkit can come in real handy when you’re trying to explore ideas for why a funnel step is underperforming expectations, but be careful at the same time not to spend your entire focus on the problems most easily solved by design. Look to the whole system, the larger experience, the options being presented, the transactional considerations if any, even data about the state of the world during the time you’re studying, outside of your tiny product experience.

Monitoring trends over time

Another way to look at a funnel is from the side, meaning instead of focusing on the aggregated motion through the funnel in a given time period, to instead compare results across a time series. This can be done at the level of any of the steps (what percentage of people made it from step 3 to step 4 this week, and how does that compare to last week?), across multiple steps, or for the entire funnel (which is the easiest way to see at a glance how a funnel is performing over time).

Figure 6.3 shows the conversion rate of a funnel over twelve months. During this time a series of experiments tested hypotheses about ways the funnel could perform better, and you can see gradual improvement over the course of the year: starting out with about 1.2% of people completing the funnel and ending at approximately 1.7%, something like a 40% improvement in the conversion rate.

Figure 6.3: This funnel tracked conversion (completion of the final step) over time.
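If you wanted to approximate this kind of over-time view directly against raw events, a sketch might compare final-step completers to first-step starters week by week, using the same hypothetical table and placeholder event names as before:

```sql
-- Weekly conversion rate: final-step completers as a share of starters.
WITH weekly AS (
  SELECT
    DATE_TRUNC('week', occurred_at) AS week,
    COUNT(DISTINCT user_id) FILTER (WHERE event_name = 'step_1_viewed') AS started,
    COUNT(DISTINCT user_id) FILTER (WHERE event_name = 'trial_started') AS finished
  FROM events
  GROUP BY 1
)
SELECT week,
       ROUND(100.0 * finished / NULLIF(started, 0), 2) AS conversion_pct
FROM weekly
ORDER BY week;
```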

Abbreviated funnels

For a multistep process, you can sometimes also get lost a bit in the weeds, so another way to look at your longer funnels is to instrument all the steps but then produce a chart that shows only the “major” steps or key milestones along the way. This can sometimes help declutter the chart and reveal the sections that need the most work, as opposed to the specific steps.

All of these ways of looking at the changing funnel data over time give you hints about where you can find improvement and how you can optimize the “throughput.”

Funnel caution

As with any form of experimentation (as discussed in Chapter 6), be careful that your focus on perfecting one particular funnel doesn’t lead you to neglect other aspects of your product, or even entirely different ways of designing the experience that might look quite different from your current funnel (different sequences, different interfaces or affordances, non-linear pathways, and so on).

Optimizing a funnel can be a powerful, almost addictive, experience, so don’t let that blind you to the things going on in your product that are not as easily quantified or as well represented in the funnel you’re working on.

Growth metrics

When working with product analytics nine times out of ten you are trying to find some way to make a key number bigger. Getting a data point to grow is about the most purely quantitative way you can view product work, and — as you may have encountered already in your own work — “Growth” can be a specialty or even department of its own, sometimes part of the product team and sometimes working at cross-purposes with it.

There are “growth hackers” out there who have made their careers through quasi-scientific experiments to rapidly or ferociously grow key metrics. These roles often combine product skills with programming chops (the hacker part can refer either to this hands-on technical competence or to the larger notion of trying out ideas aggressively to find new ways to optimize growth that “hack” the existing patterns).

There are also product manager roles that are explicitly “growth product managers” tasked with this area of responsibility exclusively. But most product managers have a number somewhere they are assiduously working on making grow.

The two most common growth targets are active user base and revenue.

Growth for pirates

A few years back, an entrepreneur and investor from the eBay mob, Dave McClure, boiled down growth levers using the piratical mnemonic AARRR (see Figure 6.4).

Figure 6.4: Dave McClure’s handy mnemonic helps us remember some of the key levers of growth (in a startup-oriented context).

Some hardcore pirates say this as AAARRR, including one further A at the top of this hierarchy:

  • Awareness
  • Acquisition
  • Activation
  • Retention
  • Referral
  • Revenue

You’ll see that some versions of this mnemonic flip the last two items, and of course revenue is not always the end goal of every effort, but it usually is for startups (where this advice originated) and it does tend to be literally the bottom line in most forms of enterprise (but not, at least not in a transactional form, for nonprofits or government bodies, for example).

This sequence itself can be viewed as a sort of long-scale funnel, and instrumented as such as well. Each step along the way itself likely consists of multiple steps, so it can also be treated as a loose sequence of funnels.

Awareness

Awareness is the first step for any sort of user growth. Before someone tries your product they need to know it exists. They have to have heard of it. Someone has to tell them about it or they need to see an advertisement or hear a marketing message or find a link among their search results.

Awareness is about getting on the radar of your potential customers, and it overlaps with marketing (it basically is marketing) in the “product marketing” sweet spot. But of course awareness alone is not enough. Something needs to tip the person from knowing about your service to trying it.

Acquisition

Acquisition refers to turning a prospect into a user or customer, “acquiring” them for your user base. (The word feels a bit problematic but it starts with A so there’s that.)

This can be defined in various ways. Downloading your app from the store may count as acquisition. Visiting your web site and interacting with the content may count as acquisition. One unequivocal form of acquisition is signup. If someone makes an account on your service, you may safely consider yourself to have acquired them.

It’s easy to think of this as the key step. New customer acquired: Achievement unlocked! But even a member signup is no guarantee of continued engagement, or that this user becomes valuable to the success of the product, let alone revenue-generating. In order for that to happen, you need the person sampling your product to use it actively, to become, in the parlance, an active user.

Activation

A user is said to be “activated” or to have become “active” if they engage with the experience of a product in some meaningful way. A lot of product analytics software will default to referring to a user as active if they show up in the data at all. That is, imagine a user who downloads your app and tries it out on a Monday.

On Tuesday they are busy and forget about it but something (maybe a push notification?) reminds them about it on Wednesday and they log back in and poke around some more. Then on the weekend they come back again on Saturday and Sunday.

This user will appear active on Monday, Wednesday, Saturday, and Sunday but not on the other days of this week. They will also be counted as a weekly active user (one time) for the week these days appear in (or for two weeks if the time periods are cutting through the middle of this Monday-Sunday week), and for the month (or months) these days appear in.

DAU, WAU, MAU
(sidebar)

Product folks tend to talk most about daily and monthly active users (I recall being pretty excited when a product I led hit a million monthly active users!), and sometimes weekly as well, depending on the usage cadences of the product. These concepts get abbreviated to DAU (daily active users), WAU (weekly), and MAU (monthly), and one interesting analysis you can do is comparing them as ratios, such as DAU/WAU or DAU/MAU (often expressed in terms of percentages). This can help show whether overall engagement is flat, going up, or going down. So, for example, DAU/MAU tells you on average how many days of the month a typical user drops by. A 20% ratio means they were active for about six days in the month. As a rule of thumb, 40% is generally considered good and anything above 50% is considered excellent, but this will vary depending on industry norms.
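As a sketch, that stickiness ratio could be computed from the same hypothetical events table like so. Here any event at all counts as “active,” and the dates are placeholders (a 30-day trailing window stands in for the month):

```sql
-- DAU/MAU "stickiness" for July 31, using a trailing 30-day MAU window.
WITH dau AS (
  SELECT COUNT(DISTINCT user_id) AS n FROM events
  WHERE occurred_at >= DATE '2021-07-31' AND occurred_at < DATE '2021-08-01'
), mau AS (
  SELECT COUNT(DISTINCT user_id) AS n FROM events
  WHERE occurred_at >= DATE '2021-07-02' AND occurred_at < DATE '2021-08-01'
)
SELECT ROUND(100.0 * dau.n / mau.n, 1) AS dau_mau_pct
FROM dau, mau;
```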

The problem with defining being active as “showing up” is that this overstates the case, counting folks who churn or bounce (leave the product or site again soon after arriving without having engaged in any meaningful way). So instead, it’s more useful to require that a user exhibit some behavior (typically, you define a basket of events that qualify as making the user “active” and then identify all the users in a given time period who triggered any of the events in that basket).

Another model is to track active users in the loose terms of anyone who showed up (it can make a nice “vanity metric” to impress friends and the less perspicacious investors), and to additionally track a separate metric called something like “engaged users” where the bar for counting as engaged instead of merely active is higher (requiring, for example, that they trigger an event from a smaller and more rigorous list).

You can then compare your ratio of engaged users to active users to see where more lookie-loos can be converted into participants. Ultimately, the more engaged a person is with your product, the more likely you are to retain them in your userbase.
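A sketch of that two-tier comparison might look like this, where any event at all counts as merely active and the “engaged” basket of events is a hypothetical stand-in for your own, more rigorous list:

```sql
-- Engaged users as a share of active users for one month.
WITH active AS (
  SELECT DISTINCT user_id FROM events
  WHERE occurred_at >= DATE '2021-07-01' AND occurred_at < DATE '2021-08-01'
), engaged AS (
  SELECT DISTINCT user_id FROM events
  WHERE event_name IN ('message_sent', 'item_saved', 'purchase_completed')
    AND occurred_at >= DATE '2021-07-01' AND occurred_at < DATE '2021-08-01'
)
SELECT ROUND(100.0 * (SELECT COUNT(*) FROM engaged)
                   / (SELECT COUNT(*) FROM active), 1) AS engaged_pct_of_active;
```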

Retention

Fundamentally product growth relies on retention. Without robust retention, you can waste a lot of time, money, and energy building awareness, acquiring new users, activating them, only to find you’ve lost them in your leaky sieve (there’s that metaphor again!) like trying to fill up a bucket with a hole in it.

Retaining a healthy percentage of the users who try your product is the one sure way to grow in a compounding way, so you will likely spend more time analyzing retention and looking for the combination of experiences that correlates best with a satisfied returning customer or member or client.

One quick way to study how well you are retaining users is to look closely at new and returning users in any given time period. If the raw number of returning users (all users minus new users) is going up, that’s a pretty good sign. If the percentage of returning users (all users minus new users, divided by all users) is going up, that’s probably good too (but it can also mean that your new user acquisition isn’t keeping up).
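Here’s a sketch of that arithmetic against the hypothetical events table, treating anyone whose first-ever event predates the month as returning:

```sql
-- New vs. returning users for July: returning = first seen before July 1.
WITH firsts AS (
  SELECT user_id, MIN(occurred_at) AS first_seen
  FROM events
  GROUP BY user_id
), monthly AS (
  SELECT DISTINCT user_id FROM events
  WHERE occurred_at >= DATE '2021-07-01' AND occurred_at < DATE '2021-08-01'
)
SELECT
  COUNT(*) AS all_users,
  COUNT(*) FILTER (WHERE f.first_seen < DATE '2021-07-01') AS returning_users,
  ROUND(100.0 * COUNT(*) FILTER (WHERE f.first_seen < DATE '2021-07-01')
        / COUNT(*), 1) AS returning_pct
FROM monthly m
JOIN firsts f USING (user_id);
```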

In the first chart in Figure 6.5 you can see a slight upward trend as returning monthly users heads toward and eventually breaks 50k. The second chart shows that over that same time period the percentage of returning users grew appreciably from around 14% to around 18% (about a 30% increase).

Figure 6.5: Monthly returning users for a product expressed first in absolute terms and then as a percentage.

The mainstay of retention analysis goes deeper than comparing raw totals of new and returning users and instead follows specific users in cohorts to track when and how often they come back after initially appearing.

Figure 6.6 shows a weekly retention analysis for nearly half a year. Week 0 is shown at 100%, which means that the people who were active in that week are the cohort we will be tracking and of course all of them were by definition active in that baseline week. Week 1 shows just upward of 25%, which means that a little over one in four of the people who came in the benchmark week returned a week later.

Figure 6.6: A weekly retention chart.

By week 2 we are closer to 20%, and week over week fewer return, in a familiar long-tail pattern that flattens out somewhere around 5% many weeks out. Now, this is not a funnel. There’s nothing preventing the numbers for week 3 from being higher than those for week 2, for example, but the truth is this rarely happens. Retention charts almost always look this way, but the goal is to get them higher.

Note: retention can be calculated either in terms of the person returning on that specific day (which is the strict form) or the person having returned in any of the days up to and including that day (a looser calculation). Either analysis can show you interesting patterns.
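For the strict form, a cohort query might look something like this sketch (dates and schema hypothetical as before): users whose first event fell in the week starting May 3 form the cohort, and each later week counts only those active in that specific week. The looser calculation would instead count a user once they had returned in any week up to and including that one.

```sql
-- Strict weekly retention for the cohort first seen the week of 2021-05-03.
WITH cohort AS (
  SELECT user_id FROM events
  GROUP BY user_id
  HAVING MIN(occurred_at) >= DATE '2021-05-03'
     AND MIN(occurred_at) <  DATE '2021-05-10'
)
SELECT
  (e.occurred_at::date - DATE '2021-05-03') / 7 AS week_n,
  COUNT(DISTINCT e.user_id) AS retained,
  ROUND(100.0 * COUNT(DISTINCT e.user_id)
        / (SELECT COUNT(*) FROM cohort), 1) AS retained_pct
FROM events e
JOIN cohort c USING (user_id)
GROUP BY 1
ORDER BY 1;
```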

Any one of the data points on a retention chart like the one in Figure 6.6 can also be compared to the comparable weeks (or days, or months) for other cohorts. So, for example, you might note that the cohort in Figure 6.6 had 10% retention in week 10. You could then run the same chart for people whose week 0 is one week later and see how their week 10 retention compares to that 10%. This sort of time-series analysis can also be plotted on a chart.

Figure 6.7 shows the change over time in monthly retention of signed-up members, for mobile web users.

Figure 6.7: Member-signup retention data for mobile web users.

Cohorts don’t have to be strictly based on stretches of time. Cohorts are any groups of users collected for comparison purposes, so you can also query your data (depending on what you’re collecting) to find left-handed users, users over the age of 50, users grouped by their language preferences, etc., and compare their retention over time to notice if there are lessons to be learned or advantages to be gained.

Referral

If you’ve reached the point in your growth efforts where you’ve optimized your awareness, acquisition, activation, and retention models, then driving up referrals is the next lever you can turn to, at least in some businesses. Remember that this “pirate” mnemonic is geared toward startups, and not every model (most B2B businesses, for example) is positioned to make use of referrals.

Once someone using your software has been activated, engaged, and retained, they probably like your product! They may even recommend it to friends. If you provide them a way to do this (such as a share button), and build in triggers to suggest they recommend the app at moments when they are most likely to be inclined to do so, then you can start tracking which of your new users arrived by following a referral link from one of your existing customers.

Turning enthusiastic supporters into ambassadors or even evangelists for your product is the ultimate growth engine.

If you take the fraction of your users who recommend your product and then look at the percentage of those recommendations that deliver you a new user, you can calculate what is often called the coefficient of virality. What it really measures is how many new customers you can obtain just on the back of your existing userbase, without any additional marketing or advertising efforts.

From the trenches

Now, to be honest, in this day and age I really don’t like the metaphor of the virus. Let’s work on some healthier terminology, shall we? One person who’s thought about this a lot is Kevin Marks, who wrote a blog post called How Not to Be Viral (over a decade ago!) in which he recommended several potential alternative metaphors to embrace, all looking to nature but away from disease models (warning that anything that behaves like a disease is going to trigger an immune response):

  • Scattering lots of seeds (r-strategy, used by many plants and animals)
  • Nurturing your young (K-strategy, used by mammals)
  • Fruiting (“delicious with a seed in it”)
  • Rhizomatic (“from the roots up”)

Maybe you can come up with some more?

Anyhow, here’s the usual equation:

  • Number of invitations sent out by each existing customer: i
  • Percentage of invites converting into customers: c%
  • So then the viral coefficient K = i * c%, where K is the number of new customers each existing customer can successfully convert.

If K is below 1, then your userbase is going to shrink over time without additional acquisition. If it is exactly 1, then you are at a break-even point, so in theory any number above 1 is good and will help growth, though a K factor of 15 is obviously going to help a lot more than a factor of 1.02.
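A worked sketch of the same arithmetic, assuming three hypothetical tables: customers, invites (one row per invitation sent), and conversions (one row per invite that became a customer):

```sql
-- Back-of-envelope K factor: K = i * c.
WITH stats AS (
  SELECT
    (SELECT COUNT(*)::numeric FROM invites)     AS total_invites,
    (SELECT COUNT(*)::numeric FROM customers)   AS total_customers,
    (SELECT COUNT(*)::numeric FROM conversions) AS total_conversions
)
SELECT
  total_invites / total_customers          AS i,  -- invites per customer
  total_conversions / total_invites        AS c,  -- invite conversion rate
  (total_invites / total_customers)
    * (total_conversions / total_invites)  AS k_factor
FROM stats;
```

So, for example, if 1,000 customers send 2,000 invites and 300 of those convert, then i = 2, c = 0.15, and K = 0.3, well below the break-even point of 1.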

A nice thing about the referral model is that it closes the loop, and if you have a sufficiently high coefficient of spread, then you start to get virtuous cycles driving all your growth numbers upward.

Revenue

The final R in the pirate mnemonic is about the money. For startups it really is all about getting to sustainability and so revenue is frequently the key growth metric to optimize. Of course early-stage startups often don’t have any revenue, and in those cases user growth is treated as a proxy for eventual potential revenue growth.

See Chapter 8, “Getting the Money,” for more on revenue modeling and optimization.

Two Cautions

Data gets a bad rap in the UX world sometimes, as it can be associated with ignoring the humane aspects of a software experience in the service of making some specific numbers go up. But data analysis is just a tool in the toolkit alongside contextual inquiry and journey mapping, to name a few at random. It is neither inherently good nor evil and its impact depends entirely on how it is utilized.

Having said that, there are two serious pitfalls to beware when obsessively trying to optimize important metrics:

  1. Resist the urge to optimize “dark metrics,” meaning data powered by human unhappiness, deceit, or manipulation.
  2. Be very careful when choosing proxy metrics not to mistake the map for the territory.

These two dangers are of somewhat different natures: the first depends primarily on your values and your vigilance, while the second has more to do with reading too much into your own data.

Dark metrics are metrics that may benefit the business, particularly in the short term, but do so at the expense of tricking people (making it hard to cancel a subscription, let’s say, or scraping their address book without permission or clear disclosure).

Bad proxy metrics can lead to you optimizing a number for its own sake, at the expense of the intended goal the metric was chosen to represent.

There is a great example of this from the world of medicine. It’s anecdotal, as I am not a doctor, so don’t take my advice here, but the idea is this:

  1. High cholesterol is an indication of increased risk of heart attack.
  2. Taking statins can reduce one’s cholesterol levels.
  3. But this reduction in the metric may not be correlated with any reduction in risk of a heart attack.

That is to say, it is possible to change the metric without improving the underlying situation that the metric was chosen to represent. So choose carefully, and validate results with other indicators to avoid missing the forest for the trees.

Lukas Bergstrom, an ex-Googler product consultant, made the point that at least some quantitative data needs to be regularly reality-checked with qualitative research: “At least I’ve found it helpful to think of the product metrics and qualitative deep dives as opposite poles that I need to always be moving between.”

A Day in the Life of a Growth-Stage PM

Janet Brunckhorst, director of product management, Aurora Solar, a “growth stage” (series B) startup:

Share anything else that might help describe the environment in which you practice product management: A mix of R&D, established product, and new product for new segments.

How do you start your workday? Before work, I write. This isn’t for work per se, but means I’m thinking about how teams are structured and how to make them better. I typically check Slack and email while I’m doing breakfast and getting kids ready so by the time I’m starting for real, I can focus on some work. If I don’t have a meeting first thing, that will usually mean following up on questions from eng, testing something, or some other tactical work.

How do you spend the early morning? I’d love to say I spend my most productive hours digging into strategic work, but usually there’s at least one meeting. If I can, I dive into a deeper topic.

How do you spend most of the morning? Meetings

How does the morning end? Meetings. Twice a week I go to the gym at noon. I block out my lunch.

When do you take a lunch break, and what do you have for lunch? Noon is my lunch break. I usually have leftovers or make a sandwich or salad.

What do you do first in the afternoon? Meetings. If I don’t have a meeting, I sit down after lunch to dig into research or write some definition for upcoming products. Or prep for my next recruiting interview.

How do you handle “firedrills” or other unplanned work? First I try to get a clear sense of the urgency and the reasons it’s urgent. If it really is urgent, I give the relevant team a heads-up via Slack. If it’s clear enough what needs to be done, we can generally sort that out on Slack and get started. If not, I will kick off a Slack conversation with the people who can clarify. Only if that doesn’t work will I call a meeting. Sometimes this means pushing back on an initial request to meet.

How do you spend the bulk of the afternoon? Most days it’s — you guessed it — meetings. Also testing anything that was completed, responding to requests and questions that came in during the morning. And hopefully pushing the deeper work ahead. Some days we use the afternoon for workshops.

What do you do at the end of the workday? Wrap up any last questions. Check my calendar for the following day. Shut down my computer.

Do you work in the evening? When necessary.

Key Insights

  • All great product managers spend time diving deeply into data and seeking to understand it as thoroughly as possible.
  • Learn to work directly with data to avoid dependence on others and to make data analysis an ordinary part of your day.
  • Add instrumentation (product analytics) to everything you build, so that you won’t be “flying blind” when you launch.
  • But don’t track everything; focus on key user and system events.
  • Optimizing funnels can be a great complement to UX and other ways to improve an experience so that more people are able to complete it to their satisfaction.
  • Not all product analytics boil down to growth but many do.
  • Growth actually consists of many different elements that can all contribute to a growing base of users, paying or otherwise. Among these are awareness, acquisition, activation (and engagement), retention, referral, and sometimes revenue (or AAARRR for short).
  • Don’t manipulate or harm people using the excuse that the data made you do it.
  • Be careful not to chase the wrong metric off a cliff.

You can sign up to be notified when Product Management for UX People is available for order at Rosenfeld Media.

Product leader @dinp.xyz, writing Product Management for UX Designers (Rosenfeld Media) and Growing Product People (Sense and Respond) — more xian @crumlish.me.