An Algorithmic Approach to User Acquisition Automation

AlgoLift
Feb 13, 2020


Authors: Ben Young and Dmitry Yudovsky

Executive Summary:

AlgoLift automates user acquisition on Facebook, Google and Apple Search Ads by leveraging user-level day-365 (D365) predicted LTV and portfolio theory to buy ROAS-maximizing installs. Using algorithmic market models to understand the predicted future performance of user acquisition campaigns, AlgoLift takes a portfolio approach to automating user acquisition that is out of reach for manually operated systems. Drawing on learnings from over $100m of user acquisition spend, this paper outlines the problem AlgoLift solves, examples of the math used, and the results generated for clients.

Introduction

Today’s digital marketers are well served by a buying strategy that iteratively and continuously controls campaign bids and budgets to maximize long-term net revenue. The strategy should respond to CPI and LTV fluctuations and comply with accounting, product and advertising-platform requirements. As part of a financially responsible organization, digital marketing should operate on the basis of long-term return on ad spend (ROAS). In this paradigm, marketing dollars are an investment with an expected return over a realistic time horizon.

To that end, easily accessible metrics such as DAU, D7 ROAS, CPI, engagement, or retention give an incomplete picture of overall marketing effectiveness since they do not directly translate to absolute corporate performance. At the same time, campaign monitoring and optimization consume operator bandwidth without creating space for strategic long-term thinking. Pure automation without ML-based intelligence may free operator bandwidth, but it still forces the focus onto implementation and micromanagement. Intelligent Automation supported by long-term revenue forecasting can bring digital marketing closer in line with other revenue-driving teams.

In addition, conflicting marketing strategies and an inability to model the market expose marketing teams to ad hoc and unrealistic requests from disparate stakeholders. For example, the business may ask the marketing team to optimize for net revenue while simultaneously scaling the total number of acquisitions. The modern marketing team requires a theoretical framework for analyzing and making decisions based on empirical observation of supply and demand. With an empirical relationship between volume and cost, the marketer can satisfy competing objectives to the extent possible, setting realistic expectations with a practical strategy.

AlgoLift’s Intelligent Automation (IA) — based on (i) forecasting of long-term ROAS leveraging user-level pLTV projections and (ii) user acquisition through portfolio ROAS maximization — can alleviate the cognitive load of manual management. IA can be designed to achieve a variety of goals under realistic business and platform constraints. If done properly, IA frees the human operator from obsessive curation of individual campaigns, allowing marketing teams to focus on long-term goals, strategy and creative. This article discusses AlgoLift’s implementation of IA, tested in the wild on approximately $100 million of spend on Facebook, Google and Apple Search Ads over the course of approximately a year.

Theory and Design

Manual Digital Marketing Strategies

Table 1: Different UA strategies

Some common user acquisition (UA) strategies and their relative merits are described in Table 1. S1 attempts to run each campaign at the same, business-dictated ROAS target. For example, a UA manager may have the goal of having each campaign break even on day 180. Using day-7 ROAS as a proxy for that goal, the UA manager scales down or shuts off campaigns that do not achieve the objective. Campaigns that exceed the goal are given an increased budget until their CPI rises to a threshold value.

Table 2: Typical cost and return of mobile gaming clients in 2019 on Facebook, Google, and Apple Search Ads.

UA managers can attempt to execute strategies S2 and S3 manually, but the problem can be expressed mathematically and implemented programmatically at scale over multiple networks. S2 maximizes predicted ROAS at a spend level dictated by the client’s finance team. S2 is usually preferred where there is an opportunity cost to not spending the entire allocation (for example, when the next period’s budget is fixed to the actual spend of the previous period). S3 iteratively seeks the largest spend level at a ROAS goal (net gain or loss) acceptable to the business. This article discusses the theoretical details, practical implementation, and real-world results of these strategies.

Mathematical Formulation

AlgoLift’s IA consists of a network of forecasting (i.e., predictive user-level LTV), planning, and optimization algorithms that work together in complex ways. Several rejected optimization approaches are discussed first as background.

Rule-Based Iterative Campaign Adjustment

Let ϱᵢ(t) and xᵢ(t) be the ROAS and budget of the iᵗʰ asset at time 𝑡. A rule-based controller can be a ROAS maximizing strategy:

where ϱ is the desired ROAS and ⍺ is a scaling factor between 0 and 1. The algorithm increases allocation to good assets and decreases allocation to bad ones. Alternatively, a proportional feedback controller can achieve a set-point ROAS ϱ:

where K>0 increases xᵢ(𝑡+1) for campaigns with ϱᵢ(𝑡) above the set-point.
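
As a minimal sketch of these two rejected controllers (the exact update rules are not reproduced above, so the multiplicative and proportional forms below are illustrative assumptions rather than AlgoLift’s formulas):

    # Illustrative rule-based controllers; the exact forms are assumptions.

    def multiplicative_update(budget: float, roas: float, roas_target: float,
                              alpha: float = 0.2) -> float:
        """ROAS-maximizing rule: scale the budget up for assets beating the
        target ROAS and down for assets missing it (0 < alpha < 1)."""
        if roas >= roas_target:
            return budget * (1.0 + alpha)
        return budget * (1.0 - alpha)

    def proportional_update(budget: float, roas: float, roas_target: float,
                            gain: float = 0.5) -> float:
        """Proportional feedback controller toward a set-point ROAS:
        K > 0 raises x_i(t+1) for campaigns whose ROAS exceeds the set-point."""
        return budget * (1.0 + gain * (roas - roas_target))

    # Example: a campaign spending $1,000/day at 1.2x ROAS vs. a 1.0x target.
    print(multiplicative_update(1000.0, 1.2, 1.0))  # 1200.0
    print(proportional_update(1000.0, 1.2, 1.0))    # 1100.0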

The major issues with these types of approaches are:

  • ⍺ and K are arbitrary tuning parameters
  • Cross-asset constraints (such as spending 30% of budget on Android) are difficult to enforce since each asset is treated separately
  • Different assets may warrant a larger/smaller ⍺ or K depending on prior knowledge of asset behavior in different spend regimes
  • Some asset types have a bid and a budget with a nonobvious relationship

To make this approach work, explicit relationships between xᵢ(𝑡+1) for different assets need to be enforced, and the formulation quickly becomes a system of equations. The move from there to portfolio optimization (a cost function optimized subject to a system of equations representing constraints) is a natural one.

Efficient Frontier Portfolio

Digital marketing allocation cannot be easily modeled as a mean-variance portfolio because:

  • Asset risk and return depend on the number of units acquired. CPI generally rises at higher spend, and LTV may drop at higher install volumes
  • Risk of a financial instrument is typically due to its volatility. In marketing campaign management, risk comes primarily from the noisy relationship between changes in x and changes in ROAS
  • Tuning the portfolio risk tolerance can produce many counterintuitive behaviors. For example, our early attempts preferred low-ROAS, low-volatility assets over high-ROAS, high-volatility ones
  • Campaign reach or spend maximums are challenging to model and include explicitly in the risk term
  • There is a feedback loop between the uncertainty of a campaign’s future ROAS and the historical spend in the campaign, since low spend yields low install volume and thus high variance in ROAS estimates

In practice, a mean-variance portfolio requires an estimate of variance (risk) and return. We found that these values could not be easily estimated since most of the risk came not from return volatility but from CPI and LTV dynamics at different spend levels (auction dynamics) as well as temporal shifts in campaign performance. Including this structure in the market model proved crucial to solving the problem, as described in the next section.

Chosen Approach

We solve the digital marketing asset portfolio via nonlinear constrained optimization. The local problem is solved daily, but the global solution is found iteratively over multiple days through a balance of exploration and exploitation:

Equation (1):
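
A minimal sketch of one plausible form of Equation (1), consistent with the surrounding description (the specific objective and constraint encoding below are assumptions, not the published formulation):

    % Assumed form: choose bid/budget controls x_1, ..., x_c to maximize
    % predicted portfolio ROAS, subject to the constraint function f.
    \max_{x_1,\dots,x_c}\;
      \frac{\sum_{i=1}^{c} \widehat{R}_i(x_i)}{\sum_{i=1}^{c} x_i}
    \quad\text{s.t.}\quad
      f(x_1,\dots,x_c) \le 0,
    \qquad x_i \ge 0

Here x₁, …, x_c are the bid and budget controls, R̂ᵢ(xᵢ) is the forecast long-term revenue of the iᵗʰ asset at allocation xᵢ, and f collects the business and platform constraints discussed below.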

Solving Equation (1) requires a suite of sub-models that depend on the bids and budgets encoded in the control variable x. Choosing the form and complexity of these models is a fundamental challenge of IA since they affect the solvability and, more importantly, the emergent behavior of Equation (1) over many weeks. Several aspects must be carefully considered in choosing and training these submodels:

  1. Balancing data volume and latency for campaign performance (CPM, CTR, IPM, etc.) against the ability to detect temporal shifts
  2. Estimating new and future user LTV while robustly handling outliers (see the sketch after this list)
  3. Balancing model flexibility against over-fitting by leveraging robust model selection and training
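
As a minimal sketch of item 2, one common way to robustly handle outliers is to winsorize extreme per-user revenues before averaging; the helper below is purely illustrative and is not AlgoLift’s pLTV model:

    # Illustrative outlier handling for per-user LTV estimation (an assumption,
    # not AlgoLift's model): cap extreme "whale" revenues before averaging.
    import numpy as np

    def winsorized_mean_ltv(user_revenues, upper_pct: float = 99.0) -> float:
        """Cap revenues at the given percentile so a single outlier cannot
        dominate a campaign's LTV estimate."""
        revenues = np.asarray(user_revenues, dtype=float)
        cap = np.percentile(revenues, upper_pct)
        return float(np.clip(revenues, None, cap).mean())

    # Example: 999 users worth ~$1 each plus one $5,000 whale.
    revenues = [1.0] * 999 + [5000.0]
    print(round(float(np.mean(revenues)), 2))       # 6.0 (naive mean, dominated by the whale)
    print(round(winsorized_mean_ltv(revenues), 2))  # 1.0 (robust estimate)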

In practice, some heuristics and mathematical hacks are required to solve Equation (1) automatically, at scale, and with minimal human input. The constraints are softened with slack variables to avoid over-constraining; the solver’s inability to satisfy an infeasible constraint then becomes a warning that can be tracked for diagnosis over time, rather than a domain error. Constraints can also be prioritized logically so that the solver prefers benign violations, and the numerical optimization can be wrapped with other algorithms focused on pacing and pausing.
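
A minimal sketch of constraint softening with slack variables, assuming a toy two-asset portfolio and SciPy’s general-purpose SLSQP solver rather than AlgoLift’s production solver:

    # Toy example: maximize predicted revenue for two assets under a hard daily
    # budget and a softened platform-share constraint relaxed by a slack variable.
    import numpy as np
    from scipy.optimize import minimize

    PENALTY = 100.0      # price paid per unit of slack (constraint violation)
    MIN_IOS_SHARE = 0.5  # e.g. "half of spend should go to iOS"

    def predicted_revenue(x):
        # Toy diminishing-returns curves standing in for the forecasting sub-models.
        return 2.0 * np.sqrt(max(x[0], 0.0)) + 3.0 * np.sqrt(max(x[1], 0.0))

    def objective(z):
        x, slack = z[:2], z[2]
        return -predicted_revenue(x) + PENALTY * slack

    constraints = [
        # Hard constraint: spend exactly the $100 daily budget.
        {"type": "eq", "fun": lambda z: z[0] + z[1] - 100.0},
        # Softened constraint: iOS share >= 50% of spend, relaxed by slack.
        {"type": "ineq", "fun": lambda z: z[0] - MIN_IOS_SHARE * (z[0] + z[1]) + z[2]},
    ]
    bounds = [(0, None), (0, None), (0, None)]  # spends and slack are nonnegative

    result = minimize(objective, x0=[50.0, 50.0, 0.0],
                      bounds=bounds, constraints=constraints, method="SLSQP")
    ios_spend, android_spend, slack = result.x
    if slack > 1e-6:  # an unsatisfiable constraint shows up as slack, not a solver error
        print(f"warning: iOS-share constraint relaxed by {slack:.2f}")
    print(round(ios_spend, 1), round(android_spend, 1))  # ~50.0 50.0 (share constraint binds)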

Everything Else Resides in f(x₁, …, x_c)

Budget constraints, monthly pacing and asset pausing must be addressed to truly automate UA decision making.

Budget constraints include:

  • Min/max daily spend or monthly budget limits that are driven by corporate finance
  • Daily bid/budget change maximums dictated by network best practices (e.g., don’t change a budget by more than 35% a day)
  • Spend or install goals per ad network, geo, or device platform driven by product teams (e.g., half of the installs should come from iOS users)

These constraints are embodied in the inconspicuous but critical function f(x₁, …, x_c). A separate algorithm is required to calculate daily budget allocations that pace monthly spend constraints.
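
A minimal sketch of that kind of pacing logic (a hypothetical helper, not AlgoLift’s pacing algorithm):

    def paced_daily_budget(monthly_budget: float, month_to_date_spend: float,
                           days_remaining: int, current_daily_budget: float,
                           max_daily_change: float = 0.35) -> float:
        """Spread the remaining monthly budget over the remaining days, then
        clamp the change versus today's budget to +/- max_daily_change
        (per the network best-practice limit mentioned above)."""
        remaining = max(monthly_budget - month_to_date_spend, 0.0)
        target = remaining / max(days_remaining, 1)
        lower = current_daily_budget * (1.0 - max_daily_change)
        upper = current_daily_budget * (1.0 + max_daily_change)
        return min(max(target, lower), upper)

    # Example: $300k monthly budget, $120k spent, 12 days left, $9k/day today.
    print(paced_daily_budget(300_000, 120_000, 12, 9_000))  # 12150.0 (capped at +35%)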

Pausing (or asset removal) should be performed to:

  • Meet budget constraints that cannot be satisfied with gradually lowering spend
  • Prune poorly performing assets from the portfolio

Mathematically, pausing can be solved as a mixed integer program (see also Branch and Cut Method, Cutting Plane Method) where the control vector is augmented with integer variables which are either 0 or 1. Zero indicates that the asset is turned off. In practice, we iterate over a candidate set that is subject to a set of minimum criteria (e.g. at least 50 installs), rather than the whole portfolio. Our clients typically prefer to create new campaigns rather than unpause expired ones.
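
A minimal sketch of the candidate-set approach (the install threshold and ROAS floor below are illustrative values, not AlgoLift’s production criteria):

    # Illustrative pause-candidate selection over a small toy portfolio.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        installs: int
        predicted_roas: float  # forecast long-term ROAS at the current allocation
        spend: float

    def pause_candidates(portfolio, min_installs: int = 50, roas_floor: float = 0.5):
        """Only assets with enough installs are considered (estimates are too noisy
        below that); among those, flag assets well below the ROAS floor."""
        eligible = [a for a in portfolio if a.installs >= min_installs]
        return [a for a in eligible if a.predicted_roas < roas_floor]

    portfolio = [
        Asset("us_ios_video", installs=820, predicted_roas=1.4, spend=4_000),
        Asset("de_android_static", installs=35, predicted_roas=0.2, spend=300),    # too few installs
        Asset("br_android_video", installs=210, predicted_roas=0.3, spend=1_200),  # pause candidate
    ]
    print([a.name for a in pause_candidates(portfolio)])  # ['br_android_video']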

Exploration vs Exploitation: Handling Uncertainty

One very common question we receive about IA is how uncertainty in predicted campaign ROAS impacts the ability to make confident bid and budget optimizations. Campaign-level estimates are indeed very noisy: it is not possible to predict the LTV of future users from the behavior of a very small group of recent acquisitions without significant uncertainty in the estimate. Rather than casting doubt on the practice of automation in general, this concern can be considered from a more useful perspective.

The key is to recognize that, over a long enough time frame, a high frequency of changes will improve ROAS in the aggregate, despite each individual decision being uncertain. The portfolio optimization problem detailed above is solved repeatedly as more data becomes available, which causes uncertainty levels to drop. Other constraints (such as placing a minimum daily spend per campaign) ensure that the exploration vs exploitation balance is not pushed too far toward the latter. The problem of UA automation can then be considered a “multi-armed bandit” problem, with each iteration of the portfolio ROAS maximization above representing a “pull” from the bandit. One difference from the classic problem is that there is no single hidden “best” asset, but rather an optimal allocation of spend that can also vary over time.
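
As a minimal sketch of why repeated reallocation tolerates noisy campaign-level estimates, assume i.i.d. per-user revenue so that the standard error of a campaign’s observed ROAS shrinks roughly as 1/√installs (an illustrative simplification):

    # Uncertainty in a campaign's ROAS estimate falls as installs accumulate,
    # which is why each "pull" of the bandit becomes more informative over time.
    import math

    def roas_standard_error(per_user_revenue_std: float, cpi: float,
                            installs: int) -> float:
        """Approximate standard error of observed ROAS = mean revenue per user / CPI,
        assuming i.i.d. per-user revenue and a fixed CPI (illustrative assumptions)."""
        return per_user_revenue_std / (cpi * math.sqrt(max(installs, 1)))

    # With a $4 per-user revenue standard deviation and a $2 CPI:
    for n in (25, 100, 400, 1600):
        print(n, round(roas_standard_error(4.0, 2.0, n), 3))  # 0.4, 0.2, 0.1, 0.05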

Successfully Deploying Automation

Figure 1 — Flow diagram of the AlgoLift Intelligent Automation system.

Figure 1 shows a flow diagram of our IA system. Client goals (min ROAS, spend, constraints), measurements of historical performance, and information about new assets (networks, campaigns, etc.) are inputs to the system. It is necessary to create a data model that sufficiently encapsulates the intent of the client and market behavior. The data model feeds into a suite of data science and analytics packages that build a market model to dynamically forecast ROAS as a function of optimizer decisions.

The client’s constraints and objectives also need to be restated mathematically in a format suitable for numerical optimization. In our system, optimization iterates between (i) numerical portfolio optimization, (ii) asset pausing and launching and (iii) budget pacing. These three sub-modules ensure that new assets are introduced and old ones pruned at an optimal time and that monthly budget constraints are satisfied. Finally, an IA decision is made. The decision vector is transformed into a format digestible by a network Manage API. We also provide feedback to the client directly via a range of alerts that prompt new campaign creation and flag creative fatigue and performance opportunities. This includes a CSV-formatted file with allocations for networks that do not have Manage APIs.

UA automation requires a strategic balance between human-in-the-loop oversight and an autonomous black box. The solution to Equation (1) can produce surprising and locally counter-intuitive decisions because:

  • A suite of models is used to predict what will happen due to a budget change. Each model has noise, error, and bias, but all work together to make the best global decision
  • It is hard for a human observer to know the reason for a change on any asset because exploration, exploitation, constraints and noise happen simultaneously and continuously. Seemingly erratic or counter-intuitive budget drops or pauses to small campaigns do not signal error in the overall strategy. The portfolio decision balances risk and return weighted by overall impact on net revenue
  • Traditional campaign-level KPIs don’t make sense under Intelligent Automation

That said, sustainable and long-term ROAS gains are achieved at scale, as described in the following sections.

Results

Scaling Up

We rolled out IA over several months to some of our clients starting in early 2019. Overall, spend under management increased over time from $10,000 to $125,000 per day across Facebook, Google and Apple Search Ads, as depicted in Figure 2. The spend increase was due to new clients adopting IA and existing clients increasing their IA budgets. In total, 15 clients were onboarded over a 10-month period, representing thousands of campaigns (including several thousand Apple Search Ads keywords).

  • We made approximately 50,000 automated bid and budget adjustments
  • User-level data from 227 million installs were used to train our LTV models
  • 10 million users were acquired via Intelligent Automation

Figure 2 also shows ROAS for reference, since the different portfolios represented differently monetizing products and different IA strategies. Individual case studies are given later. Each client started with some version of S1 and transitioned to S2 and S3 via IA. The UA managers continued to update creative, set up new campaigns, and perform exploratory testing to augment the portfolio on an ongoing basis. Budgets and ROAS goals were updated at most weekly and at least monthly by the client. Figure 2 illustrates the ability of a well-configured IA system to scale and accommodate the many use cases of several clients.

Overall Portfolio Spend and ROAS

Figure 2 — Performance and spend over 15 clients, 3 networks, and thousands of assets, including thousands of Apple Search Ads keywords. This represents approximately 50,000 automated decisions. Historical breakdown of portfolios, assets, and decisions. Median daily spend ranged from approximately $10,000 to $125,000.

Portfolio Marketing Strategies in the Wild

Figure 3 illustrates a live example of AlgoLift’s IA product over approximately $650,000 of spend for a single client. About 40% of this client’s ARPI came from in-app purchases (IAP) and 60% from advertising revenue. The client’s overall ARPI was sub-$1, so UA was challenging (see Table 2).

In the first month, a relationship was established with the client’s campaign creation and creative team. IA was deployed on Google, Facebook, and Apple Search Ads. Portfolio goals were set in collaboration with their finance team. The portfolio was stabilized at maximum portfolio spend at a client-set ROAS goal (200% at day 365) in months 2 and 3 according to strategy S3, and then scaled up according to S2.

Multi-Strategy UA Automation

Figure 3 — Example of IA over a 4-month period and $650,000 of spend over 3 networks. Transition from ad-hoc strategy S1 during month 1; fixed ROAS (200%) at max spend during months 2 and 3 (S2); and spend scaling during months 3 and 4 (S3). Approximately 60% of this client’s ARPI was generated by ad views rather than IAP.

Over a 4 month period, we doubled this client’s daily spend at a higher average ROAS. In other words, IA doubled net revenue without any additional overhead. More importantly, the client’s team focused on product and strategy rather than bid/budget management.

Scaling Spend in Response to Events and Holidays

A big challenge in UA is scaling spend without increasing acquisition costs. Figure 4 illustrates IA’s ability to triple daily spend over 30 days without reducing ROAS. In month 5, the client required a massive increase in daily installs. IA was used to scale spend in months 5 and 6 without any drop in ROAS. This implies that the increased spend efficiently translated into more app installs and, more importantly, net revenue.

Overall Portfolio Spend and ROAS

Figure 4 — Maximum spend at minimum ROAS was achieved by month 0 and kept stable until month 5. In month 5 the client requested that IA triple spend at maximum ROAS. Immediately after the promotions, IA lowered spend. And then raised it again after the client released a revenue-driving update to their product in month 7. Months 5 and 6 illustrate IA’s ability to scale spend dramatically while maintaining or improving ROAS.

The results in Figure 4 were achieved by scaling spend on Facebook, Google and Apple Search Ads simultaneously but not uniformly. Figure 5 shows the relative breakdown of spend by network. Google had the greatest ability to scale, growing from 50% to 70% of the total budget. Facebook’s and Apple Search Ads’ relative contributions approximately halved during the scaling period; however, their absolute spend increased since the overall budget increased. In all likelihood, the relative contribution of Apple Search Ads and Facebook would have increased after the scaling period as IA discovered more optimal spend distributions within those networks. Figure 5 illustrates IA’s ability to scale spend efficiently by distributing growth across the most optimal channels to (i) maintain ROAS first and foremost and (ii) increase DAU.

Relative Network Allocation During Rapid Spend Increase

Figure 5 — Relative network allocation and normalized spend during a period of budget scaling. Google was able to scale spend while spend on Apple Search Ads was initially reduced and then increased slightly. Facebook’s absolute budget actually increased by 40% in this period, but its relative contribution to the portfolio diminished.

Figure 6 shows the relative spend breakdown by country before and after IA. Figure 6(a) is a snapshot of the spend distribution immediately after IA took over budget management. It represents the client’s prior global strategy and indicates (i) strong investment in Tier 1 English-speaking countries (United States, Great Britain and Canada) and (ii) approximately 25% investment in “other” countries, both English- and non-English-speaking. Heavy investment in “other” countries reflected the client’s preference for low-CPI acquisitions without consideration of their LTV.

Figure 6 — Spend distribution (a) before IA and (b) after IA and aggressive spend scaling.

Figure 6(b) shows the budget 6 months after IA iteratively and continuously explored and exploited the UA ecosystem. By then, UA was heavily dominated by US users. Furthermore, IA found a relatively smaller set of “other” geos that had low CPI but also yielded meaningful ROAS through LTV. There was no explicit goal to grow or reduce UA in specific geos, and we advised the client to make campaigns and advertising creative that targeted all relevant audiences. In short, Figure 6 shows an emergent corporate strategy driven by daily long-term ROAS decisions in collaboration with the client, but without the cognitive load of manually deciding, exploring, and tracking marketing spend.

Steering a Self-Driving Car

Figure 7 illustrates normalized results of a multi-month Intelligent Automation client. Between month 1 and 6, the client requested that IA maximize ROAS at a constant monthly spend. After month 6, the client started updating the monthly budget each month to align with internal finance goals, in-app events, and end-of-the-year budget constraints with strict instructions to adhere to monthly budgets within ±5%. Normalized spend and one year ROAS are shown. Equation (1) was used in concert with a pacing and pausing strategy. The client consistently created new campaigns and refreshed advertising creative.

Given a stable monthly budget goal, solving Equation (1) demonstrably raised and maintained high ROAS before month 6. But ROAS started to drop after month 6 as the client tested large monthly budget changes. This occurred for several reasons:

  • Equation (1) maximizes revenue/ROAS but is constrained to meet a budget goal; if the budget goals are restrictive, Equation (1) will prefer meeting the budget to maximizing ROAS
  • Network best practices limit how much and how often campaign budgets can be changed; thus a 50–100% change in budget can take several days to achieve (see the worked example after this list)
  • Changing budget moves the IA into areas where the relationship between spend and ROAS is less known; thus the balance between exploration and exploitation is swayed heavily toward exploration and away from ROAS maximization
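
As a worked example of the “several days” point, assuming the 35%-per-day cap above is applied multiplicatively on consecutive days:

    % Days n needed to grow a budget by a factor (1 + \Delta) under a 35%/day cap:
    1.35^{\,n} \ge 1 + \Delta
      \;\Longrightarrow\;
    n \ge \frac{\ln(1+\Delta)}{\ln 1.35},
    \qquad
    \Delta = 0.5 \Rightarrow n \ge 1.35 \;(2\ \text{days}),
    \quad
    \Delta = 1.0 \Rightarrow n \ge 2.31 \;(3\ \text{days})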

Stable spend vs. chasing spend goals

Figure 7 — Period of stable spend goals and ROAS followed by monthly spend adjustments correlated with a ROAS drop. In the first regime, IA almost exclusively optimized ROAS. In the second regime, IA heavily preferred changes that achieved the target monthly spend goal.

Figure 7 is an example of non-optimal interaction between corporate and mathematical strategies. Spend undulates due to changing monthly budget objectives, resulting in non-optimal ROAS outcomes. Automation is meant to scale a corporate strategy through math (optimization, statistical modeling) and software (API integration). In this case, as designed, automation did reduce the cognitive load of pacing across many spend channels and did meet the monthly budget, thus reducing finance risk. But the more important goal of ROAS improvement may have been lost.

Summary

Algorithmic automation of user acquisition brings a mathematically sound, technical advancement to manual campaign analysis and optimization. If executed correctly, it frees up significant bandwidth for user acquisition teams. Then, the human operator can focus on broader tasks which machines cannot yet fully automate:

  • Campaign ideation and creation
  • Creative and ad-copy creation and iteration
  • Exploration of new countries or marketing sources
  • Event or product-driven marketing decisions

This moves their focus from laborious, repetitive and inefficient tasks to creative and strategic initiatives. Most importantly, it realigns an organization along clearly understood and defined long-term goals and delivers significant improvements in ROAS, moving user acquisition from a cost center to an investment in the future profitability of the business.

https://www.algolift.com/#contact


AlgoLift

AlgoLift is a mobile app user acquisition automation platform across Facebook, Google, Apple Search and leading video ad networks. www.algolift.com