The Trading Strategy

Daniel Aisen
Published in Proof Reading
11 min read · Oct 20, 2021

From the outside, an institutional trading platform may seem like a giant black box of code. An asset manager sends in an order to buy or sell a particular stock, the black box crunches some numbers and in turn ships out smaller child orders to the street. If the algo does a good job, those child orders trade at relatively good prices.

But beneath the hood, there are many different jobs happening. There are dozens of specialized software applications performing various essential tasks such as FIX translation, order validation, risk management, monitoring, market data consumption, and post-trade processing. In this post, we dive into the key piece of code responsible for making trading decisions.

The Algo Engine

A schematic of the Proof Trading System and the ecosystem in which it is embedded.

The algo engine (or container) is the core piece of the system that contains the trading logic. It takes the client’s high level instruction, and decides how and when to slice out child orders based on the current state of the market and pre-loaded quantitative models. It also performs many related tasks including order validation, (local) order management, consumption of market and reference data, and risk checks. The algo container provides helpful callbacks and contains numerous safety mechanisms (all built by our CTO Prerak) so that an algorithm developer like myself can focus on one thing: writing trading code. The container keeps a tight leash on the trading logic, strictly enforcing that it behaves appropriately and stays within the customer’s instructions. Even if the algo has an unfortunate bug or tries to “go rogue,” the container blocks it from causing damage. The algo also has numerous checks within itself to ensure it acts properly, but after all of our past experiences building trading systems from the ground up, we have learned to take a belt and suspenders approach.
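The "tight leash" idea can be sketched in a few lines: before any child order reaches the street, the container checks it against the parent order's constraints. This is a minimal illustration of the concept, not Proof's actual API — all class and field names here are invented.

```python
# Hypothetical sketch of the container's safety checks: every child order the
# strategy emits is validated against the parent order before it can go out.
from dataclasses import dataclass

@dataclass
class ParentOrder:
    symbol: str
    side: str           # "BUY" or "SELL"
    quantity: int       # total shares the client asked for
    limit_price: float  # client's limit

@dataclass
class ChildOrder:
    symbol: str
    side: str
    quantity: int
    price: float

class AlgoContainer:
    def __init__(self, parent: ParentOrder):
        self.parent = parent
        self.open_or_filled = 0  # shares already sliced out

    def submit(self, child: ChildOrder) -> bool:
        """Block any child order that strays outside the client's instructions."""
        if child.symbol != self.parent.symbol or child.side != self.parent.side:
            return False  # a "rogue" slice in the wrong name or direction
        if self.open_or_filled + child.quantity > self.parent.quantity:
            return False  # would overfill the parent
        if self.parent.side == "BUY" and child.price > self.parent.limit_price:
            return False  # violates the client's limit
        if self.parent.side == "SELL" and child.price < self.parent.limit_price:
            return False
        self.open_or_filled += child.quantity
        return True
```

Even if the trading logic above this layer has a bug, a gate like this caps the damage to at most the client's own instructions.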

Prerak has plans to write a blog post describing the algo container in all its glory, but in this post I focus on just the trading logic itself, the piece that Allison and I are responsible for.

The components of the algo container.

The Trading Strategies (Algos)

At a high level, we have two strategies/trading instructions that the customer can specify:

  • VWAP: trades in-line with market volume based on a dynamic volume prediction model.
  • PROOF: a hybrid liquidity seeker/impact minimization algo that balances speed of execution with avoiding undue market impact as per a home-grown quantitative model, with an opportunistic component searching for block liquidity in parallel.

Additionally, both algos have two available override checkboxes: 1) must complete, and 2) exclude the auctions. We try to keep our order ticket as simple as possible, while still covering the most common use cases.

Both of these top level strategies are implemented via a common software architecture. As we peel back the layers of the onion, it turns out that the trading strategy is itself a collection of modular pieces, each with an important role to play.

Trading Logic Components

Now we examine components within a component (the Strategy) within a component (the Algo Engine) of our trading platform — kind of like Inception. The strategy has 4 main pieces: the Algo, the Scheduler, the Worker, and various Routers.

The components of the trading strategy.

Algo

This top layer is the main orchestrator of the trading logic. It manages all upstream interactions with the customer (e.g. handling new orders and amendments), and it creates and manages the Scheduler and the Worker. The Algo is also responsible for slicing shares to the opening and closing auctions.

Scheduler

The scheduler is the bridge between the various quantitative models and the trading logic. The primary difference between the two trading strategies is that they invoke different versions of the scheduler. There are 3 models encapsulated by the scheduler:

  1. Pre-trade model: suggests a total number of shares to trade over the duration of the order (for need-not-complete orders).
  2. Impact minimization model: suggests the pace at which the algo should trade at any given time (for PROOF orders).
  3. Dynamic volume prediction model: predicts the amount of volume throughout the day and in the auctions based on what’s happened historically and so far today (for VWAP orders).
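To make the scheduler's role concrete, here is a toy sketch of how a VWAP scheduler might turn a predicted cumulative volume curve into a per-interval share target. The curve, the numbers, and the class name are all invented for illustration; the real models are far richer than a static lookup.

```python
# Illustrative sketch: map a predicted cumulative volume curve to a share
# target for each interval of a VWAP order.
class VWAPScheduler:
    def __init__(self, order_qty: int, volume_curve: list[float]):
        # volume_curve[i] = predicted cumulative fraction of the day's volume
        # traded by the end of interval i (monotonic, ending at 1.0)
        self.order_qty = order_qty
        self.curve = volume_curve

    def shares_scheduled_by(self, interval: int) -> int:
        """Total shares that should be done by the end of this interval."""
        return round(self.order_qty * self.curve[interval])

    def interval_target(self, interval: int, done_so_far: int) -> int:
        """Shares to work during this interval, catching up any shortfall."""
        return max(0, self.shares_scheduled_by(interval) - done_so_far)
```

An impact-minimization scheduler would expose the same interface but derive `shares_scheduled_by` from its own model rather than a volume curve — which is exactly why the two strategies can share one software architecture.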

Worker

This middle layer manages and deploys three types of intra-day trading tactics (i.e. routers), and shuffles between them as dictated by the Algo. The three intraday tactics/routers are for passively adding liquidity (POST), immediately removing it (TAKE), and opportunistically seeking block liquidity (OPPO).

Routers

These lower layer tactics allocate and reshuffle orders to one or multiple external destinations at a single price level. A router can also stitch together multiple sub-routers, for example a serial midpoint all-or-none router followed by a spread-crossing order sent to the IEX Router.

  1. POST: generally starts with roughly a 2:1 split-post on the near side between IEX D-Limit and the primary exchange. It proportionately reshuffles between the two legs based on where it gets filled, and it repegs to the inside roughly every 30–60 seconds if the stock drifts away.
  2. TAKE: first serially pings a handful of dark pools using all-or-none IOCs, and if that fails, it crosses the spread to remove liquidity using the IEX Router.
  3. OPPO: searches for block liquidity by split-posting midpoint orders across several dark pools and exchanges using a high minimum quantity, generally in the thousands of shares. We may supplement this with other order types such as conditionals in the future.
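The POST router's allocation logic can be sketched as below. The rough 2:1 split between IEX D-Limit and the primary exchange comes from the post itself; the proportional-reshuffle arithmetic and the venue keys are illustrative assumptions.

```python
# Sketch of the POST router's near-side split and fill-driven reshuffle.
def initial_split(qty: int) -> dict:
    """Roughly 2:1 near-side split between IEX D-Limit and the primary."""
    iex = round(qty * 2 / 3)
    return {"IEX_DLIMIT": iex, "PRIMARY": qty - iex}

def reshuffle(remaining: int, fills: dict) -> dict:
    """Re-allocate remaining shares proportionally to where we got filled.

    A leg that is filling faster receives a larger share of what is left.
    Falls back to the 2:1 split when nothing has filled yet.
    """
    total_filled = sum(fills.values())
    if total_filled == 0:
        return initial_split(remaining)
    iex = round(remaining * fills.get("IEX_DLIMIT", 0) / total_filled)
    return {"IEX_DLIMIT": iex, "PRIMARY": remaining - iex}
```

The repegging behavior (moving back to the inside every 30–60 seconds if the stock drifts) would sit on top of this as a timer-driven cancel/replace.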

Life-cycle of an Order

To really understand the interaction between these components, it is helpful to walk through the specific logic at the various phases of an order, from creation through completion or cancellation.

Order Arrival

Upon receipt of a valid order, the Algo subscribes to market data and then creates the Scheduler. For a VWAP order, the Algo creates a VWAP Scheduler; for a PROOF order, an Impact Minimization Scheduler. Then, the Algo asks the container for various wakeup callbacks: at the start time, shortly before the auction cutoff times, at the cleanup time (i.e. shortly before the end time), and at the end time.

Wakeup Logic

The container wakes the Algo at those various times as requested:

  1. Upon a pre-auction wakeup, the Algo slices shares to the primary auction.
  2. At the start time, the Algo creates the Worker and begins the first interval (see Interval Logic below).
  3. Upon the cleanup wakeup, the Algo tells the Worker to complete any remaining scheduled shares.
  4. Upon the end time wakeup, the Algo cancels any outstanding orders and then provides an out to the client.
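The wakeup mechanism amounts to a time-ordered callback queue: the Algo registers the times it cares about, and the container fires each callback as its time arrives. This is a toy sketch with plain floats for times and hypothetical event names mirroring the list above.

```python
# Toy sketch of the container's wakeup queue: the Algo requests timed
# callbacks; the container pops and fires them in time order.
import heapq

class WakeupQueue:
    def __init__(self):
        self._heap = []  # min-heap of (time, kind) tuples

    def request(self, time: float, kind: str):
        heapq.heappush(self._heap, (time, kind))

    def fire_through(self, now: float) -> list[str]:
        """Pop and return every wakeup whose time has arrived, in order."""
        fired = []
        while self._heap and self._heap[0][0] <= now:
            fired.append(heapq.heappop(self._heap)[1])
        return fired
```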

Interval Logic

Both strategies treat the life of the order as a series of distinct "Intervals" — generally 5–10 minute windows of time that share common trading behavior.

At the start of a new interval, the Algo requests two pieces of information from the Scheduler:

  1. The length of this interval, so as to request a wakeup at the end. Intervals are generally about 5–10 minutes in length, but depend on other factors too like the time of day — intervals are stretched longer in the morning when spreads are wider, and compressed as we get later in the day. Additionally, smaller orders have longer intervals and vice-versa. All interval durations are randomized.
  2. The number of shares scheduled to trade during the interval — this is one of the two key differences between the two strategies. The VWAP Scheduler uses the dynamic volume prediction model to predict what percentage of the day’s volume will have traded by the end of the current interval. The Impact Minimization Scheduler gets this value by running a dynamic programming cycle of the impact minimization model. The Algo then uses this information to tell the Worker to complete any shares outstanding from the previous interval and start working the new interval quantity.
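The interval-sizing rules above can be sketched as a small function: longer intervals in the morning when spreads are wide, shorter ones later in the day, stretched for small orders, and randomized throughout. The base durations and scaling factors here are invented for the example; only the qualitative shape comes from the post.

```python
# Illustrative sketch of interval sizing with time-of-day, order-size, and
# randomization effects.
import random

def interval_minutes(hour: float, order_is_small: bool,
                     rng: random.Random) -> float:
    # Stretch intervals in the morning, compress them later in the day.
    base = 10.0 if hour < 11 else (7.0 if hour < 14 else 5.0)
    if order_is_small:
        base *= 1.2  # smaller orders get longer intervals
    return base * rng.uniform(0.8, 1.2)  # randomize to avoid a footprint
```

Randomizing every duration is what keeps the algo's cadence from leaving an identifiable pattern in the market.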

The second key difference between the two strategies is the liquidity seeking piece. Throughout the life of a PROOF order, the Algo tells the Worker to search for block liquidity at the midpoint with all remaining unscheduled shares using an Opportunistic Router.

Additionally, the Algo requests “catch up” wakeups throughout the interval where it checks if the Worker is falling behind, and if so, tells it to cross the spread to keep up pace.
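A "catch up" check might look like the following: partway through an interval, compare shares done against a straight-line expectation of the interval's schedule, and return the shortfall to take aggressively. The straight-line baseline and the tolerance parameter are assumptions made for this sketch.

```python
# Sketch of the catch-up wakeup check: if the Worker has fallen behind the
# interval's schedule by more than a tolerance, cross the spread to keep pace.
def catchup_shares(interval_qty: int, done: int,
                   elapsed_frac: float, tolerance: float = 0.1) -> int:
    """Shares to take aggressively if we've fallen behind schedule.

    elapsed_frac: fraction of the interval that has passed (0.0 to 1.0).
    tolerance: allowed slippage behind schedule, as a fraction of the
               interval quantity, before acting.
    """
    expected = interval_qty * elapsed_frac
    shortfall = expected - done
    if shortfall <= interval_qty * tolerance:
        return 0  # close enough to schedule; keep working passively
    return round(shortfall)
```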

Order Amendment

Upon receipt of a customer replace request, the Algo generally cancels or amends the Worker to comply with the new instructions. In most cases, the Algo also creates a new Scheduler to use from this point forward.

Trading Objectives / Design Process

We designed this algo by thinking through how we would approach execution for ourselves if we were on the buy-side. The driving principle of course was best execution, with less emphasis on a “consistent” customer experience (e.g. the algo doesn’t immediately trade 100 shares so the customer can see it’s working).

Low Level

We have a great deal of experience in the world of sub-second/sub-millisecond trading dynamics, so designing that piece was relatively straightforward. At the micro-level, the objectives are twofold: avoiding adverse selection when adding liquidity, and capturing as much volume as possible at the best price(s) when removing it:

  • Avoiding adverse selection: When the market transitions to a new price level, there is a flood of trading activity where the fastest proprietary trading firms race to pick off resting orders at the old price level and establish queue position at the new price level (these are two similar but different strategies). It is not plausible for a sell-side firm to compete effectively in these races, so the only viable way to prevent adverse selection is to utilize built-in protections on trading venues, such as IEX D-Peg and D-Limit, and Nasdaq MELO.
  • Removing liquidity: rather than reinvent the wheel here, we are starting off by just using the IEX Router when crossing the spread, which costs only 1 mil and gets ~99% fill rates. In most cases, we first check various dark pools with midpoint all-or-none orders prior to crossing the spread. Because these midpoint pings are all-or-none, either the full amount gets done at the mid, or nothing happens and we’ve just wasted an immaterial amount of time.
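The dark-pool-first removal logic described above reduces to a simple serial loop. Here is a minimal sketch; the pools are stand-in callables, and the fallback stands in for routing via the IEX Router.

```python
# Sketch of the TAKE flow: serially ping dark pools with midpoint all-or-none
# IOCs, then fall back to crossing the spread if none fills.
def take_liquidity(qty: int, dark_pools: list, cross_spread) -> str:
    """Try each pool for a full-size midpoint fill; else cross the spread.

    dark_pools: callables returning True on a complete all-or-none fill.
    cross_spread: callable that executes qty aggressively (e.g. via a router).
    """
    for ping in dark_pools:
        if ping(qty):  # all-or-none: full fill at the mid, or nothing
            return "filled_at_mid"
    cross_spread(qty)
    return "crossed_spread"
```

Because each ping is all-or-none, a failed attempt costs only time, never a partial fill at a worse price — which is what makes the serial approach cheap to try before paying the spread.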

High Level

Even more important than the low level strategy is the high level strategy — i.e. how the algo should spread out trading throughout the day. We have devoted a great deal of our energy toward researching this challenge. We started with the 4 major questions below, each of which stoked a major research endeavor. Our research on these topics is only just beginning, and we will continue to iterate from here.

  1. How do we measure success at a high level?
  2. How much volume do we expect to trade in the market throughout the day?
  3. How much should our algo be willing to trade during the life of the order?
  4. How should we pace our trading activity throughout the life of the order?

Demonstrably better performance is the ultimate goal, but these questions are all extremely difficult to answer, as higher level trading data is sparse and noisy. That, combined with an opaque industry-at-large, makes it tough to build confidence in the performance numbers or find a reliable point of reference. We are optimistic that our continued research focus on metrics, our collaboration with partners, and the growing set of our own trading data will help us to eventually get there.

In the meantime, our approach has been to bite off the significant low hanging fruit at the micro-level simply by avoiding harmful/conflicted practices and properly utilizing exchange and dark pool order types.

At the higher level, we are able to at least use the noisy performance numbers and individual case studies to build confidence that the algos are behaving reasonably and as designed, and that our historical quantitative models are consistent with the strategies’ real-world outcomes. For the purpose of external marketing, all we can do is lean heavily on our transparency around the design process and our research findings to demonstrate that our approach is compelling.

Elegance and simplicity

The product, the interface, the quantitative models, the system architecture — all of these things have a natural tendency to bloat and become more complicated with time. Unnecessary complexity introduces risk and noise, and it is just as difficult to be disciplined enough to keep a solution refined and elegant as it is to come up with that solution in the first place. Throughout our initial design and build, we have attempted to keep things simple and intuitive unless there is a clear reason to make them more complex, and we will continue to be vigilant as we iterate in the future.

Conclusion — why divulge this information

Most institutional equities brokers are extremely opaque when it comes to the inner workings of their execution algos. They generally give two excuses, but we don’t think these hold up under scrutiny:

  1. “If we publicly share how our algo works, adversarial counterparties can use this information to detect when we are present in the market and front-run or game us.”
  2. “Our algo is our proprietary secret sauce; if we divulge how it works, our competitors will steal our ideas.”

For the former concern, our response is twofold: if your strategy leaves an identifiable footprint in the market, adversarial counterparties are going to find you whether you tell them or not. Conversely, if your trading activity is sufficiently randomized, it can be undetectable even to a counterparty who knows exactly how the algorithm works.

As for the latter concern: granted, when a competitor copies your key differentiator, it dilutes your brand and muddles your marketing efforts. We had this happen to us at IEX, and it was irritating. But unlike a prop trader with a unique source of alpha, an execution strategy is not materially hurt when others copy it. Ultimately, the best thing for the buy-side is for all of their brokers to employ an effective approach. Our stance is that our latest good idea is probably not our last, and our competitors are probably slower-moving than we are. We welcome them to utilize and riff on our ideas.

On the other hand, we think the upside to being transparent is very significant. Transparency creates accountability — it keeps us honest, aligns our incentives, and builds trust with our customers. It also creates a wonderful vector for collecting feedback and getting better, not just from our customers, but from friendly competitors, or even those outside the industry. Speaking of which, if you have feedback on our approach thus far, or if there are additional details you would like us to share, please let us know!

We believe the single biggest problem with this industry is the ubiquitous opacity. Conflicts of interest and harmful practices flourish in the shadows — and we’re not just talking about harmful trading practices. Our wish is to be an example that transparency can still be a sound business strategy and hopefully nudge this industry in the right direction, one step at a time.
