This past October, the SEC hosted a two-day roundtable where U.S. equity market structure experts gathered to discuss and debate several hot issues surrounding market data and market access. Unsurprisingly, the representatives from the major exchanges defended the status quo, whereas all of the other participants were highly critical of the continuously and dramatically rising costs that the exchanges have imposed on the rest of the industry.
One area that frequently came up was the duality of market data products available to industry participants. On the one hand, there are the exchanges’ direct feeds, the fastest, most informative, and most expensive data feeds, available directly from the sources. On the other hand, there are the SIPs — the public market data aggregation and dissemination platforms mandated to exist by the SEC for the good of the industry, but also operated exclusively by the exchanges. Though the latency of the SIPs has improved substantially over the past few years, they are still slower and provide less information than do the direct exchange feeds in combination. It is easy to see how a group of for-profit entities operating both of these competing products might be conflicted.
During the roundtable, one particular moment stood out and has stuck with me ever since. Mehmet Kinak, the head trader at T. Rowe Price, and one of the most knowledgeable and vocal experts on market structure, when asked about the value of the SIP to him, said the following:
As far as brokers having a choice of whether or not they can use the SIP or direct feeds, that doesn’t exist. There is no choice there. If a broker is routing using SIP data, they are not routing my flow. They can route someone else’s but they’re not eligible to get my flow, period. That’s not negotiable.
And it’s kind of funny that people say, well, we offer different services for different people. Trading is a zero-sum game. Everyone has to understand that. This isn’t like Southwest where I get to pay $15 extra dollars and I can board the plane faster than anyone else. We’re all going to get a seat, we’re all going to end up at the same destination. I might have to sit in the middle and two people next to me I don’t like but, ultimately, I’m still getting where I need to go. That’s not trading.
If I’m slower than the other person, I lose. That’s it. That’s the fraction of time we’re talking about.
So when someone says, hey, from a commercial enterprise, it makes sense for you to use a faster system over a slower system — no. This is a best execution obligation. We are obligated to try and produce best execution on every single order that we have. If our brokers are not aligned in that manner to use the most direct, the fastest, the most robust feeds they can get their hands on, then we will trade with someone else.
I believe Mr. Kinak speaks for the buyside, and that most market structure savvy equity traders would concur that a broker trading on their behalf must be extremely fast to provide high quality execution in today’s market.
But is it really true? I’ve been tossing around this notion in my head nonstop since we made the decision to start an agency broker. Should we make the immense upfront capital investment to build an extremely low latency platform? Should we commit to buying the fastest data feeds, co-location, and hardware, and do we need to continuously invest afterwards as technology further improves? Obviously the answers to these questions will have an enormous influence on our business planning and fundraising needs. Execution quality is our paramount goal, so if lower latency will lead to materially better execution, then we really have no choice. But conversely, if the tangible benefits are overblown and we are wasteful on the capital expenditure side, any unnecessary costs will ultimately be borne by our customers.
Here’s our attempt to unpack this question of the importance of low latency on the sell-side. This piece reflects our current thinking, but it’s worth mentioning that we are entirely open to hearing counter-evidence. Our plans will surely evolve as we continue to explore this topic and other areas of the market during our build out.
The need for speed
The point of speed is reaction time. There are many trading scenarios where something of value becomes available and apparent, kicking off a race to capture that value. Most of these races are winner-take-all, such as capturing an arbitrage opportunity between fungible securities, picking off resting orders during a price move, or grabbing the spots at the front of the line of a newly established level.
At IEX, the protections we built into our system relied on a real-time probabilistic model similar to those used by proprietary traders, but we had the advantage of a speed bump. Even though IEX’s system is not particularly competitive with the fastest traders, all members entering orders on IEX are subject to a 350 microsecond delay, whereas the exchange processes market data updates right away. As such, IEX only needs to be within 350 microseconds of the fastest participant for its protections to be effective, which is an enormous buffer.
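The arithmetic of that buffer can be sketched in a few lines. This is an illustrative model, not IEX's actual logic, and the reaction times below are hypothetical figures chosen only to make the point:

```python
def protected(defender_reaction_us: float,
              attacker_reaction_us: float,
              bump_us: float = 0.0) -> bool:
    """A resting order is protected if the defender (e.g. the exchange's
    repricing logic) reacts to a market-data event before the fastest
    attacker's aggressive order, delayed by any speed bump, can execute.
    All times are in microseconds."""
    return defender_reaction_us < attacker_reaction_us + bump_us

# Suppose the fastest prop firm reacts in 5us while the exchange's own
# logic needs 200us (both numbers hypothetical).

# With IEX's 350us inbound delay, 200 < 5 + 350, so the peg is safe:
assert protected(200.0, 5.0, bump_us=350.0)

# A broker on the same footing as everyone else gets no such buffer,
# so the same 200us reaction time loses the race outright:
assert not protected(200.0, 5.0, bump_us=0.0)
```

The point of the sketch is that the speed bump converts a race the exchange would lose by two orders of magnitude into one it wins comfortably.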
A broker, however, is on exactly the same footing as all other market participants on all trading venues. If that broker is a single picosecond late in any of these racing situations, that’s too slow.
What does it take to be fast?
The SEC roundtable focused on market data products, but that’s only one of a multitude of technologies necessary to be competitive on speed. Here’s the list that comes to mind, although this is by no means my area of expertise:
- Market data feeds — each exchange family offers several real-time market data products. Generally, the fastest and most useful such product is their order-by-order feed, which is delivered over their respective binary protocols (ITCH, PITCH, XDP). Allison’s currently working on a broad overview of market data options, which we hope to share in a future post.
- Networking — there are several network vendors that provide direct fiber links connecting the major equities exchange data centers (which are located in different parts of New Jersey) with each other and with other major financial centers like Chicago. The fastest options, though, seem to be laser and microwave/millimeter-wave connections. There appear to be network vendors in this space as well as options offered directly by the exchanges (1, 2). Pricing information does not appear to be readily available.
- Co-location — NYSE and Nasdaq both directly rent rack space in their respective data centers. BATS operates out of an Equinix data center, and they also allow co-located members to connect directly, but I don’t believe they monetize the rack space itself. My impression is that many low-latency strategies require co-located equipment in each exchange’s data center, and that each location would operate trading logic specific to that exchange while also consuming data from the others. As an aside, I believe at one point NYSE would only offer direct market data connectivity to co-located customers, and if you weren’t co-located you were forced to consume their direct feeds out of a separate data center many miles away (to deter customers from buying up surrounding real estate and setting up their servers next door). I don’t think this is still the case, though.
- Connectivity — purchasing co-located server racks isn’t enough, however, as a member still needs to actually connect to the exchanges’ servers, and the exchanges charge surprisingly high fees to lease the fastest available ethernet cables required to connect.
- Hardware — obviously, you also need fast servers themselves. I’d imagine that the top players use custom builds and fairly regularly upgrade their equipment.
- Feed handlers — there are many low-latency market data processing vendors out there — at IEX, we used Redline — although I’d imagine the fastest shops write their own handlers in-house. Some firms and vendors encode their data feed parsing logic into hardware using FPGAs, which I believe is the fastest option.
- Other inputs — in addition to equities market data, other low-latency data inputs may be important as well, depending on the strategy. Examples that come to mind include market data for other asset classes or regions and alternative data such as news.
While some sell-side shops utilize a selection of these ultra-low latency technologies, to be competitive with the top prop firms, one would need to embrace all of them and continuously make additional investments to remain elite. And even if a broker did choose to go down this path, client-facing firms are handcuffed by their monitoring, compliance, and regulatory requirements, such as performing 15c3-5 risk checks, so it just doesn’t seem like a realistic prospect.
Who are the fastest market participants?
Based on our experience at IEX, it seems there are a small handful of proprietary trading firms that are a step faster than all other market participants. These firms collectively run a variety of trading strategies that leverage their speed, including but not limited to market making. Allison and I spent a great deal of our time at IEX focusing on one specific component of some of these strategies: crossing the spread after observing/predicting a likely imminent price tick (i.e. an aggressive sub-millisecond alpha strategy).
It’s easy to understand why a great prop trader should make a great agency trader too: you spend all day identifying opportune times to buy and sell stocks aggressively, and proactively positioning yourself to pick up opportune passive trades as well. If, when agency trading, you picked up all the great trades that happened to line up with your client’s direction, and occasionally took on some slightly less opportune but still decent trades to make sure you got your client done, that would make for a great execution strategy.
Are any agency brokers actually doing this? Are any of them good enough to pick up some of those best trades? My intuition says probably not. The US stock market is a world of haves and have nots, and at least based on our experience at IEX, the universe of haves is tiny. There are a handful of shops that started off exclusively doing proprietary trading and have since ventured into the sell-side space as well, but I am skeptical that even these firms are leveraging their speed to create a tangible advantage for their customers. The agency businesses of these firms are still constrained by the same regulatory and compliance requirements I mentioned earlier. Plus, these new business lines are supposedly walled off from their proprietary trading teams, so it’s unclear the extent to which they can even leverage the firm’s best technology and expertise.
Now perhaps some agency brokers are picking up that second tier of trades: not quite good enough to tempt the prop traders, but still decent. This idea seems plausible, but I’m not convinced it’s happening either.
What do broker algos actually do?
The good news is broker algos generally don’t employ tactics that depend on speed anyway. Here are several examples of standard agency algo behaviors and how latency comes into play.
- Posting limit or pegged orders — this is the primary action an algo takes: sending a resting order into the market. Price, venue, and order type selection are critical here, but unless the algo is racing to join a newly established price level after a tick (in which case the algo probably would have been better off sending that venue a native pegged order beforehand), speed does not come into play.
- Taking out the quote — if an algo is trying to trade immediately on multiple venues, an understanding of its latency everywhere is certainly important, but its absolute speed does not necessarily matter. As long as the algo has moderately consistent latencies to each venue, it can time its orders to different venues to arrive at roughly the same time and grab everything before other market participants can react.
- Maintaining a participation rate — many algos try to track a specific percentage of volume within a given tolerance. After a chunk of stock trades away, the algo may have fallen behind and might want to immediately cross the spread to catch up to its target rate. This scenario is reactive, and if there are multiple brokers simultaneously running POV-tracking algos in the same stock, latency would come into play. If two algos are trading in the same direction, the faster one to react is likely better off, especially if catching up causes market impact. Conversely, if the simultaneous algos were going in opposite directions, perhaps the broker with worse latency wins out, as the faster broker would push the market into them, which is a little funny. Regardless, this scenario is not one where the broker algos are trying to quickly capture value sitting out in the market — they’re simply trying to provide a consistent customer experience. Further, an algo trying to keep up is probably better off not just blindly piling on after a flurry of activity anyway.
- Schedule optimization — benchmarked algos (VWAP, IS, Close, etc.) often attempt to choose an optimal schedule to guide when they slice orders into the market. The duration of these schedules is on the scale of minutes or hours, so whether it’s the initial schedule calculation or a dynamic adjustment to the schedule intraorder, a few extra milliseconds or even seconds doesn’t matter.
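The "taking out the quote" tactic above hinges on relative, not absolute, timing. A minimal sketch of the idea, using hypothetical one-way latencies from a broker to three venues, would stagger each order by the latency gap to the farthest venue so that everything arrives at roughly the same instant:

```python
# Hypothetical one-way latencies (in milliseconds) from the broker's
# gateway to each venue; real values would come from measurement.
latency_ms = {"NYSE": 0.45, "Nasdaq": 0.30, "BATS": 0.18}

def send_offsets(latency_ms: dict) -> dict:
    """Compute how long to hold back the order to each venue so that
    all orders land simultaneously: the order to the farthest venue
    goes out immediately, and nearer venues wait out the difference."""
    slowest = max(latency_ms.values())
    return {venue: slowest - lat for venue, lat in latency_ms.items()}

offsets = send_offsets(latency_ms)
for venue, off in sorted(offsets.items()):
    print(f"{venue}: send after {off:.2f} ms")
```

Note that nothing here requires being fast in absolute terms, only having moderately consistent, well-measured latencies to each venue, which is exactly the point of the bullet above.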
Genuinely incorporating HFT tactics into sell-side algos would certainly provide a benefit, but we think the best an agency broker can realistically do is this: utilize tools that don’t rely on speed and avoid situations where a customer’s order would likely be disadvantaged by faster participants.
While we believe brokers’ emphasis on speed is primarily just empty marketing, our position, on the other hand, does not make for a great story at all.
“Employing speed on the sell-side is a hopeless endeavor, so why bother?”
Hm, not great. It’ll definitely take some time to figure out how best to frame our approach.
In the meantime, please prove us wrong! We’d love to hear example scenarios where speed is worth the investment for an agency broker. Confidentiality and avoiding information leakage are paramount, but if such mechanisms exist, it should be possible to explain how an agency broker protects its clients with speed without revealing a telltale signature. Maybe there are incremental benefits that justify a marginal diffuse cost shared across the many customers of a large broker. This would be a barrier to entry for an upstart broker like us, but maybe not an insurmountable one. And if there are some compelling cases out there that we just haven’t thought of, that could absolutely change our plans. I guess we’ll find out.
When a customer-facing broker says things like, “we are co-located in all the major data centers, we use the fastest cross-connects and of course direct feeds, not to mention our lightning-fast direct dark fiber network,” this is probably all just a marketing ploy. That broker is simply not fast enough end-to-end to compete with the leading proprietary trading firms for the best fills. It’s not enough to ask a broker which fast technologies it employs — one needs to learn how specifically those technologies work to the client’s benefit, if at all. If the broker can genuinely compete in these sub-millisecond winner-take-all races, it can conclusively demonstrate so by diving into the data. If the broker is unable to prove a tangible benefit, there probably isn’t one there.
We believe we can build a competitive, if not superior, equity execution platform for a tiny fraction of the cost and overhead by leveraging our market structure intuition and exercising extreme diligence about where we deploy resources. We may wind up buying direct feeds, for example, because they do carry additional content not currently available on the SIP, such as depth of book. But we don’t intend to enter the pure latency arms race. We only want to take steps that are truly beneficial to the buy-side. Anything else is just waste.