Everyone has a Plan to have a Plan for the SIPs

Allison Bishop
Published in Proof Reading
Feb 28, 2020

Recently, the SEC issued a proposed order “Directing the Exchanges and the Financial Industry Regulatory Authority To Submit a New National Market System Plan Regarding Consolidated Equity Market Data.” Classic SEC with their catchy titles! The proposed order lays out in extensive detail how the US equities landscape has changed since the adoption of the National Market System Plan, and outlines changes that would be desired in a new plan. These changes mostly concern the governance structure of the SIPs, which are intended to be relatively affordable sources of crucial real-time information about trades and quotes that occur on exchanges. They used to function well as such, but especially in the last decade have begun to age less gracefully.

Commissioners have released statements with their individual thoughts on the proposed order, and several market participants have already submitted comment letters of their own. There is not a whole lot of precise agreement except:

  1. the current SIP(s) are kind of lame.
  2. we should probably think about doing something about that.

And we agree! Proof has now submitted the comment letter copied below to explain our thoughts on the SIPs:

February 27, 2020

Vanessa Countryman

Secretary

Securities and Exchange Commission

100 F Street, NE

Washington, DC 20549–0609

Re: Notice of Proposed Order Directing the Exchanges and the Financial Industry Regulatory Authority to Submit a New National Market System Plan Regarding Consolidated Equity Market Data, Release №34–87906; File №4–757 (January 8, 2020)

Dear Ms. Countryman:

Proof Services LLC (“Proof”) appreciates the opportunity to comment on the proposed order directing the submission of a new national market system plan. Proof is currently building an agency execution platform for institutional US equities trading and is seeking FINRA approval to launch as a broker-dealer in mid-2020. We applaud the commission for its thoughtful attention to the inadequacies of the current governance of the Securities Information Processors (SIPs), and for its goal of reforming the SIPs so that they can achieve their intended purpose in the current market environment.

Here is a quick summary of our thoughts:

· We agree that the governance power of large exchange families should be reduced.

· We hope that this may lead to useful SIP enhancements, like adding depth of book data.

· We fear that this may lead to wasteful SIP enhancements, like further reducing latency.

· We believe SIP fee structures should be radically restructured and modernized, and that baseline costs to data recipients can likely be lower.

Before presenting our views on the proposed order and our suggestions for additional reforms in detail, we first disclose our own view of the purpose of the SIPs and our evaluation of the current status of the SIPs relative to that purpose. This context drives our evaluation of the proposed order, and motivates our suggestions of additional regulatory reforms. It is also admittedly biased by our stake in this policy discussion as a small company intending to become an institutional broker-dealer, so we will acknowledge and highlight this bias up front (and periodically as relevant) so that it may be appropriately accounted for.

The purpose of the SIP(s)

We believe the purpose of the SIP should be to provide: 1. sufficient information content and real-time performance to enable retail investors and agency broker dealers to make effective trading decisions in real time, and 2. sufficient historical data to enable retail investors, agency broker dealers, and academic researchers to effectively analyze trading performance and observe basic mechanisms of trading behavior, at both macro and micro levels of market structure. Importantly, the SIP should accomplish these goals at minimal cost to consumers of SIP data, while imposing a minimal burden on those involved in the generation and processing of SIP data.

Unavoidably, there are tensions between the goals of providing sufficient information, providing fast delivery of data in real time, providing robust and reliable data delivery, minimizing costs to data consumers, and minimizing burdens on data contributors and processors. We believe these tradeoffs should be navigated by first deciding what data content and performance targets are “sufficient” for the intended uses of the SIPs, fixing those targets, and then aiming to control the costs to data consumers and the burdens on market participants through a regulatory regime that aligns incentives and forces of competition with meeting the targets, but without enshrining or rewarding unhelpful complexity.

We wish to highlight here that a SIP that cheaply provides real time data sufficient for the responsible operation of an institutional broker dealer is clearly what is good for Proof Trading as a business, and hence we are biased in our advocacy of this as a goal for the SIP(s). However, it seems to be a goal that the commission shares, as the detailed discussion in the proposed order expresses myriad concerns that the current SIPs are falling short in this regard, and this is one source of impetus for the proposed governance reforms. We also wish to highlight what we have not included as a desired goal of the SIP(s): sufficient data and performance for market making, proprietary trading, and arbitrage. Though many forms of these are healthy and necessary market activities, we do not believe they are intended use cases of SIP data. As we will further discuss below, the differing requirements for these use cases drive a much more expensive regime of proprietary market data products. If a single SIP data stream tried to displace the larger regime of proprietary data products, the burdens on its design would accumulate heavily across the disparate use cases, and the product could become prohibitively expensive and unwieldy.

The goals of a reformed SIP governance structure

We also declare up front what we perceive the goals of a reformed SIP governance structure to be:

  1. Give all major stakeholders an effective voice.
  2. Retain exchanges’ ability to act unilaterally when they all agree (important due to their unique regulatory burdens).
  3. Reduce conflicts of interest that may motivate the operating committee to work against the purposes of the SIP as stated above.
  4. Reduce the voting power of exchange families so that the business models of large exchange families do not unilaterally drive decisions.
  5. Foster an environment where reforms and innovations supporting the SIP purposes defined above will be proposed and adopted, while proposals that would be detrimental to those purposes will be defeated.

It’s quite clear how the proposed order addresses goal 2. Goals 1, 3, and 4 are (arguably) addressed to varying extents, though obviously the meat of the debate hides in the words “effective” and “reduce.” But goal 5 is perhaps the hardest to evaluate. Projecting the likely effects (if any) of the proposed reforms on goal 5 requires a highly nuanced understanding of the current state of the SIPs and the state of competition between various stakeholders in the marketplace.

The current state of SIPs: latency

As the commission thoroughly details in the proposed order, the market environment has evolved considerably since the establishment of the SIPs. Exchanges are now owned by shareholders, and have further consolidated into three large parent companies (with the exception of IEX). These three exchange families offer a wide range of proprietary data products, often containing more information than the SIPs and/or providing faster data delivery. New technologies like microwave have surpassed fiber optic cables as the lowest latency option for data transmission. In the current operation of the SIP data feeds, quotes and trades travel via fiber from their geographically dispersed sources to a central processing point to be consolidated, and are then retransmitted to geographically dispersed recipients. This is a significant source of latency that contributes to SIP data arriving hundreds of microseconds behind the fastest proprietary data feeds, which can be distributed straight from the source to recipients using the latest technology.
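
To make the geographic detour concrete, here is a back-of-envelope sketch of the propagation delays involved. The roughly 5 microseconds per kilometer figure is standard for light in fiber; the distances are illustrative guesses at New Jersey data center routes rather than measured fiber paths, and the sketch ignores the SIP's own processing time.

```python
# Back-of-envelope propagation delays for the consolidation detour. Light in
# fiber covers roughly 200 km per millisecond, i.e. about 5 microseconds per
# kilometer. The distances below are illustrative guesses, not measured routes,
# and the SIP's own processing time is ignored.

US_PER_KM = 5.0  # one-way propagation delay per kilometer of fiber

def fiber_latency_us(km: float) -> float:
    """One-way propagation delay over a fiber path of the given length."""
    return km * US_PER_KM

# Hypothetical layout: an exchange, a central consolidation point, a recipient.
exchange_to_processor_km = 50   # e.g. a Carteret-to-Mahwah style hop
processor_to_recipient_km = 55  # consolidation point back to a recipient
exchange_to_recipient_km = 30   # the direct route a proprietary feed can take

via_sip = fiber_latency_us(exchange_to_processor_km + processor_to_recipient_km)
direct = fiber_latency_us(exchange_to_recipient_km)

print(f"via consolidation point: {via_sip:.0f} us")   # 525 us
print(f"direct proprietary path: {direct:.0f} us")    # 150 us
print(f"detour cost before any processing: {via_sip - direct:.0f} us")  # 375 us
```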

In evaluating how this evolution has affected the suitability of the SIPs for their intended purposes, we must keep in mind some technological realities that have not changed. Speed and reliability are still fundamentally in tension in complex distributed computer systems. Technology that delivers the lowest latency will often suffer worse reliability. Procedures to ensure data integrity and risk checks in software add considerable overhead and slow down processing atop even the fastest underlying components. What this means is that the fastest possible SIP, one that matches (or even outperforms) the latency of all proprietary data feeds, would not necessarily be best suited to its intended purposes. The agency broker dealers who might rely on the SIP for real time trading decisions would have to pay fees high enough to subsidize the technological investment in this (hypothetical) fastest SIP, and yet they would not really be in a position to take full advantage of its speed.

Dynamics of trading at the millisecond and microsecond timescales exhibit probabilistic patterns. These patterns create “winner take all” contests that have positive expected winnings for those who are in a position to observe them and act with precision at these timescales. Presumably, a rational market maker or proprietary trader should decide whether to incur the combined expenses of technology, colocation, proprietary data feeds, etc. that it takes to be a contender in these contests based on the frequency and magnitude of these opportunities and the level of competition. For an agency broker, the cost-benefit analysis should be much simpler. It is highly unlikely that an agency broker, while trading in the market on behalf of a client, would ever win a noticeable fraction of speed contests. Crucial risk checks in software, and other design decisions made in the service of goals like robustness and reliability would exert too much drag to compete against the fastest actors in the market. Imagine a car racing event where one category of entrants was subject to stricter safety standards than everyone else. Those drivers would indeed be safer, but they would never be true contenders. They would be mostly ignored by the real racing dynasties.
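
A toy simulation can make the structural point. Every latency figure below is invented for illustration; the takeaway is only that a participant whose baseline latency sits meaningfully above the fast pack essentially never wins one of these contests, however much it invests at the margin.

```python
# A toy Monte Carlo model of a "winner take all" speed contest. All latency
# numbers are invented for illustration: a handful of colocated proprietary
# traders react in ~5 microseconds, while an agency broker running software
# risk checks reacts in ~150 microseconds.
import random

def broker_wins_race(n_fast: int = 5,
                     fast_mean_us: float = 5.0, fast_jitter_us: float = 1.0,
                     broker_mean_us: float = 150.0, broker_jitter_us: float = 20.0) -> bool:
    """Return True if the broker reacts before every fast participant."""
    fastest_prop = min(random.gauss(fast_mean_us, fast_jitter_us) for _ in range(n_fast))
    broker = random.gauss(broker_mean_us, broker_jitter_us)
    return broker < fastest_prop

TRIALS = 100_000
wins = sum(broker_wins_race() for _ in range(TRIALS))
print(f"broker wins {wins / TRIALS:.4%} of contests")  # ~0.0000%

# Even cutting the broker's latency in half changes nothing: it is still far
# outside the fast pack, so the marginal speed investment buys its clients nothing.
halved = sum(broker_wins_race(broker_mean_us=75.0) for _ in range(TRIALS))
print(f"with latency halved:  {halved / TRIALS:.4%}")
```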

As we discuss further in our blog (see [1]), we suspect that most of the investment that agency brokers make in the name of achieving “low latency” fails to benefit their clients. They may be drawing the wrong conclusion from the right data: the fact that speed-based contests are hurting their clients does not mean that increasing their own speed will help. It may be that the best strategy for agency brokers is to avoid speed races altogether. If you aren’t going to win, don’t pay the entry fee. Tools like well-designed smart order routers, the discretionary peg order type on IEX, and MELO on Nasdaq make it possible for agency brokers today to protect their clients from several forms of latency arbitrage without needing to win the speed race. To use such tools effectively, agency brokers need access to real-time data, as well as consistent and reasonably low latency connections, but being within, say, hundreds of microseconds of the faster participants is good enough. Though there is room for improvement, the current latencies of the SIPs are already within this range (as noted in the proposed order).

The current state of SIPs: content

Let’s now consider the content of the current SIPs. They provide top of book quote data, but exclude depth of book data. Top of book data is likely sufficient context for retail investors, whose individual trade volumes are likely to be small and hence priced similarly to the top of book prices they can follow on a SIP data feed. The case for agency brokers is less clear, since the overall volumes they trade far exceed the volume available at the top of book at a typical moment in time. How much could depth of book data improve trading outcomes for agency flow? This is a very hard question to answer without a side-by-side comparison of a broker algorithm that leverages depth of book data and one that does not. We are not aware of any broker making such a comparison publicly.

If the performance of the algorithm that leverages depth of book data is only marginally better than its top of book benchmark, it may cost clients more overall, as commissions might need to rise to cover the higher fees for proprietary depth of book data. Subscribing to real-time depth of book data from just the two largest exchange families, NYSE and Nasdaq, would cost a small company like Proof over $350,000 annually (taking a conservative sum of the likely assessed data fees), and this would be on top of the >$120,000 annual cost of subscribing to real-time SIP data.
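
To sketch how quickly monthly fee line items compound into an annual floor, consider the toy fee stack below. The line items and dollar amounts are placeholders chosen to roughly match the totals cited above, not the exchanges' actual published schedules; see [2] and [3] for real figures.

```python
# A toy fee stack showing how monthly line items compound into an annual cost
# floor for a small broker. Amounts are placeholders chosen to roughly match
# the totals cited above; consult the published schedules [2], [3] for real figures.

monthly_fees = {
    "NYSE depth of book (access + non-display)":   16_000,
    "Nasdaq depth of book (access + non-display)": 14_000,
    "SIP Tapes A/B/C (access + non-display)":      10_500,
}

for product, monthly in monthly_fees.items():
    print(f"{product}: ${monthly * 12:,}/year")
print(f"total: ${sum(monthly_fees.values()) * 12:,}/year")  # $486,000/year
```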

As noted in the proposed order, the original decision to exclude depth of book data from the SIPs was made under the assumption that competitive forces would control prices for proprietary depth of book data products. However, third party providers of depth of book data products face two significant obstacles: monopoly control of the data origin point, and redistribution fees. The source exchange of a depth of book data feed has monopoly pricing control over any direct access mechanisms to that data as it is created. Thus, any data provider whose clientele includes highly latency-sensitive proprietary traders will have no choice except to pay whatever fees the source exchange charges for the fastest access mechanism. This alone would not preclude the existence of a much more affordable product targeted at retail traders and agency brokers, since as discussed above, their latency requirements may be comparatively relaxed. But even a less performant real-time depth of book product would be subject to crippling fees owed to the originating exchanges.

A receiver of real-time NYSE depth of book data, for example, will owe NYSE a minimum non-display fee of $20,000 a month [2]. This applies even if the data is received from a third-party provider, which can be accomplished wholly independently of NYSE technology. Presumably the only additional cost that NYSE incurs is administrative, for the tracking and collecting of all the fees! The Nasdaq TotalView-ITCH FPGA feed similarly charges an “internal distributor” fee of $25,000 a month [3].

This is a pricing structure that ensures that the exchanges themselves can set a fairly high floor on the all-in cost of any real-time depth of book data product, even one that they do not produce themselves. This greatly limits the effectiveness of competitive forces as a mechanism of price control for real-time depth of book data products.

The current state of SIPs: pricing

In fact, we can see the same basic pricing structure at work in the current real-time SIP feeds. There is a significant floor on what a small agency broker like Proof will pay for a real-time SIP product, even if they subscribe through a third-party provider. Over $120,000 annually in fees will be due to the SIP processors, regardless of the fact that Proof’s consumption of SIP data through a third-party provider would not impose any cost on the SIP processors other than the administrative costs associated with tracking and collecting fees.

Presumably the purpose of this fee structure is to allocate SIP costs broadly and prevent extreme consolidation among direct consumers of the SIP feeds. If the SIP processors were only paid by direct consumers of the SIP data, there would be a strong incentive for direct consumers to consolidate. It would be difficult to dynamically adjust fees to extract fair value from a dwindling set of direct consumers, and we might end up in a world with one direct consumer serving all others as indirect consumers. This kind of monopoly on direct consumption would not be a very healthy state for the market. However, it’s important to remember another consequence of this fee structure as it is currently enshrined: it defines a floor on the total cost of any real-time SIP-based data product. This renders competition among third party data providers ineffective in reducing the cost of SIP consumption to be reasonably near the cost of SIP production and maintenance. Exchange families have seized the opportunity this affords, and offer some of their own proprietary data products at prices below this floor for SIP-based products. This was noted during the market data roundtable [4] as an example of how proprietary data products serve legitimate needs.

For evidence that the current cost of real-time SIP consumption is likely much higher than it needs to be to recoup the expenses of the SIP processor and SIP data contributors, we can compare the $1.8 million estimated annual cost of IEX’s market data infrastructure and the $800,000 estimated annual cost to IEX of offering physical connectivity, as detailed in [5], with the whopping $292 million distributed to exchanges from the SIP(s) for Tapes A, B, and C combined in just the first 9 months of 2019 [6,7]. We suspect the simplest explanation for this gaping difference of magnitude is the correct one: the fees for real-time SIP data are needlessly high!
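
To put rough numbers on that comparison, the sketch below scales IEX's self-disclosed costs by the number of equities exchanges as a (generous) yardstick for system-wide production cost, then annualizes the disclosed SIP distributions. The exchange count and the uniform scaling are our own simplifying assumptions.

```python
# Crude magnitude comparison: scale IEX's self-disclosed annual costs [5] by the
# number of equities exchanges (roughly 13 in 2019) as a generous yardstick for
# system-wide production cost, and annualize the disclosed SIP distributions [6,7].

iex_market_data_cost = 1_800_000   # IEX's estimated annual market data infrastructure cost
iex_connectivity_cost = 800_000    # IEX's estimated annual physical connectivity cost
n_exchanges = 13                   # approximate count of US equities exchanges in 2019

production_yardstick = (iex_market_data_cost + iex_connectivity_cost) * n_exchanges

sip_distributions_9mo = 292_000_000          # Tapes A, B, C combined, Jan-Sep 2019
sip_distributions_annualized = sip_distributions_9mo * 12 / 9

print(f"scaled production-cost yardstick: ${production_yardstick:,.0f}")          # $33,800,000
print(f"annualized SIP distributions:     ${sip_distributions_annualized:,.0f}")  # ~$389,333,333
print(f"ratio: {sip_distributions_annualized / production_yardstick:.0f}x")       # ~12x
```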

The total magnitude of cost is not the only problematic feature of the SIPs’ fee structures. The myriad units of cost enshrined in SIP fees present significant challenges to track and assess, especially in light of evolving technologies. In a world where computation can be cheaply distributed over cloud servers and human eyeballs become increasingly irrelevant, what exactly counts as a “user” anyway? This is a general challenge that transcends the SIPs and applies to proprietary data products as well. If a subscriber like Proof has one account on AWS that can be accessed by three employees, and a data product is ultimately fed into that environment for use, what should the fee be proportional to? One company? One account? Three human users? The number of individual devices the employees use to access the cloud? The average number of cloud servers actively processing the data at any point in time (this would be a nightmare to track)? The number of cloud servers that ever touch the data? (This one would be a disaster and would effectively bar anyone from using equities data products in the cloud.) Are these kinds of decisions being made fairly and consistently across SIP data consumers? How can we know?
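
To illustrate how much hinges on that definition, the sketch below applies one hypothetical per-unit charge to the same deployment under different unit definitions. Both the deployment and the charge are invented for illustration.

```python
# A sketch of how the choice of billing "unit" changes the assessed fee for one
# and the same cloud deployment. The deployment and the per-unit fee are hypothetical.

PER_UNIT_MONTHLY_FEE = 1_000  # hypothetical per-"user" charge

deployment = {
    "companies": 1,
    "cloud accounts": 1,
    "human users": 3,
    "devices used to reach the cloud": 6,
    "cloud servers touching the data, peak": 40,
    "cloud servers touching the data, ever": 400,
}

for unit, count in deployment.items():
    print(f"counting {unit:<40} -> ${count * PER_UNIT_MONTHLY_FEE:>9,}/month")
```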

Ironically, a fee structure that was likely intended to prevent extreme consolidation and encourage competition among third-party data providers may be holding back innovation by enforcing an artificially high cost floor as well as a problematic unit paradigm. This effect is exacerbated for depth of book products, where the exchanges have managed to impose a much higher cost floor and lock in growing revenue streams for themselves that are unassailable by any new advances in technology. These issues would need to be addressed by the fee structure of a future SIP if we want the cost to data recipients to more closely reflect the actual costs of producing and processing SIP data. Without drastic changes to the underlying fees due to the processor, competition among third party data providers alone cannot accomplish this.

Evaluating the proposed governance reforms

And finally we are equipped to discuss the heart of the matter: would a new governance structure like the one outlined in the proposed order be more likely to generate and adopt helpful changes and/or more likely to resist unhelpful ones? To evaluate this, let’s imagine what might happen for some particular issues that the operating committee may discuss in the near term, under the current rules of governance vs. under new rules.

We first consider the case of heavy investment in further latency improvements for the SIP(s). As we have detailed above, we believe such investment at this stage would be unlikely to improve outcomes for those using the SIP(s) for real-time trading decisions, but would likely increase costs for everyone. Hence we, as a small aspiring broker, would prefer not to see such investments made. One might presume that the current governance structure of the SIP(s) resists such investments because they are not in the interest of large exchange families, who are also in the business of selling proprietary data products to latency-sensitive traders. However, this reasoning does not hold up to scrutiny, as the large exchange families well know that a merely faster SIP does not significantly threaten proprietary data products. Those employing speed-based strategies are playing a winner-take-all game, and still could not tolerate the difference between a faster SIP feed and the fastest proprietary feed. Those wanting depth of book data, regardless of speed, would still need to purchase proprietary data products. The lack of further investment in reducing the latency of the SIPs may be more a result of inertia than a clear consequence of the concentration of voting power in large exchange families.

In contrast, such speed investments for the SIP(s) might be championed by large agency brokers, as they fit into the narrative that “speed is important and hence faster is better.” Large brokers who have already positioned themselves to their clients as being “sophisticated” and “able to compete” with fast proprietary traders benefit from this questionable reasoning. Since they are never forced to explain specifically how they use their large technology investments and teams to serve the needs of their end clients, large brokers can rely on these sunk costs as a bulwark against small competitors. It is clear that this is working, as at least some buy-side firms insist their brokers invest in the fastest data feeds [4].

Hence, to the extent that the governance reforms contemplated in the proposed order shift the balance of voting power away from large exchange families and to other stakeholders like large brokers, they may result in wasteful technology investments.

Next let’s consider the case of adding depth of book data to the SIP(s). If this were accomplished at a reasonable latency and made available at a cost similar to today’s top of book SIP data (or at least considerably lower than proprietary depth of book products), it could enable agency brokers to achieve better outcomes for their clients without incurring prohibitive costs. This would likely create new consumers of depth of book data, as well as cannibalize some of the customer base of proprietary depth of book data products. As a result, large exchange families, who would also bear much of the implementation burden of reporting depth of book data to the SIP(s), would likely resist such a change. A large broker might support the change in the hopes of lowering its own depth of book data costs, though it may also resist it if it plans to continue purchasing the proprietary depth of book feeds. But in any case, a new governance structure for the SIP operating committee that concentrates less power in the large exchange families would seem to make enhancements like this more plausible.

Finally, let’s consider the case of a new fee structure, containing significantly lower overall fees as well as clearer definitions of units that better fit modern computing technologies. SIP processors as well as exchanges who benefit from SIP revenue would likely oppose such a fee structure. Other market participants for whom SIP fees are a significant cost would be more likely to support it, although there may be some concern that lower fees would mean a worse product. (As detailed above, we believe fees could be significantly reduced without necessitating a meaningful drop in SIP feed performance.) Large brokers, for whom SIP fees may be a less significant cost compared to other costs, may be disinclined to support efforts to lower SIP fees, as this lowers the barrier to entry for potential competitors.

Examples like these lead us to believe that reducing the concentration of power in large exchange families makes SIP enhancements more likely. However, it may not on its own result in a governance structure that sufficiently controls costs. We encourage the commission to carefully regulate the fee structure of any new SIP plan. Fees due to the SIP processor should be heavily scrutinized and radically restructured to allow healthy competition among third party data providers that can benefit from economies of scale and cloud technology.

We also encourage the commission to further consider the process by which changes to the SIP(s) can be proposed and debated. Surface arguments like “faster SIPs are needed to keep up with the current speed of trading” should not make it through a rigorous vetting process unexamined. The process should address deeper questions like: who would benefit from this proposed change? In what specific circumstances and by what specific mechanisms? Who would be harmed? Again, in what circumstances and by what mechanisms? What is the current state of competition among relevant stakeholders, and what evidence is there to believe that the proposed change will produce a more desirable state? We find it disappointing that current disclosures and debates about the SIPs and market data in general tend to focus on features and ignore mechanisms. We hear more about speed, availability, reliability, cost, and revenue, and less about how speed is used by different market participants, how availability and reliability represent a tradeoff with speed, how costs are incurred, or how revenue is subject to competition.

In closing, we appreciate the immense amount of expertise and effort that went into crafting the proposed order, and we applaud the commission for its thoughtful grappling with these difficult and important issues. We are hopeful that a future SIP will better serve its intended purposes, and we are grateful to have the opportunity to participate in the discussion of what that may look like.

Sincerely,

Allison Bishop

President of Proof Trading

References:

[1] “Does low latency matter on the sell side?” by Daniel Aisen. https://medium.com/prooftrading/does-low-latency-matter-on-the-sell-side-ff5437820b4c

[2] NYSE PDP Market Data Pricing, dated Feb. 3, 2020. https://www.nyse.com/publicdocs/nyse/data/NYSE_Market_Data_Pricing.pdf

[3] NasdaqTrader.com, Price List for US Equities. http://www.nasdaqtrader.com/Trader.aspx?id=DPUSdata

[4] SEC Roundtable on Market Data Products, Market Access Services, and their Associated Fees, Oct. 25, 2018. https://www.sec.gov/spotlight/equity-market-structure-roundtables/roundtable-market-data-market-access-102518-transcript.pdf

[5] IEX, The Cost of Exchange Services: Disclosing the Cost of Offering Market Data and Connectivity as a National Securities Exchange. https://iextrading.com/docs/The%20Cost%20of%20Exchange%20Services.pdf

[6] CTA Quarterly Revenue Disclosure, Q3 2019. https://www.ctaplan.com/publicdocs/Q3_2019_CTA_Quarterly_Revenue_Disclosure.pdf

[7] UTP Quarterly Revenue Disclosure, Q3 2019. http://www.utpplan.com/DOC/UPT_Revenue_Disclosure_Q32019.pdf
