Why we need a speed limit for financial markets

Mark Buchanan · Published in Bull Market · Jan 25, 2015

New research suggests that electronic markets are already running far too fast for their own good

As almost everyone now knows, computer algorithms account for the majority of trading in today’s financial markets. The trading is now so fast, and the competition so tight, that shortening a wire linking a trader to an exchange by a few feet, or sending information from one point to another by radio waves rather than through fiber-optic cables, makes a big difference. Cables, after all, can’t always follow straight lines, and light within a fiber travels at only about two-thirds of its speed in air. Hundreds of millions of dollars have been spent laying new underground cables and developing new communications links based on microwaves and, next, lasers. Industry leaders suggest that drones may soon be hovering over the mid-Atlantic, 24 hours a day, to act as microwave repeaters.

It’s all very exciting, and profitable for those doing it. But does it make any sense for markets? That’s an entirely different question. And the answer may be a resounding NO. I have a short article (forthcoming, this evening) in Bloomberg View discussing new research which suggests that the speed of electronic trading in today’s markets is already too fast, and that market function could be improved by a speed limit. How such a limit might be put into effect is a puzzle in itself, but there have been several clear suggestions in the finance literature over the past couple of years, mainly from physicist Doyne Farmer and economist Spyros Skouras and, independently, from economists Eric Budish, Peter Cramton and John Shim. Both groups argued that the arms race to ever-higher speeds might usefully be cut off by changing the way markets work: moving from continuous trading to trading over fixed time intervals, with all orders to buy and sell within an interval cleared at a single price. That is, in the terminology, by running exchanges as “batch auctions.”
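
To make the batch-auction idea concrete, here is a minimal sketch of a single uniform-price clearing in Python. Everything in it (the Order format, the candidate-price search, the naive tie-breaking) is my own illustration, not the specific mechanism either group of authors proposes:

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price
    qty: int      # shares

def clearing_price(orders):
    """Toy uniform-price batch auction: find the single price that
    maximizes executable volume; all crossing orders trade there."""
    buys = [o for o in orders if o.side == "buy"]
    sells = [o for o in orders if o.side == "sell"]
    best_price, best_volume = None, 0
    for p in sorted({o.price for o in orders}):
        demand = sum(o.qty for o in buys if o.price >= p)   # buyers willing to pay p
        supply = sum(o.qty for o in sells if o.price <= p)  # sellers willing to sell at p
        volume = min(demand, supply)
        if volume > best_volume:  # ties broken at the lowest price, a naive choice
            best_price, best_volume = p, volume
    return best_price, best_volume

# Orders accumulate over one interval, then clear together at one price:
batch = [Order("buy", 10.02, 300), Order("buy", 10.01, 200),
         Order("sell", 10.00, 250), Order("sell", 10.03, 400)]
print(clearing_price(batch))  # -> (10.0, 250)
```

In a real frequent-batch design the auction would repeat every interval, with unexecuted orders handled according to the exchange’s rules; the point is simply that one price clears all crossing orders at once, removing the advantage of being microseconds faster than everyone else.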

The new research comes from Daniel Fricke of Oxford University and Austin Gerig of the SEC, and offers an analysis of how a market of this kind would work, with a specific focus on how the speed of trading (or, equivalently, the length of the auction interval) influences how well the market serves investors. I give some detail of the analysis below. Essentially, Fricke and Gerig develop a simple estimate of the risks that informed investors face in making trades, and look at how this risk depends on the speed of trading. These risks turn out to be minimized at an intermediate speed, which they estimate, using real data for U.S. securities, to correspond to auction intervals of roughly 0.2 to 0.9 seconds. Obviously, that’s a lot slower than today’s markets.

Joseph Stiglitz, among many others, has previously argued that most HFT activity — not to mention most financial activity of any kind — has little social value. Felix Salmon gave a great discussion of these ideas a while back. This new research makes the case in a specific way in the context of HFT. The message: our markets would serve real economic functions much better if they ran quite a bit slower.

So, some details. What Fricke and Gerig do is develop a measure of the risks investors face in a market running on batch execution. The idea is that orders to buy or sell accumulate over a time τ before they’re all put together and cleared at a single price that matches supply and demand. With this setup, two sources of random price fluctuation combine to create the “liquidity risk” facing an investor, defined as the (mean square) difference between the true, fundamental value of a security when an investor comes to the market to buy or sell it, and the actual realized price that investor gets in making the trade. The first source of fluctuation, call it Risk A, is the volatility of the price of the security itself, as that price naturally fluctuates in the marketplace, both above and below the security’s fundamental value. The second source, call it Risk B, comes from the process of batch execution itself: the auction calculates a clearing price from the orders arriving within a fixed period of time, and because it relies on a small sample of orders to estimate the fundamental market value, that estimate is not perfectly accurate.

These two risks contribute additively to the overall liquidity risk an investor faces. Hence, in the simplest model in the new study, the expression Fricke and Gerig develop for the full liquidity risk has two terms, corresponding to the A and B type risks I just mentioned. In slightly simplified notation, it is:

R(τ) = σ²τ + ψ²/(ωτ)

In this equation, σ is the empirical volatility of the security in question, ω is the “intensity” of its trading (linked to how frequently it trades) and ψ is the volatility of the actual fundamental value of the security. The important point here is that the two contributions work differently with the time interval τ. The first term is proportional to τ, so faster trading (a smaller τ) makes it smaller, while the second term is proportional to 1/τ and grows as τ shrinks. Roughly speaking, faster trading reduces the first term because an investor’s trades execute more quickly, giving the market less time to move unpredictably before the trade gets made. But faster trading at the same time increases the second term, because batch auctions with fewer orders entered have clearing prices that fluctuate more strongly about the fundamental market price (simply the mathematics of small samples).
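
Here is a small numerical sketch of that trade-off, using the simplified two-term expression above. The parameter values are invented purely for illustration; they are not Fricke and Gerig’s estimates:

```python
# Liquidity risk R(tau) = sigma^2 * tau + psi^2 / (omega * tau).
# All parameter values below are invented for illustration only.
sigma = 1.5e-4   # empirical price volatility, per sqrt(second)
psi   = 1.0e-4   # dispersion of the order-based estimate of fundamental value
omega = 5.0      # trading intensity: orders per second

def liquidity_risk(tau):
    risk_a = sigma**2 * tau          # Risk A: market moves while you wait
    risk_b = psi**2 / (omega * tau)  # Risk B: small-sample clearing error
    return risk_a + risk_b

for tau in [0.001, 0.01, 0.1, 0.3, 1.0, 10.0]:
    print(f"tau = {tau:6.3f} s -> risk = {liquidity_risk(tau):.2e}")
# The risk is large at both extremes and smallest near tau ~ 0.3 s here.
```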

If you want to find the τ that minimizes this sum, you can do some high-school calculus. Setting the derivative dR/dτ = σ² − ψ²/(ωτ²) to zero shows that the sum takes its minimum value where the two terms are equal to one another, which leads to the following expression for the best value of τ:

τ* = ψ/(σ√ω)

This is the equation the researchers use to estimate the optimal speed at which a batch execution market should run. The optimum differs from stock to stock, because it depends on the volatility σ, the trading intensity ω and the fundamental volatility ψ. Using data from U.S. markets to estimate the various parameters, Fricke and Gerig find that this optimal time for U.S. stocks ranges from about 0.2 seconds to 0.9 seconds. (My discussion here only covers their simplest model. They also consider how things change if the market contains a market maker, or liquidity provider, and if there are many stocks, so that a security may be more or less correlated with the movements of the entire market. Mathematically, the estimate of the optimal speed gets more complicated, but the values do not change very much.)
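
Plugging the same invented parameters into the closed-form expression gives a quick sanity check (again, these are illustrative numbers, not the estimates in the paper):

```python
import math

# tau* = psi / (sigma * sqrt(omega)), from setting the two risk terms equal.
sigma, psi, omega = 1.5e-4, 1.0e-4, 5.0   # same invented values as above
tau_star = psi / (sigma * math.sqrt(omega))
print(f"optimal batch interval: {tau_star:.2f} seconds")  # ~0.30 s
```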

Now, it’s important that I point out some caveats. This is just a model of a batch-execution market. Real markets don’t currently work this way. As the researchers pointed out to me by email, real markets effectively carry supply and demand forward between periods, and so may well be optimal at somewhat faster speeds. For this reason, among others, they suggest that the optimal speed for a real market may well differ from that of a batch-execution market by a factor of ten. Which means, perhaps, that the optimal delay in trading might be as short as, say, 0.01 seconds, or 1/100th of a second.

But even that is a couple of orders of magnitude slower than today’s markets operate. So I think this is pretty good evidence that things probably are, at the moment, working too fast, and that markets aren’t actually serving investors as well as they might. Of course, this isn’t the end of the story, and lots more research ought to be done on this.

I find it encouraging, however, that researchers connected to the SEC are working on these kinds of models, and developing this kind of insight. I’m all for technology, but I do think we should try to make sure that it helps us, rather than hurts us. Blindly racing forward just because we can doesn’t make any sense.


Mark Buchanan is a physicist and author, a former editor at Nature and New Scientist, and a columnist for Bloomberg View and Nature Physics. His latest book is Forecast (Bloomsbury Press).