Trade the news, but make your FOREX research! Part 2 — Analysis and Strategy

Véber István
11 min read · Mar 30, 2019


Trading the news in FOREX during the first milliseconds or seconds requires a very different strategy and analysis than trading on longer time frames. The most popular indicators, like moving average magic, Bollinger Bands, Fibonacci retracement or the different oscillator and momentum indicators, won't help much during this very volatile period. In this article, I will show you my strategy for trading the news, my data analysis, and my testing process.

This article is the second part of the series. In the first article, I wrote about the trading algorithm and the idea behind the strategy. Part 1: https://medium.com/@istvan.veber/forex-news-trader-785ad0a1394c. The third part is about EDA, model fitting, and selecting the best parameters: https://medium.com/@istvan.veber/trade-the-news-but-make-your-forex-research-part-3-eda-random-forest-and-parameter-selection-456271c7febd

FXStreet Economic Calendar

The data I used for the analysis and testing comes from different sources. The main source is Dukascopy https://www.dukascopy.com/swiss/english/marketwatch/historical/, which provides the tick data of the instruments: I analyzed currency pairs, some indices, and gold and silver prices. The data on the economic events comes from https://www.fxstreet.com/economic-calendar and https://www.forexfactory.com/calendar.php. All the downloaded data was uploaded to a PostgreSQL database, which in the end was larger than 70 TB. Nowadays this isn't large, but considering my computing power, it is huge. The analysis code is written in Excel/VBA, while the database management and the filtering of the raw data for the analysis are written in Java.
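
Since the raw ticks sit in a PostgreSQL database and the filtering is done in Java, a minimal JDBC sketch of that step could look like the following. The table and column names (ticks, instrument, ts, bid, ask) and the window sizes are my assumptions for illustration, not the actual schema.

```java
// Sketch only: pull the ticks of one instrument around a single news event.
// Schema (table "ticks" with instrument, ts, bid, ask) is assumed, not the author's.
import java.sql.*;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class TickLoader {
    public record Tick(Instant time, double bid, double ask) {}

    public static List<Tick> loadAroundEvent(Connection conn, String instrument,
                                             Instant eventTime, int secondsBefore,
                                             int secondsAfter) throws SQLException {
        String sql = "SELECT ts, bid, ask FROM ticks " +
                     "WHERE instrument = ? AND ts BETWEEN ? AND ? ORDER BY ts";
        List<Tick> ticks = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, instrument);
            ps.setTimestamp(2, Timestamp.from(eventTime.minusSeconds(secondsBefore)));
            ps.setTimestamp(3, Timestamp.from(eventTime.plusSeconds(secondsAfter)));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ticks.add(new Tick(rs.getTimestamp("ts").toInstant(),
                                       rs.getDouble("bid"), rs.getDouble("ask")));
                }
            }
        }
        return ticks;
    }
}
```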

Originally I tried to examine the effect of the economic news numbers on the price. The main focus of these examinations was the relationship between three values: the number from the prior release, the consensus or forecast before the release, and the actual number. This analysis gave useful insight into the effect of news on the price, but I couldn't use it in my strategy directly. As a retail trader who has to trade through a broker, I have no chance to evaluate the numbers and react fast enough. So I chose another approach, where I don't care about the numbers of the economic news, only about the direction and amplitude of the price jump (and some other things).

Still, I think it is worth sharing some of my analyses of the relationship between the price movement and the actual news parameters. The results of these analyses helped me understand how the price behaves when the news comes out.

The first step of the analysis can be seen in the Excel table below. This table is generated for hundreds of events; the one below is for the US Unemployment Claims. The first colored columns contain the economic numbers and some very simple calculations; for example, “Act-For” is the difference between the actual number on release and the forecast. The next columns show how the different instruments behaved after the event. They show the extremes of the ask price relative to the price at the news zero-time, which is the official release time. The low and high extremes are calculated for the first minute, the first 5 minutes, the first 15 minutes and the first hour, but each window excludes the data already covered by the smaller time frames. The actual Excel table is much wider: in the case of US news the time frames are calculated for AUD/USD, EUR/USD, GBP/USD, USD/CAD, USD/CHF, USD/JPY, XAG/USD, XAU/USD, EUR/CHF, EUR/GBP, EUR/JPY, USA30i, USA500. For news from other parts of the world, the list is shorter.

Economic news numbers and price movement at different time frames. Click for larger image.
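
As an illustration of how the per-window columns in the table above could be computed from the tick data, here is a minimal Java sketch. It assumes the tick times are already expressed in milliseconds relative to the official release, and that each larger window excludes the ticks of the smaller ones, as described above; the method and variable names are mine, not from the author's code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WindowExtremes {
    // Window end points in seconds: first minute, 5, 15 and 60 minutes.
    private static final int[] WINDOW_ENDS_SEC = {60, 300, 900, 3600};

    /**
     * @param relTimesMs tick times in milliseconds relative to zero-time, sorted ascending
     * @param asks       ask prices of those ticks
     * @param zeroAsk    ask price at (or just before) the official release
     * @param pip        pip size of the instrument (e.g. 0.01 for USD/JPY)
     * @return window label -> {low, high} in pips; each window excludes the smaller ones
     */
    public static Map<String, double[]> extremes(long[] relTimesMs, double[] asks,
                                                 double zeroAsk, double pip) {
        Map<String, double[]> result = new LinkedHashMap<>();
        int windowStartSec = 0;
        for (int endSec : WINDOW_ENDS_SEC) {
            double low = 0, high = 0;   // extremes relative to the zero-time price
            for (int i = 0; i < relTimesMs.length; i++) {
                long t = relTimesMs[i];
                if (t < windowStartSec * 1000L || t >= endSec * 1000L) continue;
                double diffPips = (asks[i] - zeroAsk) / pip;
                low = Math.min(low, diffPips);
                high = Math.max(high, diffPips);
            }
            result.put(windowStartSec + "s-" + endSec + "s", new double[]{low, high});
            windowStartSec = endSec;
        }
        return result;
    }
}
```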

From the above data, multiple tables and charts are generated. The scatterplots below show the relationship between the news-related economic indicators (or some simple derived parameters) and the price swing in different time frames. The orange points are the highs and the blue points are the lows. On the right, the histograms show the historical distribution of lows and highs for negative surprises (where the difference between the actual number and the forecast is negative) and for positive surprises.

Charts for US Unemployment Claims, USD/JPY pair.

The boxplots below also show the effects of negative and positive surprises, but here it is easier to follow how the highest and lowest swings change from one time frame to the next.

Boxplots for US Unemployment Claims, USD/JPY pair.

To see how the price behaves during the news, my program generates tables of slippage and spread statistics, and charts of the price action. The charts below show the same news at different resolutions: the first chart shows the price in the close vicinity of the news, the second chart shows the seconds before the news and the first minute after it. The horizontal axis is the time in seconds relative to the due time of the news. The small columns show, for every tick, the spread (green), the ask jump (blue), the bid jump (orange) and the full jump (grey), which is calculated from the jumps and the spread. Examining these charts for different events and instruments helps to recognize some common patterns, opportunities, and killer moves (which almost certainly result in a loss even if the price jump is large).

Price action, US Unemployment Claims, USD/JPY pair.
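
The per-tick columns on these charts can be reproduced with a few lines of Java. Note that the "full jump" formula below (ask jump plus the new spread) is only my guess at what the grey column combines; the author's exact definition may differ.

```java
// Per-tick statistics similar to the columns on the price-action charts.
public class TickJumps {
    /**
     * @param bids, asks tick prices around the event, in chronological order
     * @param pip        pip size of the instrument
     */
    public static void printJumpColumns(double[] bids, double[] asks, double pip) {
        System.out.println("tick  spread  askJump  bidJump  fullJump  (pips)");
        for (int i = 1; i < asks.length; i++) {
            double spread   = (asks[i] - bids[i]) / pip;
            double askJump  = (asks[i] - asks[i - 1]) / pip;
            double bidJump  = (bids[i] - bids[i - 1]) / pip;
            double fullJump = askJump + spread;   // assumption, see the note above
            System.out.printf("%4d  %6.1f  %7.1f  %7.1f  %8.1f%n",
                              i, spread, askJump, bidJump, fullJump);
        }
    }
}
```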

As I mentioned earlier, with my resources I can't evaluate the news numbers before the move. So I did another type of analysis, where I examined only the price movement, without considering whether the surprise was positive or negative. My most important chart type, which I use for every test setup and look at before every trade, can be seen below. It shows how far the price moves in the first minute after the event relative to the price at zero-time (the official news release). I use it to determine the parameters for the test runs, but a similar chart can be helpful for manual news traders as well. The second and third sections show the historical US Unemployment Claims USD/JPY movements split by whether the larger movement was on the long side or on the short side. For this event, the two distributions are very similar, so for simplicity I will speak only about the first section, which shows all the news events together. All boxplots represent absolute numbers.

For every news event, two numbers are calculated for the first minute: the largest distance in pips on the long side and the largest distance in pips on the short side. The larger of these two numbers is represented in the orange boxplot and the smaller in the first blue boxplot. The orange boxplot shows how far the historical price got from the zero-time price in the first minute. This helps to determine the take-profit levels and how far from the zero-time price I want to open my pending orders. If this column is too close to zero, I don't trade the news, because the historical volatility during this news wasn't high enough to expect profit from the price jump. The blue column shows the distribution of the movement in the opposite (shorter) direction. This helps to determine the original stop loss. Obviously, the best news has an orange column where most of the points are far from zero and a blue column where most of the points are very close to zero.

An important feature of this chart I always look for is the quartile gap, the gap between the blue and the orange distribution. If the quartile gap is small or doesn't exist, it means that during this news the price likes to swing towards both sides, and the news is very unsafe to trade. But if the quartile gap is large, we can assume that the price will make a large movement in only one direction, and we can set a small stop loss for the trade. Moreover, if we place our pending order inside the quartile gap, there is a good chance that if the price moves this far, it will move even further, and a relatively small chance that it will turn back.

Directional moves in the first minute after the news, US Unemployment Claims, USD/JPY pair.
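
Below is a rough sketch of the numbers behind these boxplots: for every historical event the largest first-minute move is taken in each direction, the bigger one goes into the orange series and the smaller one into the blue series, and the quartile gap is measured between the two distributions. The concrete quartile-gap formula (first quartile of the larger moves minus third quartile of the smaller moves) is my reading of the chart, not a formula from the article.

```java
import java.util.Arrays;

public class FirstMinuteMoves {
    /** Largest move in pips on the long and on the short side within the first minute. */
    public static double[] longShortExtremes(long[] relTimesMs, double[] asks,
                                             double zeroAsk, double pip) {
        double up = 0, down = 0;
        for (int i = 0; i < relTimesMs.length; i++) {
            if (relTimesMs[i] < 0 || relTimesMs[i] >= 60_000) continue;
            double d = (asks[i] - zeroAsk) / pip;
            up = Math.max(up, d);
            down = Math.max(down, -d);
        }
        return new double[]{up, down};   // both as positive (absolute) numbers
    }

    /** Simple nearest-rank quantile, good enough for a sketch. */
    public static double quantile(double[] values, double q) {
        double[] v = values.clone();
        Arrays.sort(v);
        return v[(int) Math.round(q * (v.length - 1))];
    }

    /** larger[i]/smaller[i]: the bigger and the smaller of the two extremes for event i. */
    public static double quartileGap(double[] larger, double[] smaller) {
        return quantile(larger, 0.25) - quantile(smaller, 0.75);
    }
}
```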

This can seem promising so far, but the above chart doesn't show anything about the spread and slippage. The problem is that the largest part of the price jump can happen from one tick to the next. In this case, it doesn't matter that our pending order was closer to the zero-time price: we will be filled at the top of the jump, and it is possible that the very next tick is already the correction. If this happens, all of our positions will suffer. This is a real killer pattern, and only one thing can defend against it: not trading.
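
One way to quantify this killer pattern on historical data, as a hedged example rather than the author's actual filter, is to measure how much of the first-minute range is covered by the largest single tick-to-tick jump; if that share is typically high for an event, the event is better left untraded. The 0.5 threshold below is an arbitrary example value.

```java
public class KillerPatternCheck {
    /**
     * @param asks ask prices of the ticks in the first minute after the release
     * @return fraction of the first-minute ask range covered by the largest single-tick jump
     */
    public static double singleTickJumpShare(double[] asks) {
        double maxTickJump = 0, low = asks[0], high = asks[0];
        for (int i = 1; i < asks.length; i++) {
            maxTickJump = Math.max(maxTickJump, Math.abs(asks[i] - asks[i - 1]));
            low = Math.min(low, asks[i]);
            high = Math.max(high, asks[i]);
        }
        double range = high - low;
        return range == 0 ? 0 : maxTickJump / range;
    }

    public static boolean looksLikeKillerPattern(double[] asks) {
        return singleTickJumpShare(asks) > 0.5;   // example threshold, not the author's
    }
}
```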

But in the long term, it doesn't matter how many losing trades we have if we have enough winners. This strategy won't win every trade. For the same event, I open multiple pending orders knowing that some of them will lose, some of them won't be filled at all, and some of them will make a profit. The goal is to find a distribution of these parameter scenarios which together result in a profit on average. To see how our trades could perform, we have to test on historical data. Of course, the fact that something worked yesterday doesn't mean that it will work tomorrow. But if we can find the events and parameter scenarios where the trades were in profit most of the time, we can assume a good chance of profit trading this news in the future. We should avoid all the other events.

Test runs can tell which parameter scenarios worked well in the past, but to prevent overfitting and decrease the risk, I don't choose only the single best-performing scenario. Instead, I use a bunch of scenarios (an ensemble of scenarios) at the same time. I try to choose well-performing scenarios which are far enough from each other; for example, one pending order is closer to the zero-time price, another is further away. The best events to trade are the ones where very different parameter combinations yielded profit during the historical test. We can imagine the parameters as a multidimensional space, and in the case of good news, a relatively large part of this space performs well on historical tests. If the news generates profit on the historical test only in a very small part of this space, I don't trade that news, no matter how well those few scenarios performed on historical data.
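
A possible way to turn this ensemble idea into code, sketched under my own assumptions about the data structures, is to rank the scenarios by their historical test profit and greedily keep only those that are far enough from the already selected ones in normalized parameter space. The distance metric and the threshold are illustrative choices, not the author's.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ScenarioEnsemble {
    public record Scenario(double[] normalizedParams, double testProfit) {}

    /** Greedily pick up to maxCount profitable scenarios that are mutually far apart. */
    public static List<Scenario> select(List<Scenario> all, int maxCount, double minDistance) {
        List<Scenario> sorted = new ArrayList<>(all);
        sorted.sort(Comparator.comparingDouble(Scenario::testProfit).reversed());
        List<Scenario> chosen = new ArrayList<>();
        for (Scenario s : sorted) {
            boolean farEnough = chosen.stream()
                    .allMatch(c -> distance(c.normalizedParams(), s.normalizedParams()) >= minDistance);
            if (farEnough) chosen.add(s);
            if (chosen.size() == maxCount) break;
        }
        return chosen;
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }
}
```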

I hit a wall with the approach above: I have limited computational power, and with too many parameters I can easily end up with day-long test runs. To overcome this obstacle, I first change only the most important parameters and run the test. Then I run another test where I change the less important parameters, but only on the best-performing part of the parameter combinations from the first test.
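
A compressed sketch of this two-stage sweep follows. The parameter names (entry distance, take profit, stop loss), the split between "important" and "less important" parameters, and the backtest callback are placeholders for illustration, not the author's actual setup.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class TwoStageSweep {
    public record Scenario(double entryPips, double takeProfitPips, double stopLossPips) {}

    public static List<Scenario> sweep(double[] entryGrid, double[] tpGrid, double[] slGrid,
                                       double defaultSl, int keepTop,
                                       ToDoubleFunction<Scenario> backtestProfit) {
        // Stage 1: vary only the most important parameters, keep the stop loss fixed.
        List<Scenario> stage1 = new ArrayList<>();
        for (double e : entryGrid)
            for (double tp : tpGrid)
                stage1.add(new Scenario(e, tp, defaultSl));
        List<Scenario> best = rankByProfit(stage1, backtestProfit)
                .subList(0, Math.min(keepTop, stage1.size()));

        // Stage 2: vary a less important parameter, but only around the stage-1 winners.
        List<Scenario> stage2 = new ArrayList<>();
        for (Scenario s : best)
            for (double sl : slGrid)
                stage2.add(new Scenario(s.entryPips(), s.takeProfitPips(), sl));
        return rankByProfit(stage2, backtestProfit);
    }

    /** Runs the backtest once per scenario and returns the scenarios sorted by profit. */
    private static List<Scenario> rankByProfit(List<Scenario> scenarios,
                                               ToDoubleFunction<Scenario> backtestProfit) {
        Map<Scenario, Double> profit = new HashMap<>();
        for (Scenario s : scenarios) profit.put(s, backtestProfit.applyAsDouble(s));
        List<Scenario> sorted = new ArrayList<>(scenarios);
        sorted.sort(Comparator.comparingDouble(profit::get).reversed());
        return sorted;
    }
}
```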

Test outputs ordered by profit, LEFT PART. Click for larger image.
Test outputs ordered by profit, RIGHT PART. Click for larger image.

The Excel table above shows the best-performing part of a test. Every row is a scenario. This is a first (or zeroth) run, where only a few parameters were changed. The LEFT PART shows the scenario parameters, and the RIGHT PART shows different metrics of these scenarios during the test. These metrics are important for selecting the best-performing scenarios, because profit alone isn't the best guide for choosing the parameters. If you zoom in and read the labels you can guess what they represent or calculate; I don't want to explain them all. This part of the strategy is immature, and I am still looking for a good way to select the scenario ensemble to trade.

Scenarios for live trading.

Finally, I generate the scenarios to trade. The image above shows some of the scenarios for one news event. The parameters are a bit randomized. With the randomization, I try to avoid, for example, the situation where many of my positions are stopped out at the same price. On the right side, you can see some experimental metrics which help me set the risk percentage for a particular scenario. Unfortunately, this still happens manually at the moment.
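
A minimal sketch of this randomization, assuming a scenario is described by entry, stop-loss and take-profit distances in pips: each value gets a small multiplicative jitter so that the orders of different scenarios don't share the exact same levels. The +/-10% range in the usage comment is a made-up example, not the author's setting.

```java
import java.util.Random;

public class ScenarioJitter {
    public record Scenario(double entryPips, double stopLossPips, double takeProfitPips) {}

    private static final Random RNG = new Random();

    // Example: jitter a base scenario by at most +/-10%.
    // Scenario live = jitter(new Scenario(12, 8, 20), 0.10);
    public static Scenario jitter(Scenario base, double maxRelativeJitter) {
        return new Scenario(jitterOne(base.entryPips(), maxRelativeJitter),
                            jitterOne(base.stopLossPips(), maxRelativeJitter),
                            jitterOne(base.takeProfitPips(), maxRelativeJitter));
    }

    private static double jitterOne(double value, double maxRelativeJitter) {
        double factor = 1.0 + (RNG.nextDouble() * 2 - 1) * maxRelativeJitter; // in [1-j, 1+j]
        return value * factor;
    }
}
```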

With the criteria mentioned earlier, there aren't too many events I trade, only 4–8 per week, and most of the trades happen on the most liquid instruments like USD/JPY, EUR/USD, GBP/USD, USD/CAD, sometimes EUR/JPY or AUD/USD.

The strategy is in profit, but not as much as I hoped. On Dukascopy there is a mechanism which I didn't know about until live trading. During news events, I get the message "Stop order has been rejected by interbank party. System will resubmit this order." This happens because there is no demand/supply available for the order to be filled under the conditions I have set. If the order were simply canceled after this message, there wouldn't be any problem: no profit, no loss. But the order isn't canceled; it is automatically resubmitted, of course after the news, with the parameters I wanted to trade the news with; loss, loss, loss. It turned out that it isn't possible to disable this mechanism for my account, so it must be solved in code. The problem is that this messaging and deleting the resubmitted order can take too much time, maybe even 1–2 seconds (I haven't tested it so far): I have to wait for the message that my order wasn't filled and will be resubmitted, and after that I have to send a message back to the server to delete this order. Fortunately, this resubmitting doesn't happen all the time, but often enough. This is the next problem I have to solve.
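
I haven't solved this yet, but one possible direction, assuming the standard JForex IStrategy callbacks, is sketched below: watch onMessage() for the rejection/resubmit notification and cancel the affected pending order right away. Using getContent() to spot the notification, the exact message type it arrives with, and whether this round trip is fast enough are all assumptions that would have to be verified against the live API.

```java
// Rough, untested sketch of a possible fix, not the author's implementation.
import com.dukascopy.api.IMessage;
import com.dukascopy.api.IOrder;
import com.dukascopy.api.JFException;

public class ResubmitGuard {
    /** Call this from IStrategy.onMessage(). */
    public void onMessage(IMessage message) throws JFException {
        IOrder order = message.getOrder();
        if (order == null) return;
        // Assumption: the notification text is available via getContent(); the exact
        // IMessage.Type it arrives with should be identified by logging live messages.
        String content = message.getContent();
        if (content != null && content.contains("resubmit")) {
            order.close();   // cancel the pending order before it is re-sent after the spike
        }
    }
}
```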

Earlier I said that I don't use the economic numbers of the news directly in the strategy, because I can't evaluate them fast enough. But it is possible to use them a bit later for modifying stop losses and take-profit levels, canceling unfilled pending orders, or even opening new orders if the news is very promising. This feature needs a lot of work (programming, testing, and analysis), but it can be the next larger goal.

I hope you found this strategy interesting. You can find the Java code for the tester and the trader algorithm on my GitHub page. If everything goes well, next year I will have enough live trading data to write a short follow-up article about the performance and the changes.

https://github.com/sinusgamma/Forex-News-Trader-Dukascopy-API


Véber István

Quantitative developer, meteorologist, deep learning, data analysis and algorithmic trading enthusiast. https://www.linkedin.com/in/istvanveber/