SUPERALGOS DATA MINING
Advanced Holt-Winters Smoothing
An auto-adaptive forecasting and averaging method for stock/crypto markets, using damped trend seasonal smoothing in Superalgos.
There are plenty of methods to create a moving average of market price. From the simple moving average (SMA) to the highly efficient Ehlers’ MESA Adaptive Moving Average (MAMA), the possibilities in the trader’s toolbox are almost endless. Holt-Winters is a triple exponential smoothing method with a damped trend and a seasonal component. We propose an application to stock/crypto markets and an original optimization method to use it as a non-lagging moving average.
In this article, we will cover the basis of the Holt-Winters damped trend seasonal smoothing and forecasting method. We will look at the theory to explain how we can develop an auto-adaptive algorithm to calculate the initialization parameters and the smoothing constants, and use it to forecast the price one or several periods ahead.
Exponential Smoothing: Back to Basics
Exponential smoothing is a method used in signal processing when chronological data series are affected by random values and present noise. It allows a better visualization of the time series and provides a more stable picture of the signal for further calculations.
Instead of giving the same weight to all the averaged values as SMA does, an exponential smoothing / moving average (EMA) gives an exponential decreasing weight to the older values, resulting in a more accurate and reactive smoothing of the signal.
While the regular EMA has the advantage of computational simplicity, its main drawback is an increased sensitivity to large, sudden signal variations. Still, it is a good compromise between computing resource requirements and signal accuracy.
A. Simple Exponential Smoothing
The simple exponential smoothing of a signal y is described by the equation:
s(t) = α·y(t) + (1 − α)·s(t−1)
With α the smoothing factor (0 < α < 1). Values of α close to 1 give a higher weight to the most recent values, resulting in a more reactive smoothing, but they also keep track of more unwanted noisy components.
It is worth noting that, unlike the SMA, exponential smoothing does not require a window of previous signal values and can be applied from the very first value by choosing an initialization value for the previous smoothed signal.
When using exponential smoothing for the stock market, we generally choose the first close value as this initialization, so the first smoothed value equals the first close value.
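As a quick illustration, the recursion above can be sketched in a few lines of JavaScript. This is a minimal standalone sketch, not the Superalgos indicator code, and the function name is ours; the smoother is seeded with the first close value, as described above:

```javascript
// Simple exponential smoothing: s(t) = alpha * y(t) + (1 - alpha) * s(t-1)
// Seeded with the first close, so the first smoothed value equals it.
function simpleExpSmooth(closes, alpha) {
    let smoothed = []
    let previous = closes[0] // initialization value for the previous smoothed signal
    for (const close of closes) {
        const s = alpha * close + (1 - alpha) * previous
        smoothed.push(s)
        previous = s
    }
    return smoothed
}
```

For example, with alpha = 0.5 and closes [10, 12, 11], this returns [10, 11, 11].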
Having a close look at the smoothing factor, it can be described as a function of the sampling period of the signal and a time constant:
α = 1 − e^(−ΔT/τ)
With ΔT the sampling period of the signal and τ the time constant at which the smoothed signal reaches ~63% (1 − 1/e) of the signal. If the sampling period is small compared to τ, α can be approximated with:
α ≈ ΔT/τ
Hence we can easily close the loop with the well-known EMA formula:
EMA(t) = α·y(t) + (1 − α)·EMA(t−1), with α = 2/(N + 1) for an N-period EMA.
If we consider the value of the previous smoothed signal, we see where the exponential characteristic comes from: in the current smoothed signal, the weights of past values decrease as a geometric progression, i.e. a decreasing exponential function.
B. Double Exponential Smoothing
As already explained, the simple exponential smoothing is outclassed when it comes to large, fast moves. When used with trending signals, the error between the signal and the smoothed values increases: in an upward trend the smoothed values lag below the signal (underestimation), whereas in a downward trend they lag above it (overestimation).
Double exponential smoothing can, to a certain extent, help to partly overcome those limitations. Brown, in 1956, and Holt in 1957, developed a method to introduce trend influence in the exponential smoothing. As we are considering here only the full Holt-Winters method, we will focus only on the Holt double exponential smoothing.
Considering we know at least the first two values of a time series, the smoothed (level) values can be written:
A(t) = α·y(t) + (1 − α)·(A(t−1) + B(t−1))
Where α is the level smoothing constant and B(t) is the trend component, smoothed with its own constant β:
B(t) = β·(A(t) − A(t−1)) + (1 − β)·B(t−1)
The smoothing can be initialized with, for instance:
A(1) = y(1), B(1) = y(2) − y(1)
Since the double exponential smoothing carries a trend term, we can also produce an h-period forecast:
F(t+h) = A(t) + h·B(t)
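The level, trend and forecast steps can be sketched as follows. This is a hedged standalone sketch of Holt's textbook method with one common initialization choice, not the indicator code, and the function name is ours:

```javascript
// Holt's double exponential smoothing: level a, trend b, forecast a + h*b.
// Initialization: a = y[1], b = y[1] - y[0] (one common choice).
function holtSmooth(y, alpha, beta, h) {
    let a = y[1]
    let b = y[1] - y[0]
    for (let t = 2; t < y.length; t++) {
        const prevA = a
        a = alpha * y[t] + (1 - alpha) * (prevA + b) // level update
        b = beta * (a - prevA) + (1 - beta) * b      // trend update
    }
    return { level: a, trend: b, forecast: a + h * b }
}
```

On a perfectly linear series the method reproduces the trend exactly, whatever the smoothing constants: with y = [1, 2, 3, 4, 5] and h = 2 the forecast is 7.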
C. Triple Exponential Smoothing: The Holt-Winters Damped Trend, Seasonal Exponential Smoothing
The robustness of simple exponential smoothing is challenged once a big variation occurs, and the error between the smoothed signal and the signal is not optimal in trending time series. The double exponential smoothing introduces a trend component, but if we are familiar with stock/crypto markets, we know trends don’t last forever and pullbacks are more than common, creating a somewhat periodic/cyclical behavior.
Trying to reproduce the real behavior of the market therefore requires considering trend changes and cycles. This is what the full Holt-Winters smoothing can provide, with a damping factor to model trend variations and a seasonal factor to account for the cycles. The smoothed signal can be expressed as a function of a level component, a trend component and a seasonal component, while the trend is modulated by the damping factor.
We then have a set of 4 formulas for the smoothed signal:
- Level component: A(t) = α·y(t)/S(t−m) + (1 − α)·(A(t−1) + ϕ·B(t−1))
- Trend component: B(t) = β·(A(t) − A(t−1)) + (1 − β)·ϕ·B(t−1)
- Seasonal component: S(t) = γ·y(t)/A(t) + (1 − γ)·S(t−m)
- Smoothed signal: Y(t) = (A(t−1) + ϕ·B(t−1))·S(t−m)
Where m is the seasonal period and α, β, γ, ϕ are the smoothing factors (ϕ being the damping factor). If we want to obtain a forecast, we simply use the smoothed signal equation with a shift equal to the number h of periods we want to forecast:
F(t+h) = (A(t) + (ϕ + ϕ² + … + ϕʰ)·B(t))·S(t+h−m)
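To make the four equations concrete, here is a hedged one-step sketch in plain JavaScript (multiplicative seasonality with a damped trend, mirroring the structure the indicator uses later; the function and parameter names are ours, not Superalgos API):

```javascript
// One update step of damped-trend, multiplicative-seasonal Holt-Winters.
// seasonPrev is S(t - m), the seasonal factor one season back.
function hwStep(y, Aprev, Bprev, seasonPrev, p) {
    const { alpha, beta, gamma, phi } = p
    const A = alpha * y / seasonPrev + (1 - alpha) * (Aprev + phi * Bprev) // level
    const B = beta * (A - Aprev) + (1 - beta) * phi * Bprev                // damped trend
    const S = gamma * y / A + (1 - gamma) * seasonPrev                     // season
    return { A, B, S }
}

// h-step-ahead forecast: (A + (phi + phi^2 + ... + phi^h) * B) * season
function hwForecast(A, B, season, phi, h) {
    let dampedSum = 0
    let phiPow = phi
    for (let k = 1; k <= h; k++) { dampedSum += phiPow; phiPow *= phi }
    return (A + dampedSum * B) * season
}
```

With phi = 1 the damping disappears and the forecast reduces to the familiar Holt form (A + h·B)·S.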
Initialization and Optimization Procedures
All formulas constituting the Holt-Winters smoothing are functions of past values. The initial values of those components are critical to the accuracy of the smoothing, so it is important to have a rational method to initialize them.
The smoothing parameters could be determined empirically, but the number of iterations required to approximate a good set of values is huge. Also, since we want to use the system on stock/crypto markets, those parameters will be subject to variations due to the unpredictable fluctuations of price: the optimal value for one period will not be the optimal value for the next. We need a method to adapt the parameters at each period.
A. Initialization of Trend, Level, Smoothed Signal and Seasonal Components
To initialize the trend and level components, we choose the method proposed by R. J. Hyndman in his book “Forecasting: Methods and Applications”. The method can also be found on his website.
The initial level is the average of the signal over the whole first seasonal period:
A(0) = (y(1) + … + y(m)) / m
And the initial trend is the average of the slopes over the first two seasonal periods:
B(0) = [(y(m+1) − y(1)) + … + (y(2m) − y(m))] / m²
The smoothed signal can then be initialized with:
Y(0) = (A(0) + ϕ·B(0))·S(0)
The seasonal component is initialized with:
S(i) = y(i) / A(0)
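Put together, the initialization can be sketched as a standalone function over the first two seasonal periods of close values (a hedged sketch; `hwInit` is an illustrative name, not the indicator code):

```javascript
// Hyndman-style initialization over the first 2 seasons (m = season length):
//   level  = mean of the first m closes
//   trend  = mean of (y[m+i] - y[i]) / m for i = 0..m-1
//   season = y[i] / level for the first period
function hwInit(closes, m) {
    let level = 0
    for (let i = 0; i < m; i++) level += closes[i]
    level /= m
    let trend = 0
    for (let i = 0; i < m; i++) trend += closes[m + i] - closes[i]
    trend /= m * m // dividing the summed season-to-season differences by m^2
    const season = []
    for (let i = 0; i < m; i++) season.push(closes[i] / level)
    return { level, trend, season }
}
```

For example, with closes [1, 2, 3, 4, 5, 6] and m = 3 this gives a level of 2, a trend of 1, and seasonal factors [0.5, 1, 1.5].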
B. Smoothing Parameters Optimization
The value of the smoothing parameters is critical: they weight the different components of the smoothed signal. Since there is no reliable initialization method, we have no way to intuitively set their initial values. Of course, depending on the parameter, we can figure out a range within which the most relevant value lies, but that remains a rough approximation.
The idea is to determine the best possible value in (almost) real-time. The “price” signal is discrete: the close value of a period arrives periodically, at the pace of the update frequency, so we can use this time lapse to perform an optimization of the parameters. To find the best parameters, we use mean square error minimization, applied sequentially to each parameter over the seasonal period length:
MSE = (1/m)·Σ (Y(t) − y(t))²
If we iterate over a number of candidate values for α, β, γ, ϕ, sampled at evenly spaced steps between 0 and 1, we can find the minimum MSE for each parameter and retain the corresponding optimized value.
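The per-parameter search can be sketched generically (a hedged sketch; `errorFor` stands in for the seasonal-window MSE evaluation, and the names are ours):

```javascript
// Grid search over one smoothing parameter: try Iter candidate values in
// (0, 1] at a 1/Iter step and keep the one with the lowest error.
function optimizeParam(errorFor, Iter) {
    let best = { value: 0, error: Infinity }
    for (let i = 1; i <= Iter; i++) {
        const candidate = i / Iter        // sampled at a 1/Iter step
        const err = errorFor(candidate)
        if (err < best.error) best = { value: candidate, error: err }
    }
    return best
}
```

Running this four times, each time freezing the already-optimized parameters inside `errorFor`, reproduces the sequential scheme described above.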
C. Forecast Parameters Optimization
On the same principle as the smoothing parameters optimization, we can calculate the parameters for the forecast. After the first forecast calculation, we proceed to the same type of optimization, but since it concerns the forecast, we use the Mean Absolute Scaled Error (MASE), as proposed by R. J. Hyndman in 2005. The MASE is defined as the ratio between the mean absolute error of the forecast and the mean absolute error of a naïve (here, seasonal naïve) forecast:
MASE = mean(|y(t) − F(t)|) / Q, with Q = mean(|y(t) − y(t−m)|)
We then seek the minimization of the MASE, iterating over the range from 0 to 1 for each smoothing parameter.
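A standalone sketch of the seasonal MASE computation (hedged; the function name is ours, and the scale Q is the in-sample seasonal naïve error, matching the definition above):

```javascript
// MASE = mean absolute forecast error, scaled by the mean absolute error
// of a seasonal naive forecast (y[t] predicted by y[t - m]).
function seasonalMASE(actual, forecast, m) {
    let q = 0
    for (let i = m; i < actual.length; i++) {
        q += Math.abs(actual[i] - actual[i - m])
    }
    q /= actual.length - m                // seasonal naive scale Q
    let mae = 0
    for (let i = 0; i < actual.length; i++) {
        mae += Math.abs(actual[i] - forecast[i])
    }
    mae /= actual.length                  // mean absolute forecast error
    return mae / q
}
```

A MASE below 1 means the forecast beats the seasonal naïve benchmark on average; a perfect forecast gives 0.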
Holt-Winters Smoothing and Forecast Implementation in Superalgos
The implementation of the Holt-Winters forecast in Superalgos is challenging. The line count is particularly large for an indicator: nearly 560 lines.
The indicator code is divided into eight parts:
- Variables initialization.
- Smoothing components initialization.
- First smoothing signal calculation.
- Smoothing parameters optimization.
- Optimized smoothed signal calculation.
- Forecast initialization.
- Forecast smoothing parameters optimization.
- Forecast and Begin/End forecast values calculation.
A. Variables Initialization
For variable initialization, we have to consider the initial values of the 4 different smoothing parameters, the season period length and the number of periods to forecast.
let candle = record.current //pointer to candles datamine
let Al = 0.4 //initial value for alpha factor
let Bet = 0.4 //initial value for beta factor
let Ph = 0.8 //initial value for phi factor
let Ga = 0.3 //initial value for gamma factor
let sLen = 5 //length of seasonal period ; integer only
let fLen = 6 //number of season period to forecast ; integer only ; forecast will be calculated on fLen * sLen = 30 periods
Since we want to allow future users to tune the number of iterations of the parameters optimization, we also declare it in the same block with:
let Iter = 20 //number of loop to proceed parameters optimization
We choose what seems to be a reasonable amount of iterations. It is critical to keep in mind this number will have a direct impact on the calculation performance.
The variables that will be used to contain the optimized smoothing parameters have to be initialized :
//initializing factors with fixed values, or keeping the optimized values if they exist
if (variable.Alpha === undefined) {
    variable.Alpha = Al
}
if (variable.Beta === undefined) {
    variable.Beta = Bet
}
if (variable.Phi === undefined) {
    variable.Phi = Ph
}
if (variable.Gamma === undefined) {
    variable.Gamma = Ga
}
The code is indented on purpose. It is indeed a good practice to keep track of open/close brackets.
The “close” values of the candles have to be stored in an array of seasonal period length. The calculation of the different parameters also requires storing twice that number of close values, and the forecast will need a number of past close values equal to the forecast length:
//fetching candle close values over sLen periods
variable.lastClose.push(candle.close)
if (variable.lastClose.length > sLen) {
    variable.lastClose.splice(0, 1)
}
variable.lastClose2x.push(candle.close)
if (variable.lastClose2x.length > 2 * sLen) {
    variable.lastClose2x.splice(0, 1)
}
//fetching candle close values over sLen * fLen periods
variable.last30Candles.push(candle.close)
if (variable.last30Candles.length > sLen * fLen) {
    variable.last30Candles.splice(0, 1)
    variable.forecastTrig = 1 //trigger to switch to the next step of forecast optimization as soon as there are enough values
}
We have defined here a variable to trigger the availability of the dataset for the forecast. This will help to keep the code clean.
B. Smoothing Components Initialization
The smoothing components are initialized in two steps:
- Initialization at 0 for the current and previous values of each component
- Calculation of initial value
This initialization sets a trigger variable to 1 once the stage is over. It is first defined as 0 at the beginning of the stage:
//managing the undefined values until lastClose2x reaches sufficient length
if (variable.trig === undefined) {
    variable.trig = 0 //variable to indicate the initialization stage is over
}
The initialization at 0 is then done by:
if (variable.lastClose2x.length < 2 * sLen && variable.trig === 0) {
    variable.Y = 0 //forecast through history variable
    variable.A = 0 //level
    variable.B = 0 //trend
    variable.S[0] = 0 //season
    variable.previousB = variable.B
    variable.previousA = variable.A
    variable.trend = 0
    variable.last31A.push(variable.A)
    variable.last31B.push(variable.B)
    if (variable.last31A.length > sLen * fLen + 1) {
        variable.last31A.splice(0, 1)
        variable.last31B.splice(0, 1)
    }
}
Where Y is the smoothed signal, A the level, B the trend and S[i] the season. We also prepare the recording of the components on forecast length plus 1 periods for the smoothing parameters optimization.
We build the loop to calculate the components as required by the initialization procedure developed by R. J. Hyndman:
if (variable.lastClose2x.length === 2 * sLen && variable.trig === 0) {
    for (var i = 0; i < sLen; i++) {
        variable.previousA = variable.previousA + variable.lastClose[i]
    }
    variable.previousA = variable.previousA / sLen
    variable.last31A.push(variable.A)
    if (variable.last31A.length > sLen * fLen + 1) {
        variable.last31A.splice(0, 1)
    }
    for (var i = 0; i < sLen; i++) {
        variable.previousB = variable.previousB + variable.lastClose2x[i + sLen] - variable.lastClose2x[i]
    }
    variable.previousB = variable.previousB / Math.pow(sLen, 2)
    variable.last31B.push(variable.B)
    if (variable.last31B.length > sLen * fLen + 1) {
        variable.last31B.splice(0, 1)
    }
    variable.S[0] = candle.close / variable.previousA
    variable.trig = 1
    variable.trend = 0
}
Since the initialization procedure is over, the trigger variable is now set to 1. We can now run into the smoothed signal value calculation:
if (variable.lastClose2x.length === 2 * sLen && variable.trig === 1) {
    variable.Y = (variable.previousA + variable.Phi * variable.previousB) * variable.S[0]
    variable.A = variable.Alpha * variable.Y / variable.S[0] + (1 - variable.Alpha) * (variable.previousA + variable.Phi * variable.previousB)
    variable.B = variable.Beta * (variable.A - variable.previousA) + (1 - variable.Beta) * variable.Phi * variable.previousB
    variable.S.push(variable.Gamma * variable.Y / variable.A + (1 - variable.Gamma) * variable.S[0])
    variable.slope = variable.B - variable.previousB
    if (variable.slope < 0) {
        variable.trend = 1
    } else {
        variable.trend = -1
    }
    variable.previousA = variable.A
    variable.previousB = variable.B
    //store the last sLen * fLen + 1 values of variable.A and variable.B for the min-MASE forecast optimization
    variable.last31A.push(variable.A)
    variable.last31B.push(variable.B)
    if (variable.last31A.length > sLen * fLen + 1) {
        variable.last31A.splice(0, 1)
        variable.last31B.splice(0, 1)
    }
}
//keep the Season array at the right size
if (variable.S.length > sLen) {
    variable.S.splice(0, 1)
}
We introduce a rough analysis of the slope of the trend component to determine if the trend is going upward or downward.
C. Smoothing Parameters Optimization
The optimization procedure is the same for each parameter, so we only describe one here. First, we need to initialize the mean square error:
//parameters optimization
//calculation of the initial mean square error
variable.Errors.push(Math.pow(variable.Y - candle.close, 2))
if (variable.Errors.length > sLen) {
    variable.Errors.splice(0, 1)
}
if (variable.Errors.length === sLen) {
    variable.ErrSqSum = 0
    for (var k = 0; k < variable.Errors.length; k++) {
        variable.ErrSqSum = variable.ErrSqSum + variable.Errors[k]
    }
    variable.MS = variable.ErrSqSum / variable.Errors.length
    if (variable.lastClose2x.length === 2 * sLen && variable.trig === 1) {
        //initializing best parameters calculation
        variable.bestAlpha = 0
        variable.bestBeta = variable.Beta
        variable.bestPhi = variable.Phi
        variable.bestGamma = variable.Gamma
        variable.bestError = variable.MS
The error between the smoothed signal and the signal is stored in an array, and the MSE is calculated over the array length. We then run into the initialization of the smoothing parameters: the parameter to be optimized is set to 0, while the others keep their last value. It is worth noting we use the same procedure for each parameter, so once a parameter is optimized, its value is used as determined for the remaining parameters. The very first loop might give irrelevant values, but the procedure will give fully optimized results no later than the second loop.
We have defined a variable to keep the best error result. This calculation is as simple as to keep the value if it is lower than the last stored, then keeping also track of the parameter value corresponding to the error. Look at the example for α:
//Alpha optimization
for (var i = 0; i < Iter; i++) {
    variable.sS = []
    variable.sError = []
    variable.bestAlpha = variable.bestAlpha + 1 / Iter //candidate value, stepped by 1/Iter between 0 and 1
    variable.sS[0] = variable.S[0]
    variable.previoussA = variable.previousA
    variable.previoussB = variable.previousB
    variable.sErrSqSum = 0
    for (var j = 0; j < sLen; j++) {
        variable.sY = (variable.previoussA + variable.bestPhi * variable.previoussB) * variable.sS[0]
        variable.sA = variable.bestAlpha * variable.sY / variable.sS[0] + (1 - variable.bestAlpha) * (variable.previoussA + variable.bestPhi * variable.previoussB)
        variable.sB = variable.bestBeta * (variable.sA - variable.previoussA) + (1 - variable.bestBeta) * variable.bestPhi * variable.previoussB
        variable.sS.push(variable.bestGamma * variable.sY / variable.sA + (1 - variable.bestGamma) * variable.sS[0])
        variable.previoussA = variable.sA
        variable.previoussB = variable.sB
        variable.sError.push(Math.pow(variable.sY - variable.lastClose[j], 2))
    }
    for (var k = 0; k < variable.sError.length; k++) {
        variable.sErrSqSum = variable.sErrSqSum + variable.sError[k]
    }
    variable.sMS = variable.sErrSqSum / variable.sError.length
    if (variable.sMS < variable.bestError) {
        variable.bestError = variable.sMS
        variable.Alpha = variable.bestAlpha
    }
}
variable.bestError = variable.MS
variable.bestBeta = 0
You may have noticed we directly prepare the next parameter optimization at the end of the procedure. Once the best value for a parameter is found, it is stored in the variable previously initialized and used (variable.Alpha in this case).
This procedure is repeated 4 times, once per parameter. The number of iterations is set to 20, so each pass of the optimization procedure takes 80 iterations in total.
D. Forecast Initialization
Now it is time to run into the initialization of the forecast. We choose here to calculate a direct forecast value for 1 period, and the forecasted percentage change over 5, 10, 20 and 30 periods (30 periods is the limit since it is somewhat hardcoded through the parameters, but it can be changed):
//initializing forecast data
variable.fS = [] //purge the season array for forecast data
variable.previousfA = variable.A //equivalent of the A variable for the forecast calculation
variable.previousfB = variable.B //equivalent of the B variable for the forecast calculation
//calculating forecast data, 1st set
if (variable.forecastTrig === undefined) {
    variable.forecastTrig = 0
}
if (variable.S.length === sLen && variable.trig === 1 && variable.forecastTrig === 0) {
    for (var j = 0; j < fLen; j++) {
        for (var n = 0; n < sLen; n++) {
            variable.fY.push((variable.previousfA + variable.Phi * variable.previousfB) * variable.S[n])
            variable.fA = variable.previousfA + variable.Phi * variable.previousfB
            variable.fB = variable.Phi * variable.previousfB
            variable.previousfA = variable.fA
            variable.previousfB = variable.fB
            if (variable.fY.length > fLen * sLen) {
                variable.fY.splice(0, 1)
            }
            variable.prevfY.push(variable.fY[0])
            if (variable.prevfY.length > fLen * sLen) {
                variable.prevfY.splice(0, 1)
            }
        }
    }
    variable.forecast = variable.fY[0]
    variable.forecast5 = 100 * (variable.fY[4] - variable.fY[0]) / variable.fY[0]
    variable.forecast10 = 100 * (variable.fY[9] - variable.fY[0]) / variable.fY[0]
    variable.forecast20 = 100 * (variable.fY[19] - variable.fY[0]) / variable.fY[0]
    variable.forecast30 = 100 * (variable.fY[29] - variable.fY[0]) / variable.fY[0]
    variable.SW = 1
} else {
    variable.forecast = 0
    variable.forecast5 = 0
    variable.forecast10 = 0
    variable.forecast20 = 0
    variable.forecast30 = 0
}
Once again, we use a trigger variable to highlight the end of the initialization procedure.
E. Forecast Smoothing Parameters Optimization
After the first forecast value calculation we proceed to the smoothing parameters optimization, but this time with the MASE procedure.
//Forecast optimization after the 1st data set
if (variable.prevfY.length === sLen * fLen && variable.trig === 1 && variable.forecastTrig === 1 && variable.SW === 1) {
    //initializing best parameters calculation
    //calculate the value of the Mean Absolute Scaled Error with seasonal period influence
    //calculation of the stable measure of the scale of the training time series, i.e. candle.close, over the forecast number of periods minus one season
    variable.sumY = 0
    variable.totalQT = 0
    for (var i = sLen + 1; i < fLen * sLen; i++) {
        variable.sumY = variable.sumY + Math.abs(variable.last30Candles[i] - variable.last30Candles[i - sLen])
    }
    variable.Q = variable.sumY / (fLen * sLen - sLen)
    for (var i = 0; i < variable.prevfY.length; i++) {
        variable.totalQT = variable.totalQT + Math.abs(variable.last30Candles[i] - variable.prevfY[i]) / variable.Q
    }
    variable.MASE = variable.totalQT / variable.prevfY.length
    variable.MbestAlpha = 0
    variable.MbestBeta = variable.Beta
    variable.MbestPhi = variable.Phi
    variable.MbestGamma = variable.Gamma
    variable.bestError = variable.MASE
We use the same principle for the code execution: we set the parameter to optimize at 0 and keep the last value of the others. Here is the example for α:
//Alpha optimization
for (var i = 0; i < Iter; i++) {
    variable.sS = []
    variable.sError = []
    variable.MbestAlpha = variable.MbestAlpha + 1 / Iter //candidate value, stepped by 1/Iter between 0 and 1
    variable.sS[0] = variable.S[0]
    variable.previoussA = variable.last31A[0]
    variable.previoussB = variable.last31B[0]
    for (var j = 0; j < sLen; j++) {
        variable.sY = (variable.previoussA + variable.MbestPhi * variable.previoussB) * variable.sS[0]
        variable.sA = variable.MbestAlpha * variable.sY / variable.sS[0] + (1 - variable.MbestAlpha) * (variable.previoussA + variable.MbestPhi * variable.previoussB)
        variable.sB = variable.MbestBeta * (variable.sA - variable.previoussA) + (1 - variable.MbestBeta) * variable.MbestPhi * variable.previoussB
        variable.sS.push(variable.MbestGamma * variable.sY / variable.sA + (1 - variable.MbestGamma) * variable.sS[0])
        variable.previoussA = variable.sA
        variable.previoussB = variable.sB
    }
    variable.sumY = 0
    variable.totalQT = 0
    for (var k = sLen + 1; k < fLen * sLen; k++) { //note: not reusing i, which indexes the outer loop
        variable.sumY = variable.sumY + Math.abs(variable.last30Candles[k] - variable.last30Candles[k - sLen])
    }
    variable.Q = variable.sumY / (fLen * sLen - sLen)
    for (var n = 0; n < variable.prevfY.length; n++) {
        variable.totalQT = variable.totalQT + Math.abs(variable.last30Candles[n] - variable.prevfY[n]) / variable.Q
    }
    variable.sMS = variable.totalQT / variable.prevfY.length
    if (variable.sMS < variable.bestError) {
        variable.bestError = variable.sMS
        variable.MAlpha = variable.MbestAlpha
        variable.Alpha = variable.MbestAlpha
    }
}
variable.bestError = variable.MASE
variable.MbestBeta = 0
Once the first optimization is done, we can move on to a “routine” forecast calculation:
//calculating forecast data after the 1st set
if (variable.S.length === sLen && variable.trig === 1 && variable.forecastTrig === 1) {
    for (var j = 0; j < fLen; j++) {
        for (var n = 0; n < sLen; n++) {
            variable.fY.push((variable.previousfA + variable.MPhi * variable.previousfB) * variable.S[n])
            variable.fA = variable.previousfA + variable.MPhi * variable.previousfB
            variable.fB = variable.MPhi * variable.previousfB
            variable.previousfA = variable.fA
            variable.previousfB = variable.fB
            if (variable.fY.length > fLen * sLen) {
                variable.fY.splice(0, 1)
            }
            variable.prevfY.push(variable.fY[0])
            if (variable.prevfY.length > fLen * sLen) {
                variable.prevfY.splice(0, 1)
            }
        }
    }
    variable.forecast = variable.fY[0]
    variable.forecast5 = 100 * (variable.fY[4] - variable.fY[0]) / variable.fY[0]
    variable.forecast10 = 100 * (variable.fY[9] - variable.fY[0]) / variable.fY[0]
    variable.forecast20 = 100 * (variable.fY[19] - variable.fY[0]) / variable.fY[0]
    variable.forecast30 = 100 * (variable.fY[29] - variable.fY[0]) / variable.fY[0]
} else {
    variable.forecast = 0
    variable.forecast5 = 0
    variable.forecast10 = 0
    variable.forecast20 = 0
    variable.forecast30 = 0
}
We also prepare the framing of min/max forecast values over 5, 10, 20 and 30 periods. This may help to evaluate the accuracy of the forecast on the chart.
//identifying max and min forecast values until the value used to calculate the first member of the array is out of scope
//the idea is to frame the forecast value with the corresponding min and max at the same distance
variable.lastForecast5.push(variable.forecast5)
if (variable.lastForecast5.length > 5) {
    variable.lastForecast5.splice(0, 1)
}
variable.maxForecast5 = Math.max.apply(Math, variable.lastForecast5)
variable.minForecast5 = Math.min.apply(Math, variable.lastForecast5)
variable.lastForecast10.push(variable.forecast10)
if (variable.lastForecast10.length > 10) {
    variable.lastForecast10.splice(0, 1)
}
variable.maxForecast10 = Math.max.apply(Math, variable.lastForecast10)
variable.minForecast10 = Math.min.apply(Math, variable.lastForecast10)
variable.lastForecast20.push(variable.forecast20)
if (variable.lastForecast20.length > 20) {
    variable.lastForecast20.splice(0, 1)
}
variable.maxForecast20 = Math.max.apply(Math, variable.lastForecast20)
variable.minForecast20 = Math.min.apply(Math, variable.lastForecast20)
variable.lastForecast30.push(variable.forecast30)
if (variable.lastForecast30.length > 30) {
    variable.lastForecast30.splice(0, 1)
}
variable.maxForecast30 = Math.max.apply(Math, variable.lastForecast30)
variable.minForecast30 = Math.min.apply(Math, variable.lastForecast30)
F. Forecast and Begin/End Forecast Values Calculation
Of course, since it is a forecast, i.e. future values compared to the current candle values, we need to calculate the position of the forecast in time:
//calculate time for forecast
variable.T5Begin = candle.begin + (candle.end - candle.begin) * 5
variable.T5End = candle.end + (candle.end - candle.begin) * 5
variable.T10Begin = candle.begin + (candle.end - candle.begin) * 10
variable.T10End = candle.end + (candle.end - candle.begin) * 10
variable.T20Begin = candle.begin + (candle.end - candle.begin) * 20
variable.T20End = candle.end + (candle.end - candle.begin) * 20
variable.T30Begin = candle.begin + (candle.end - candle.begin) * 30
variable.T30End = candle.end + (candle.end - candle.begin) * 30
And this is it! After the data mining calculation, we will obtain a chart like this:
Performance and Forecast Accuracy
The Holt-Winters forecast from the Quasar data mine in Superalgos has been run on the BTC/USDT pair, from the 1-minute to the 24-hour chart, using a Raspberry Pi 4 board with 4 GB of RAM. The minimum loop time observed while running only the data fetching from the exchange (Binance), candle mining and the Holt-Winters forecast (i.e. the minimum mining requirement) is 3 minutes. Looking at the task manager, the limit is clearly the computing capability of the RPi, whereas the memory impact is low. So all values below the 5-min chart arrive too late. Using more powerful hardware should help reduce the execution time.
We observe that, most of the time, the forecast range over 5 periods is pretty good, but when a sudden, important change in the trend occurs, the forecast lags although the trend is preserved.
Measurements of the accuracy of the smoothed signal were made on the 30-min and 1-hour timeframes:
Both charts show pretty good accuracy. Since Superalgos evaluates the smoothed signal throughout the considered period, we get a good forecast of the expected close value of the current candle; once the candle is closed, we get a very good smoothed signal.
Conclusion
The Holt-Winters forecast shows a good ability to provide a triple exponential moving average with almost no lag and very good accuracy. It can provide a forecast of the current period and up to 30 periods ahead. Even if those forecast results are interesting, they should be used with great care. In the end, it is only mathematics, and the only credit it deserves is the one you are willing to give it.
The Holt-Winters forecast is available in the Quasar data mine in Superalgos. It can be reused and integrated freely on the condition that you link to this article and the Superalgos website.
Disclaimer: The content of this article is for educational purpose only and does not constitute financial advice. Trading is not suitable for everybody; seek professional advice. Use this article at your own risk.
If you enjoyed this article and want to participate in the most promising open-source social trading platform, come and join us!
- Superalgos Community: https://t.me/superalgoscommunity
- Superalgos Support Group: https://t.me/superalgossupport
- Superalgos Develop Group: https://t.me/superalgosdevelop
- Superalgos Data Mining: https://t.me/superalgosdatamining
- Superalgos Machine Learning Group: https://t.me/superalgosmachinelearning
- On Discord Server: https://discord.gg/CGeKC6WQQb