CodeX

Everything connected with Tech & Code. Follow to join our 1M+ monthly readers

From Chaos to Alpha, Part 2: Supercharging a Machine Learning Strategy with Lorenz Features

3 min read · Sep 4, 2025


Introduction

In Part 1 of this series, we explored a fascinating concept from chaos theory — the Lorenz attractor — and used it to build a novel, rule-based trading strategy. We demonstrated how the market’s “state” could be described by its position, velocity, and acceleration, allowing us to identify profitable trend transitions.

The results were promising, but they left us with a tantalizing question: can these same concepts provide an even greater edge when used within a more sophisticated Machine Learning framework?

In this article, we’ll answer that question. We will take our Lorenz state features and inject them into a classic ML-powered trend-following strategy, the DonchianBreakoutStrategy, and use the AlphaSuite platform to optimize and validate its performance.

The Strategy: Donchian Channel Breakout

The DonchianBreakoutStrategy is a trend-following system that aims to capture sustained moves. Its core logic is simple:

  1. Identify a stock in a long-term uptrend.
  2. Enter a trade when the price breaks above its recent high (the upper Donchian Channel).

The primary challenge for this strategy is filtering out “false breakouts” — moves that quickly reverse. Our hypothesis is that by giving the strategy’s machine learning model access to the Lorenz state features, it can learn to distinguish between high-conviction breakouts and low-probability fakes.
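The two-step entry logic above can be sketched in a few lines of pandas. This is an illustrative reconstruction, not the AlphaSuite implementation: the function name, the default period lengths, and the simple moving-average trend filter are all placeholders.

```python
import numpy as np
import pandas as pd

def donchian_signals(close: pd.Series,
                     donchian_period: int = 20,
                     trend_period: int = 100) -> pd.DataFrame:
    """Illustrative Donchian breakout entry signals."""
    out = pd.DataFrame({"close": close})
    # Upper channel: highest close over the lookback, shifted one bar so
    # today's close is compared against *yesterday's* channel.
    out["upper"] = close.rolling(donchian_period).max().shift(1)
    # Long-term trend filter: only consider breakouts while the price
    # trades above its long-term moving average.
    out["trend_up"] = close > close.rolling(trend_period).mean()
    # Entry: a close above the prior upper channel during an uptrend.
    out["entry"] = (close > out["upper"]) & out["trend_up"]
    return out
```

On a steadily rising series, every bar past the warm-up windows is a breakout; the interesting (and hard) cases are the sideways chops this filter is meant to avoid.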

Feature Engineering: The Physics of Price

Instead of using the Lorenz states as hard-coded rules, we will now use them as features for the ML model. This allows the model to learn the complex, non-linear relationships between the market’s “physics” and the probability of a successful trade.

Here is the core code snippet from our DonchianBreakoutStrategy where we calculate these features:

# --- Add Lorenz State Features ---
# Assumes `import numpy as np` and that `data` is a pandas DataFrame
# with a 'close' column, as elsewhere in the strategy.
lorenz_lookback = self.params.get('lorenz_lookback', 50)
lorenz_momentum = self.params.get('lorenz_momentum', 14)

# State X: price position relative to its recent range, normalized to [-1, 1].
# +1 means the price is at its lookback high; -1 means it is at the lookback low.
rolling_min = data['close'].rolling(window=lorenz_lookback).min()
rolling_max = data['close'].rolling(window=lorenz_lookback).max()
rolling_range = rolling_max - rolling_min
data['lorenz_state_x'] = np.where(
    rolling_range > 0,
    2 * ((data['close'] - rolling_min) / rolling_range) - 1,
    0,
)

# State Y: price velocity (momentum over `lorenz_momentum` bars).
data['lorenz_state_y'] = data['close'].pct_change(periods=lorenz_momentum)

# State Z: price acceleration (bar-to-bar change in velocity).
data['lorenz_state_z'] = data['lorenz_state_y'].diff()

With these features, the model can now ask much more intelligent questions before placing a trade: “Is this breakout occurring with high acceleration? Is it starting from an oversold position in its range? Is the current velocity sustainable?”
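To make this concrete, here is a minimal sketch of how a classifier could learn to score breakouts from the three Lorenz features. Everything here is a stand-in: the synthetic data, the labeling rule (outcomes driven mostly by acceleration), and the choice of scikit-learn's GradientBoostingClassifier are assumptions for illustration, not AlphaSuite's actual model or labels.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical feature matrix: one row per breakout bar, columns are
# [lorenz_state_x, lorenz_state_y, lorenz_state_z].
X = rng.normal(size=(n, 3))
# Hypothetical labels: breakouts with positive acceleration (state_z)
# succeed more often -- a noisy stand-in for real trade outcomes.
y = (X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score two otherwise-identical breakouts that differ only in acceleration.
p_strong = model.predict_proba([[0.8, 0.05, 1.5]])[0, 1]
p_weak = model.predict_proba([[0.8, 0.05, -1.5]])[0, 1]
```

The classifier assigns a much higher success probability to the high-acceleration breakout, which is exactly the kind of distinction a hard-coded rule struggles to express.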

The Experiment: Optimization and Validation

Using the AlphaSuite UI, we ran a full walk-forward backtest on Apple Inc. (AAPL) from 2000 through late 2024. Crucially, we enabled hyperparameter tuning, allowing the platform’s Bayesian optimizer to find the ideal combination of parameters (like donchian_period and lorenz_lookback) that maximized the Calmar Ratio (return relative to drawdown).
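Two ideas in that paragraph are worth unpacking with code: how walk-forward windows roll through the data, and how the Calmar Ratio is computed from an equity curve. The sketch below is a generic illustration under common conventions (annualized return over maximum drawdown magnitude; fixed-size rolling windows), not AlphaSuite's internal implementation.

```python
import numpy as np
import pandas as pd

def calmar_ratio(equity: pd.Series, periods_per_year: int = 252) -> float:
    """Annualized return divided by the magnitude of the max drawdown."""
    years = len(equity) / periods_per_year
    cagr = (equity.iloc[-1] / equity.iloc[0]) ** (1 / years) - 1
    drawdown = equity / equity.cummax() - 1       # <= 0 everywhere
    max_dd = abs(drawdown.min())
    return cagr / max_dd if max_dd > 0 else float("inf")

def walk_forward_windows(n: int, train: int, test: int):
    """Yield (train_slice, test_slice) pairs, rolling forward by `test` bars.

    The optimizer tunes parameters on each train window, then the strategy
    is evaluated out-of-sample on the test window that follows it.
    """
    start = 0
    while start + train + test <= n:
        yield slice(start, start + train), slice(start + train, start + train + test)
        start += test
```

Because every test window sits strictly after its train window, the reported performance never uses parameters fitted on future data, which is what makes the tuned results credible.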

The Results: A Decisive Victory

The results were nothing short of spectacular. The hyperparameter tuning process unlocked the full potential of the Lorenz features, leading to a strategy that was dramatically more profitable without taking on additional risk.

Metric            Baseline Strategy   Tuned Strategy + Lorenz Features
Total Return %    142.1               389.9
Max Drawdown %    -8.03               -8.10
Win Rate %        50.0                61.8
Sharpe Ratio      0.057               0.071

The tuned strategy produced nearly 2.75 times the total return of the baseline while maintaining an almost identical risk profile. The win rate soared from 50% to over 61%, showing that the model had learned to effectively filter for high-quality breakouts. The Lorenz features also ranked highly on the feature importance list (lorenz_state_z: #2, lorenz_state_y: #9, lorenz_state_x: #16), confirming that they played a central role in the improvement.

A screenshot of the Feature Importance list. Image by author.

Conclusion

This experiment is a powerful demonstration of a modern quantitative workflow. By taking an abstract concept from chaos theory, implementing it as a set of descriptive features, and leveraging a machine learning model with walk-forward optimization, we were able to create a strategy with a clear and significant edge.

The Lorenz features provided the “alpha,” but it was the systematic tuning process that unlocked their true potential. This showcases that the most powerful strategies often arise not from a single magic bullet, but from the intelligent combination of creative feature engineering and robust optimization.

If you found this interesting, please check out the project on GitHub and give it a star! You can also try the live demo yourself.


Published in CodeX


Written by Richard Shu

Senior software developer & AI practitioner (full-stack, 20+ years), builder of aitransformer.net, videoplus.studio, and AlphaSuite.
