Quantum-Leap in Finance: Advancing Decision-Making with D-Wave Portfolio Optimization

Suman Kumar Roy
Aug 28, 2023

Portfolio optimization is the problem of allocating a given budget across a set of financial assets, based on their past returns and volatility, so as to maximize return while keeping risk at an acceptable level. Constraints can be imposed to ensure the investment aligns with specific objectives or limitations. Depending on how it is formulated, the problem can be NP-hard, which is part of what makes it exciting.

Portfolio Optimization Problem

Markowitz Portfolio Optimization, also known as Modern Portfolio Theory, is a method in finance developed by Harry Markowitz. It focuses on creating investment portfolios that balance risk and return. Key aspects include diversification, the Efficient Frontier (which plots optimal portfolios), the covariance and correlation between assets, and the option of incorporating a risk-free asset. The goal is to maximize return for a given risk level, or to minimize risk for a desired return. While powerful, the theory assumes normally distributed returns and requires accurate data; it nevertheless forms the basis of most portfolio management and allocation strategies. The real-world data set considered here consists of three main asset classes: equity, fixed income, and money market. Client portfolios typically contain between 9 and 11 assets, subject to constraints imposed by product design or regulatory requirements. The goal is to optimize the portfolio for maximum return.
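In standard notation (a sketch with symbols of my own choosing: w the vector of asset weights, \mu the expected returns, and \Sigma the covariance matrix), the classic mean-variance problem can be written as

$$\min_{w} \; w^{T} \Sigma w \quad \text{subject to} \quad \mu^{T} w \ge r_{\min}, \qquad \sum_{i} w_i = 1,$$

where r_{\min} is the desired return level; sweeping r_{\min} and recording the optimal risk-return pairs traces out the Efficient Frontier.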

Mathematical Problem and Constraints

To maximize the return on your assets, several mathematical constraints must be considered. Volatility is a crucial constraint: it is quadratic in the asset weights, which makes understanding the variability of your assets essential. Another key constraint is the weight (budget) constraint, which requires investing all of your funds and keeping none aside; the weights of all assets must sum to 100%, i.e., to 1. There are also various linear constraints, such as minimum and maximum limits on the investment in specific assets. Additionally, multi-asset constraints restrict the investment in groups of assets or asset classes, or fix the relative allocation between such groups. To solve the problem effectively, it is vital to organize and enforce these constraints so that the investment strategy aligns with the desired outcomes. A formal statement is sketched below.
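In the same notation as before (again a sketch; the per-asset bounds l_i, u_i and the group bounds L_g, U_g are illustrative), the problem described here is

$$\max_{w} \; \mu^{T} w \quad \text{subject to} \quad w^{T} \Sigma w \le \sigma_{\max}^{2}, \qquad \sum_{i} w_i = 1, \qquad l_i \le w_i \le u_i, \qquad L_g \le \sum_{i \in g} w_i \le U_g,$$

with the quadratic volatility constraint, the budget (weight) constraint, the per-asset min/max limits, and the multi-asset group constraints appearing in that order.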

The plain-vanilla QUBO approach uses binary variables to optimize the objective function and find the best configuration of variables. Discretizing the continuous variables is necessary for benchmarking against other solvers, and constraint penalties are included in the objective function. The approach discretizes solutions using a binary representation with a given granularity. Increasing the granularity can, of course, provide better results and a closer approximation of the continuous variables, but it requires more resources, including more qubits to represent a portfolio on a quantum computer. The second realization is that by leveraging the budget constraint we can save resources: it focuses our attention, and the computer's, on a specific range of weights, which increases accuracy and reduces the discretization needed. It also automatically satisfies the min-max constraints, eliminating additional constraints and burdens on the QPU. The remaining constraints and terms of the objective function still need to be modeled, as sketched below.
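The construction below is a minimal sketch of this penalized QUBO approach using the open-source dimod package. The toy returns, covariance, granularity K, and penalty weights are my own illustrative choices, not values from the talk:

```python
import numpy as np
import dimod

# Toy inputs (illustrative, not from the talk): expected returns and
# covariance for three assets.
mu = np.array([0.08, 0.12, 0.10])
sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.15]])
K = 3                               # bits of granularity per asset
lam_risk, lam_budget = 1.0, 10.0    # penalty weights, hand-tuned

n = len(mu)
# Binary encoding: weight_i = sum_k c[(i, k)] * x_{i,k}, with the
# coefficients chosen so each weight lies in [0, 1].
c = {(i, k): 2**k / (2**K - 1) for i in range(n) for k in range(K)}
name = {ik: f'x_{ik[0]}_{ik[1]}' for ik in c}

bqm = dimod.BinaryQuadraticModel('BINARY')

# Linear part: negative return plus the linear term of the budget
# penalty lam_budget * (sum_i weight_i - 1)^2.
for ik, cik in c.items():
    bqm.add_linear(name[ik], -mu[ik[0]] * cik - 2 * lam_budget * cik)

# Quadratic part: risk w^T Sigma w plus the quadratic term of the
# budget penalty; diagonal entries use x^2 = x for binary variables.
for ik, cik in c.items():
    for jl, cjl in c.items():
        q = (lam_risk * sigma[ik[0], jl[0]] + lam_budget) * cik * cjl
        if ik == jl:
            bqm.add_linear(name[ik], q)
        else:
            bqm.add_quadratic(name[ik], name[jl], q)
bqm.offset += lam_budget            # constant from expanding the penalty

# This toy instance (9 binaries) is small enough to solve exactly.
best = dimod.ExactSolver().sample(bqm).first.sample
weights = [sum(c[(i, k)] * best[name[(i, k)]] for k in range(K))
           for i in range(n)]
print('weights:', np.round(weights, 3), 'sum:', round(sum(weights), 3))
```

Note how the budget penalty drives the weights to sum to 1 without being an explicit constraint; this is exactly the burden the CQM formulation later removes.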

In portfolio optimization the primary objective is clear: maximize returns. The requirement to invest all resources, an equality constraint, fixes the overall allocation. Linear constraints then define the limits on individual assets and on combinations of assets, typically as inequalities. The complexity deepens with risk assessment: the variance, being quadratic in the weights, introduces a quadratic constraint to satisfy. Integrating it into the objective function requires strategies such as adding it as a penalty term or introducing a Lagrange multiplier for flexibility, and the numerical values of these multipliers must then be determined, for example by Bayesian optimization. Benchmarking is done against several solvers: QBSolv, D-Wave's hybrid binary quadratic model (BQM) solver, and the newer hybrid constrained quadratic model (CQM) solver. Portfolio optimization thus becomes an interplay of penalty weights and solvers, striking the right balance between return and risk.
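For contrast with the penalty formulation, here is a minimal sketch of the same toy problem as a D-Wave constrained quadratic model built with dimod, where the budget and volatility constraints are stated directly instead of being folded into the objective. The data, the granularity G, and the variance cap vol_limit are again illustrative assumptions:

```python
import numpy as np
import dimod

# Toy inputs again; G sets the granularity (weights in steps of 1/G).
mu = np.array([0.08, 0.12, 0.10])
sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.15]])
G = 100
vol_limit = 0.08            # illustrative cap on the portfolio variance

n = len(mu)
# u_i counts units of 1/G, so the weight of asset i is u_i / G.
u = [dimod.Integer(f'u_{i}', lower_bound=0, upper_bound=G)
     for i in range(n)]

cqm = dimod.ConstrainedQuadraticModel()

# Objective: maximize expected return (stated as a minimization).
cqm.set_objective(-sum(u[i] * float(mu[i] / G) for i in range(n)))

# Budget constraint: all funds invested, weights sum to 1.
cqm.add_constraint(sum(u) == G, label='budget')

# Volatility constraint: portfolio variance stays below the threshold.
risk = sum(u[i] * u[j] * float(sigma[i, j] / G**2)
           for i in range(n) for j in range(n))
cqm.add_constraint(risk <= vol_limit, label='volatility')

# With Leap access, this model would go to the hybrid CQM solver:
# from dwave.system import LeapHybridCQMSampler
# sampleset = LeapHybridCQMSampler().sample_cqm(cqm, time_limit=10)
# feasible = sampleset.filter(lambda d: d.is_feasible)
```

The design difference is the whole point: the CQM solver receives the constraints as constraints, so no penalty weights have to be tuned and feasibility can be checked per sample.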

Performance Evaluation

The solutions obtained by sampling can be plotted by the volatility and expected return of the resulting portfolio, with a cross indicating the classical result for the return and the volatility threshold imposed by the problem. IBM's CPLEX solver is used to determine the true optimum for the given set of assets and portfolio size. The point cloud of solutions from QBSolv shows that some solutions exceed the admitted volatility, while others violate constraints. The hybrid BQM solver reduces the variance of the solutions and satisfies the constraints better. Adding more granularity does not necessarily improve the solutions and demands better search algorithms. The hybrid CQM solver shows a significant improvement in solution quality, with solutions concentrated around the near-optimal point.

The objective in finding the best portfolio is not just the average or mean return but the return relative to the volatility. The Sharpe ratio, here taken as the objective divided by the volatility, is therefore an important quantity to consider. The distributions of objectives, volatilities, and Sharpe ratios can be plotted to compare different strategies. In this case, QBSolv, hybrid BQM, and hybrid CQM were compared for portfolios with 10 and 20 variables. For each strategy, the best permissible portfolio was identified: the one that satisfies all constraints while giving the best return for the given volatility. For portfolios with 10 variables, all strategies were close on this business-relevant metric, with the BQM strategy having a slightly better Sharpe ratio. For portfolios with 20 variables, however, the CQM strategy had the better Sharpe ratio for the best permissible portfolio, while the BQM strategy produced a worse return. While the ensemble performance is an advantage, it requires a lot of modeling effort, and trade-offs must be considered.
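A small helper of the kind used for this post-processing might look as follows (my own sketch, assuming the simplified Sharpe definition above: objective divided by volatility, with no risk-free rate):

```python
import numpy as np

def portfolio_stats(w, mu, sigma):
    """Expected return, volatility and (simplified) Sharpe ratio
    for weights w, expected returns mu and covariance sigma."""
    w = np.asarray(w, dtype=float)
    ret = float(mu @ w)                   # objective: expected return
    vol = float(np.sqrt(w @ sigma @ w))   # portfolio volatility
    return ret, vol, ret / vol            # Sharpe = objective / volatility

# Example: equal weights over the three toy assets used above.
mu = np.array([0.08, 0.12, 0.10])
sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.15]])
print(portfolio_stats([1/3, 1/3, 1/3], mu, sigma))
```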

So far, quantum computers cannot beat classical computers on this problem, since the classical solvers already provide optimal results. However, there may come a point where optimizing a portfolio classically becomes extremely challenging. The focus was therefore on scaling up to a large portfolio of 499 assets, again comparing objectives, volatilities, and Sharpe ratios. Comparing 10 assets (in blue) against 499 assets (in orange), the Sharpe ratios were on par for similar objectives. Scaling up to 499 assets in a portfolio is thus feasible, but it is still not a problem for the classical solver.

In analyzing the performance of the hybrid CQM solver, it was found to violate far fewer constraints than the QBSolv-based construction. None of the solutions generated by CQM violated any constraints, which is desirable. Benchmarking was performed for various granularities, with results showing that higher granularity does not necessarily improve performance. Scaling up to 499 assets showed that CQM still produces valid results, while QBSolv does not. Overall, CQM does a good job of solving the problem at hand.

Hybrid solvers demonstrate significant enhancements in results, most notably the hybrid constrained quadratic model (CQM) solver. In optimization setups where the optimum is well-defined, CQM approximates it exceptionally well. Quantum computing offers a compelling alternative for solving this problem at scale, potentially yielding a quantum advantage. However, constraint adherence requires meticulous attention, as violated constraints can undermine solutions. The current pursuit is to identify a quantum sweet spot, which might involve exploring novel complexities in the problem, such as intricate or distinct constraints, rather than merely scaling up the number of assets.

References:
https://www.youtube.com/watch?v=SWW7Gpg30Nw

#Quantum30 QuantumComputingIndia
