Chapter 194: HHL Algorithm for Finance

1. Introduction

The Harrow-Hassidim-Lloyd (HHL) algorithm, introduced in 2009 by Aram Harrow, Avinatan Hassidim, and Seth Lloyd, represents one of the most celebrated results in quantum computing. It solves linear systems of equations of the form Ax = b, where A is an N x N Hermitian matrix and b is a known vector, producing the solution vector x. The breakthrough lies in its exponential quantum speedup: while the best classical algorithms require O(N * kappa) operations (where kappa is the condition number of A), HHL achieves the same task in O(log(N) * kappa^2 * poly(1/epsilon)) time, where epsilon is the desired precision.

In quantitative finance, linear systems appear everywhere. Portfolio optimization, risk factor models, derivative pricing via PDE discretization, and regression-based strategies all reduce to solving Ax = b at their core. As the dimensionality of these problems grows---more assets, finer discretization grids, larger factor models---the classical computational burden increases dramatically. The HHL algorithm offers a tantalizing path forward: the possibility of solving these massive linear systems exponentially faster on a quantum computer.

This chapter explores the mathematical foundations of HHL, maps it onto concrete trading and portfolio optimization problems, discusses the practical caveats that currently limit real-world deployment, and provides a Rust-based educational implementation that simulates the algorithm’s core mechanics for small systems.

2. Mathematical Foundation

2.1 Problem Setup

Given a Hermitian matrix A of dimension N x N and a vector b of dimension N, we seek the vector x such that:

A * x = b => x = A^(-1) * b

If A is not Hermitian, we can always transform the problem by considering the augmented system:

| 0   A |   | 0 |   | b |
| A*  0 | * | x | = | 0 |

where A* is the conjugate transpose. This augmented matrix is Hermitian by construction.
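Over the reals, conjugate transposition reduces to ordinary transposition, so the construction is easy to verify numerically. A minimal sketch (the helper names `augment` and `is_symmetric` are ours, not part of the chapter's codebase):

```rust
/// Embed a real (possibly non-symmetric) n x n matrix A into the
/// 2n x 2n Hermitian block matrix [[0, A], [A^T, 0]].
fn augment(a: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let n = a.len();
    let mut h = vec![vec![0.0; 2 * n]; 2 * n];
    for i in 0..n {
        for j in 0..n {
            h[i][n + j] = a[i][j]; // top-right block: A
            h[n + j][i] = a[i][j]; // bottom-left block: A^T
        }
    }
    h
}

fn is_symmetric(m: &[Vec<f64>]) -> bool {
    let n = m.len();
    (0..n).all(|i| (0..n).all(|j| (m[i][j] - m[j][i]).abs() < 1e-12))
}

fn main() {
    let a = vec![vec![1.0, 2.0], vec![3.0, 4.0]]; // not symmetric
    let h = augment(&a);
    assert!(!is_symmetric(&a));
    assert!(is_symmetric(&h));
    println!("augmented 4x4 system is Hermitian (symmetric over the reals)");
}
```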

2.2 Eigenvalue Decomposition

Since A is Hermitian, it admits a spectral decomposition:

A = sum_j (lambda_j * |u_j><u_j|)

where lambda_j are the real eigenvalues and |u_j> are the orthonormal eigenvectors. The inverse is then:

A^(-1) = sum_j (1/lambda_j * |u_j><u_j|)

The solution vector can be written as:

|x> = A^(-1)|b> = sum_j (beta_j / lambda_j) * |u_j>

where beta_j = <u_j|b> are the coefficients of |b> in the eigenbasis of A.
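This spectral-decomposition route to the solution can be checked directly on a small symmetric system. A sketch, with hypothetical helper names (it assumes a nonzero off-diagonal entry):

```rust
/// Solve A x = b for a 2x2 symmetric A via its spectral decomposition,
/// mirroring |x> = sum_j (beta_j / lambda_j) |u_j>.
fn solve_spectral(a: [[f64; 2]; 2], b: [f64; 2]) -> [f64; 2] {
    let (a11, a12, a22) = (a[0][0], a[0][1], a[1][1]);
    // Eigenvalues from the characteristic polynomial of a 2x2 matrix.
    let tr = a11 + a22;
    let det = a11 * a22 - a12 * a12;
    let disc = (tr * tr / 4.0 - det).sqrt();
    let (l1, l2) = (tr / 2.0 + disc, tr / 2.0 - disc);
    // Orthonormal eigenvectors from (A - lambda I) v = 0 (assumes a12 != 0).
    let u1 = normalize([a12, l1 - a11]);
    let u2 = normalize([a12, l2 - a11]);
    // beta_j = <u_j|b>, then x = sum_j (beta_j / lambda_j) u_j.
    let (b1, b2) = (u1[0] * b[0] + u1[1] * b[1], u2[0] * b[0] + u2[1] * b[1]);
    [
        b1 / l1 * u1[0] + b2 / l2 * u2[0],
        b1 / l1 * u1[1] + b2 / l2 * u2[1],
    ]
}

fn normalize(v: [f64; 2]) -> [f64; 2] {
    let n = (v[0] * v[0] + v[1] * v[1]).sqrt();
    [v[0] / n, v[1] / n]
}

fn main() {
    // A = [[2, 1], [1, 2]], b = [3, 3] => x = [1, 1].
    let x = solve_spectral([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0]);
    assert!((x[0] - 1.0).abs() < 1e-10 && (x[1] - 1.0).abs() < 1e-10);
}
```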

2.3 The HHL Algorithm Steps

The HHL algorithm proceeds through four key stages:

Step 1: State Preparation. Encode the vector b as a quantum state |b> on n = log2(N) qubits. This requires that ||b|| = 1 (normalized).

Step 2: Quantum Phase Estimation (QPE). Apply quantum phase estimation to the unitary U = e^(iAt) with respect to |b>. QPE maps:

|b> = sum_j beta_j |u_j> => sum_j beta_j |u_j> |~lambda_j>

where |~lambda_j> is a register encoding an approximation of the eigenvalue lambda_j in binary. QPE uses the Quantum Fourier Transform and controlled unitary operations to extract eigenvalue information into an ancilla register.

Step 3: Controlled Rotation. Conditioned on the eigenvalue register, apply a rotation to an ancilla qubit:

|~lambda_j> |0> => |~lambda_j> ( sqrt(1 - C^2/lambda_j^2) |0> + C/lambda_j |1> )

where C is a normalization constant chosen so that C/lambda_j <= 1 for all eigenvalues. This step encodes the crucial 1/lambda_j factor into the amplitude of the ancilla qubit being |1>.

Step 4: Uncomputation and Measurement. Reverse the QPE to uncompute the eigenvalue register, yielding:

sum_j beta_j sqrt(1 - C^2/lambda_j^2) |u_j> |0> + sum_j beta_j (C/lambda_j) |u_j> |1>

Measuring the ancilla qubit and post-selecting on outcome |1>, we obtain:

|x> = (1/||x||) * sum_j (beta_j / lambda_j) |u_j>

which is proportional to A^(-1)|b>, the desired solution.
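One practical consequence of the post-selection step: the probability of measuring the ancilla in |1> is sum_j |beta_j * C / lambda_j|^2, which shrinks as the eigenvalues spread apart. A small illustration (the function name is ours):

```rust
/// Probability of post-selecting the ancilla in |1>:
/// P(1) = sum_j |beta_j * C / lambda_j|^2, with C = min_j |lambda_j|.
fn postselect_prob(betas: &[f64], lambdas: &[f64]) -> f64 {
    let c = lambdas.iter().fold(f64::INFINITY, |m, l| m.min(l.abs()));
    betas
        .iter()
        .zip(lambdas)
        .map(|(b, l)| (b * c / l).powi(2))
        .sum()
}

fn main() {
    // Well-conditioned: eigenvalues close together => high success probability.
    let p_good = postselect_prob(&[0.6, 0.8], &[1.0, 1.2]);
    // Ill-conditioned: a large eigenvalue suppresses its amplitude.
    let p_bad = postselect_prob(&[0.6, 0.8], &[1.0, 100.0]);
    assert!(p_good > p_bad);
    println!("P(1) well-conditioned = {p_good:.4}, ill-conditioned = {p_bad:.4}");
}
```

This is the same kappa sensitivity that reappears in Section 4.2: the worse the conditioning, the more repetitions post-selection costs.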

2.4 Complexity Analysis

Component                   | Classical (Best) | HHL (Quantum)
----------------------------|------------------|--------------------------------------
Matrix dimension dependence | O(N)             | O(log N)
Condition number dependence | O(kappa)         | O(kappa^2)
Precision dependence        | O(1/epsilon)     | O(poly(1/epsilon))
Overall                     | O(N * kappa)     | O(log(N) * kappa^2 * poly(1/epsilon))

The exponential speedup in N is the headline result. For large-scale financial systems with millions of variables but well-conditioned matrices, this speedup could be transformative.

3. Trading Applications

3.1 Portfolio Optimization as a Linear System

Mean-variance portfolio optimization seeks weights w that minimize portfolio variance subject to a target return. The first-order optimality conditions yield:

Sigma * w = lambda * mu

where Sigma is the N x N covariance matrix, mu is the vector of expected returns, and lambda is a Lagrange multiplier. This is precisely an Ax = b problem where A = Sigma and b = lambda * mu.

For a universe of 5000 assets, Sigma is a 5000 x 5000 matrix. Classical solvers handle this in seconds, but when we consider:

  • High-frequency rebalancing requiring solutions every millisecond
  • Multi-period optimization with tens of thousands of decision variables
  • Robust optimization with scenario-dependent covariance matrices

the computational burden can become significant. HHL could, in principle, reduce the dependence on dimension from O(N) to O(log N).
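For intuition, the two-asset case can be solved in closed form by Cramer's rule. The helper below is our own illustration; the covariance and return figures are toy numbers, not market estimates:

```rust
/// Solve the 2x2 optimality system Sigma * w = lambda * mu by Cramer's rule.
fn solve_2x2(sigma: [[f64; 2]; 2], rhs: [f64; 2]) -> [f64; 2] {
    let det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0];
    [
        (rhs[0] * sigma[1][1] - rhs[1] * sigma[0][1]) / det,
        (sigma[0][0] * rhs[1] - sigma[1][0] * rhs[0]) / det,
    ]
}

fn main() {
    // Toy annualized variances/covariance and expected returns.
    let sigma = [[0.04, 0.01], [0.01, 0.09]];
    let mu = [0.08, 0.12];
    let lambda = 1.0; // Lagrange multiplier, absorbed into the scale of w
    let w = solve_2x2(sigma, [lambda * mu[0], lambda * mu[1]]);
    // Verify Sigma * w reproduces lambda * mu.
    let r0 = sigma[0][0] * w[0] + sigma[0][1] * w[1];
    assert!((r0 - lambda * mu[0]).abs() < 1e-12);
    println!("unnormalized weights: {w:?}");
}
```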

3.2 Risk Factor Models

Factor models express asset returns as:

r = B * f + epsilon

where B is the N x K factor loading matrix, f is the K-vector of factor returns, and epsilon is idiosyncratic noise. The covariance matrix becomes:

Sigma = B * F * B^T + D

where F is the factor covariance and D is the diagonal idiosyncratic covariance. Computing risk contributions, stress testing, and scenario analysis all require solving linear systems involving Sigma or related matrices.
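Assembling Sigma from the factor structure is mechanical; the sketch below (our own helper, not from the chapter's codebase) builds B * F * B^T + D for a toy one-factor model:

```rust
/// Assemble Sigma = B * F * B^T + D for an N-asset, K-factor model.
/// `b` is N x K, `f` is K x K, `d` holds the diagonal of D.
fn factor_covariance(b: &[Vec<f64>], f: &[Vec<f64>], d: &[f64]) -> Vec<Vec<f64>> {
    let (n, k) = (b.len(), f.len());
    let mut sigma = vec![vec![0.0; n]; n];
    for i in 0..n {
        for j in 0..n {
            for p in 0..k {
                for q in 0..k {
                    sigma[i][j] += b[i][p] * f[p][q] * b[j][q];
                }
            }
        }
        sigma[i][i] += d[i]; // diagonal idiosyncratic variance
    }
    sigma
}

fn main() {
    // 3 assets, 1 factor: B is 3x1, F is 1x1, D is the idiosyncratic diagonal.
    let b = vec![vec![1.0], vec![0.5], vec![1.5]];
    let f = vec![vec![0.04]];
    let d = vec![0.01, 0.02, 0.015];
    let sigma = factor_covariance(&b, &f, &d);
    // Sigma[0][0] = 1.0^2 * 0.04 + 0.01 = 0.05
    assert!((sigma[0][0] - 0.05).abs() < 1e-12);
    println!("Sigma = {sigma:?}");
}
```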

3.3 Derivative Pricing via PDE Discretization

The Black-Scholes PDE and its generalizations are commonly solved by finite difference methods, which discretize the PDE on a grid and produce a linear system:

A * V(t) = V(t+dt) + boundary_terms

where V is the vector of option values at grid points and A encodes the finite difference scheme. For multi-asset options (basket options, rainbow options), the grid dimensionality grows exponentially with the number of underlyings---precisely the regime where HHL’s logarithmic scaling would be most valuable.
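In one spatial dimension the implicit finite-difference matrix is tridiagonal, and the classical Thomas algorithm solves it in O(N) per time step; the sketch below is our own minimal version, shown as the classical baseline HHL would have to beat:

```rust
/// Thomas algorithm: solve a tridiagonal system, the workhorse for the
/// implicit finite-difference step A * V(t) = rhs in one spatial dimension.
/// `lower` and `upper` hold the n-1 off-diagonal entries of A.
fn thomas_solve(lower: &[f64], diag: &[f64], upper: &[f64], rhs: &[f64]) -> Vec<f64> {
    let n = diag.len();
    let mut c = vec![0.0; n]; // modified upper band
    let mut d = vec![0.0; n]; // modified right-hand side
    c[0] = upper[0] / diag[0];
    d[0] = rhs[0] / diag[0];
    // Forward sweep.
    for i in 1..n {
        let m = diag[i] - lower[i - 1] * c[i - 1];
        if i < n - 1 {
            c[i] = upper[i] / m;
        }
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m;
    }
    // Back substitution.
    let mut x = d;
    for i in (0..n - 1).rev() {
        x[i] = x[i] - c[i] * x[i + 1];
    }
    x
}

fn main() {
    // A = [[2,1,0],[1,2,1],[0,1,2]], x = [1,1,1] => rhs = [3,4,3].
    let x = thomas_solve(&[1.0, 1.0], &[2.0, 2.0, 2.0], &[1.0, 1.0], &[3.0, 4.0, 3.0]);
    for xi in &x {
        assert!((xi - 1.0).abs() < 1e-12);
    }
}
```

For multi-asset grids the bandwidth grows with dimension, which is where this O(N) trick stops scaling and the curse of dimensionality bites.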

3.4 Regression and Factor Analysis

Ordinary least squares regression solves:

(X^T X) * beta = X^T y

where X is the T x N design matrix (T observations, N features) and y is the response vector. The normal equations form a linear system that HHL could accelerate when N is extremely large.
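For a single feature plus intercept, the normal equations collapse to a 2x2 system that can be solved directly; `ols_2` below is an illustrative helper of ours:

```rust
/// Form the normal equations (X^T X) beta = X^T y for a one-feature
/// regression with intercept, then solve the 2x2 system by Cramer's rule.
fn ols_2(x: &[f64], y: &[f64]) -> [f64; 2] {
    let n = x.len() as f64;
    // X^T X = [[n, sum x], [sum x, sum x^2]], X^T y = [sum y, sum x*y]
    let sx: f64 = x.iter().sum();
    let sxx: f64 = x.iter().map(|v| v * v).sum();
    let sy: f64 = y.iter().sum();
    let sxy: f64 = x.iter().zip(y).map(|(a, b)| a * b).sum();
    let det = n * sxx - sx * sx;
    // Returns [intercept, slope].
    [(sy * sxx - sxy * sx) / det, (n * sxy - sx * sy) / det]
}

fn main() {
    // y = 2x + 1 exactly => intercept 1, slope 2.
    let x = [0.0, 1.0, 2.0, 3.0];
    let y = [1.0, 3.0, 5.0, 7.0];
    let beta = ols_2(&x, &y);
    assert!((beta[0] - 1.0).abs() < 1e-10 && (beta[1] - 2.0).abs() < 1e-10);
}
```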

4. Caveats and Practical Limitations

4.1 The Input/Output Problem

The most fundamental limitation of HHL is the input/output problem. The algorithm assumes that:

  1. The vector b can be efficiently loaded into a quantum state (state preparation)
  2. The matrix A can be efficiently simulated as a Hamiltonian e^(iAt)
  3. The output quantum state |x> can be usefully read out

Each of these steps carries significant overhead:

  • State preparation for an arbitrary N-dimensional vector requires O(N) operations, potentially negating the exponential speedup.
  • Hamiltonian simulation of a dense N x N matrix is generally expensive unless A has special structure (sparse, low-rank, etc.).
  • Readout of the full solution vector x requires O(N) measurements. HHL is most useful when we only need a global property of x (e.g., the inner product <x|M|x> for some observable M).
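The last point deserves emphasis: a quadratic form such as portfolio variance w^T Sigma w is exactly the kind of scalar output that avoids the O(N) readout cost. A classical sketch of such an observable (the helper name is ours):

```rust
/// The kind of scalar HHL can output efficiently: a quadratic form
/// <x|M|x> (e.g. portfolio variance w^T Sigma w) rather than x itself.
fn quadratic_form(m: &[Vec<f64>], x: &[f64]) -> f64 {
    x.iter()
        .enumerate()
        .map(|(i, xi)| {
            xi * x.iter().enumerate().map(|(j, xj)| m[i][j] * xj).sum::<f64>()
        })
        .sum()
}

fn main() {
    let sigma = vec![vec![0.04, 0.01], vec![0.01, 0.09]];
    let w = vec![0.5, 0.5];
    // 0.25*0.04 + 2*0.25*0.01 + 0.25*0.09 = 0.0375
    let variance = quadratic_form(&sigma, &w);
    assert!((variance - 0.0375).abs() < 1e-12);
    println!("portfolio variance = {variance}");
}
```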

4.2 Condition Number Sensitivity

HHL’s complexity scales as kappa^2, where kappa = lambda_max / lambda_min is the condition number. Financial covariance matrices are notoriously ill-conditioned---condition numbers of 10^4 to 10^8 are common. This means:

  • The quantum speedup may be partially or fully offset by large kappa
  • Preconditioning techniques from classical linear algebra become essential
  • Regularization methods (Tikhonov regularization, shrinkage estimators) that reduce kappa are doubly beneficial
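The interplay between kappa and regularization is easy to see numerically on a 2x2 covariance; the helper below is our own illustration:

```rust
/// Condition number of a 2x2 symmetric positive-definite matrix, and the
/// effect of Tikhonov regularization Sigma + alpha * I.
fn condition_number_2x2(a: [[f64; 2]; 2]) -> f64 {
    let tr = a[0][0] + a[1][1];
    let det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    let disc = (tr * tr / 4.0 - det).sqrt();
    (tr / 2.0 + disc) / (tr / 2.0 - disc) // lambda_max / lambda_min
}

fn main() {
    // Highly correlated assets => nearly singular covariance.
    let sigma = [[1.0, 0.99], [0.99, 1.0]];
    let kappa = condition_number_2x2(sigma); // eigenvalues 1.99 and 0.01
    // Shrink toward the identity: Sigma + 0.1 * I.
    let regularized = [[1.1, 0.99], [0.99, 1.1]];
    let kappa_reg = condition_number_2x2(regularized);
    assert!(kappa > 100.0 && kappa_reg < kappa);
    println!("kappa = {kappa:.1}, after Tikhonov = {kappa_reg:.1}");
}
```

Since HHL's cost grows as kappa^2, shrinking kappa by a factor of ten cuts the quantum running time by a factor of a hundred.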

4.3 NISQ Device Limitations

Current Noisy Intermediate-Scale Quantum (NISQ) devices have:

  • Limited qubit counts: QPE requires many ancilla qubits for precision
  • High error rates: Deep circuits needed for QPE suffer from decoherence
  • Limited connectivity: Physical qubit topologies add SWAP gate overhead

Realistic estimates suggest that solving even a 4x4 linear system with useful precision would require thousands of logical (error-corrected) qubits. Fault-tolerant quantum computers capable of running HHL at scale remain years away.

4.4 Quantum-Inspired Classical Algorithms

In 2018, Ewin Tang showed that for certain low-rank matrices, classical algorithms can achieve similar (polynomial rather than exponential) speedups using randomized sampling techniques. This result, known as “dequantization,” narrows the advantage of HHL for specific problem structures common in machine learning and recommendation systems.

5. Implementation Walkthrough (Rust)

Our Rust implementation provides an educational simulator for the HHL algorithm on 2x2 Hermitian systems. While a real quantum implementation would use physical qubits, our simulator demonstrates the mathematical core of the algorithm.

5.1 Core HHL Simulation

The simulator follows the exact mathematical steps of HHL:

// 1. Eigendecompose A
let (eigenvalues, eigenvectors) = eigendecompose_2x2(&a);

// 2. Express |b> in the eigenbasis (rows of `eigenvectors` are eigenvectors)
let beta = [
    eigenvectors[0][0] * b[0] + eigenvectors[0][1] * b[1],
    eigenvectors[1][0] * b[0] + eigenvectors[1][1] * b[1],
];

// 3. Apply the 1/lambda inversion (controlled-rotation core)
let x_eigen = [beta[0] / eigenvalues[0], beta[1] / eigenvalues[1]];

// 4. Transform back to the computational basis: x = sum_j x_eigen[j] * u_j
let x = [
    eigenvectors[0][0] * x_eigen[0] + eigenvectors[1][0] * x_eigen[1],
    eigenvectors[0][1] * x_eigen[0] + eigenvectors[1][1] * x_eigen[1],
];

The key insight is that HHL’s quantum advantage comes from performing steps 1-3 in superposition over all eigenvalues simultaneously, using O(log N) qubits instead of operating on the full N-dimensional space.
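The eigendecompose_2x2 helper can be sketched as follows. This is our own minimal version for symmetric matrices (it assumes a nonzero off-diagonal entry); the production code may differ:

```rust
/// Eigendecomposition of a 2x2 symmetric matrix. Returns (eigenvalues,
/// eigenvectors) with eigenvectors stored as rows. Assumes a[0][1] != 0;
/// a diagonal matrix would need the trivial branch added.
fn eigendecompose_2x2(a: &[[f64; 2]; 2]) -> ([f64; 2], [[f64; 2]; 2]) {
    let tr = a[0][0] + a[1][1];
    let det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    let disc = (tr * tr / 4.0 - det).sqrt();
    let l = [tr / 2.0 + disc, tr / 2.0 - disc];
    let mut vecs = [[0.0; 2]; 2];
    for k in 0..2 {
        // (A - lambda I) v = 0  =>  v = [a01, lambda - a00], then normalize.
        let v = [a[0][1], l[k] - a[0][0]];
        let norm = (v[0] * v[0] + v[1] * v[1]).sqrt();
        vecs[k] = [v[0] / norm, v[1] / norm];
    }
    (l, vecs)
}

fn main() {
    let (l, v) = eigendecompose_2x2(&[[2.0, 1.0], [1.0, 2.0]]);
    assert!((l[0] - 3.0).abs() < 1e-12 && (l[1] - 1.0).abs() < 1e-12);
    // Eigenvectors are orthonormal.
    let dot = v[0][0] * v[1][0] + v[0][1] * v[1][1];
    assert!(dot.abs() < 1e-12);
}
```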

5.2 Portfolio Optimization Pipeline

The trading example constructs the covariance matrix from historical returns fetched from Bybit, formulates the minimum-variance portfolio as a linear system, and solves it using both HHL simulation and classical Gaussian elimination.

// Construct covariance matrix from returns
let cov_matrix = compute_covariance(&returns_matrix);
// Formulate: Sigma * w = ones (minimum variance portfolio)
let b = vec![1.0; n_assets];
// Solve via HHL simulation and classical method
let w_hhl = hhl_solve(&cov_matrix, &b);
let w_classical = gaussian_solve(&cov_matrix, &b);
// Normalize to sum to 1
let w_hhl_normalized = normalize_weights(&w_hhl);

5.3 Classical Comparison

We implement Gaussian elimination with partial pivoting as the classical baseline:

fn gaussian_elimination(a: &[Vec<f64>], b: &[f64]) -> Vec<f64> {
    // Forward elimination with partial pivoting
    // Back substitution
    // O(N^3) complexity for general systems
}

For 2x2 systems, the classical solver is of course faster due to minimal overhead, but the comparison illustrates the algorithmic differences.
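For completeness, a fully worked version of the baseline (our own minimal implementation, for illustration):

```rust
/// Gaussian elimination with partial pivoting; O(N^3) for a dense system.
fn gaussian_elimination(a: &[Vec<f64>], b: &[f64]) -> Vec<f64> {
    let n = b.len();
    // Build the augmented matrix [A | b].
    let mut m: Vec<Vec<f64>> = (0..n)
        .map(|i| {
            let mut row = a[i].clone();
            row.push(b[i]);
            row
        })
        .collect();
    // Forward elimination with partial pivoting.
    for col in 0..n {
        let pivot = (col..n)
            .max_by(|&i, &j| m[i][col].abs().partial_cmp(&m[j][col].abs()).unwrap())
            .unwrap();
        m.swap(col, pivot);
        for row in col + 1..n {
            let factor = m[row][col] / m[col][col];
            for k in col..=n {
                m[row][k] -= factor * m[col][k];
            }
        }
    }
    // Back substitution.
    let mut x = vec![0.0; n];
    for row in (0..n).rev() {
        let s: f64 = (row + 1..n).map(|k| m[row][k] * x[k]).sum();
        x[row] = (m[row][n] - s) / m[row][row];
    }
    x
}

fn main() {
    let a = vec![vec![2.0, 1.0], vec![1.0, 3.0]];
    let x = gaussian_elimination(&a, &[3.0, 4.0]);
    assert!((x[0] - 1.0).abs() < 1e-10 && (x[1] - 1.0).abs() < 1e-10);
}
```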

6. Bybit Data Integration

The implementation fetches real market data from the Bybit public API to construct realistic covariance matrices:

let url = format!(
    "https://api.bybit.com/v5/market/kline?category=spot&symbol={}&interval=60&limit=200",
    symbol
);

We fetch hourly kline (candlestick) data for multiple trading pairs (BTCUSDT, ETHUSDT), compute log returns, and use these returns to build the covariance matrix that forms matrix A in our portfolio optimization system.

The pipeline:

  1. Fetch OHLCV data from Bybit for each asset
  2. Compute log returns: r_t = ln(close_t / close_{t-1})
  3. Build the covariance matrix: Sigma_{ij} = Cov(r_i, r_j)
  4. Formulate the minimum-variance optimization as Sigma * w = 1 (a vector of ones)
  5. Solve using both HHL simulation and classical Gaussian elimination
  6. Compare results and execution times
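Steps 2 and 3 of the pipeline are compact enough to sketch directly; the helper names below are ours, and the price series are toy data rather than live Bybit output:

```rust
/// Steps 2-3 of the pipeline: log returns from closes, then the sample
/// covariance between two return series.
fn log_returns(closes: &[f64]) -> Vec<f64> {
    closes.windows(2).map(|w| (w[1] / w[0]).ln()).collect()
}

fn covariance(x: &[f64], y: &[f64]) -> f64 {
    let n = x.len() as f64;
    let (mx, my) = (x.iter().sum::<f64>() / n, y.iter().sum::<f64>() / n);
    x.iter()
        .zip(y)
        .map(|(a, b)| (a - mx) * (b - my))
        .sum::<f64>()
        / (n - 1.0) // sample (unbiased) covariance
}

fn main() {
    let btc = [100.0, 102.0, 101.0, 103.0];
    let eth = [50.0, 51.5, 50.5, 52.0];
    let (rb, re) = (log_returns(&btc), log_returns(&eth));
    assert_eq!(rb.len(), 3);
    let c = covariance(&rb, &re);
    println!("Cov(BTC, ETH) = {c:.6}");
    assert!(c > 0.0); // the two toy series move together
}
```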

7. Key Takeaways

  1. Exponential Speedup in Theory: HHL solves Ax = b in O(log(N) * kappa^2 * poly(1/epsilon)) versus the classical O(N * kappa), offering exponential improvement in the dimension N. For financial problems with thousands or millions of variables, this is a compelling advantage.

  2. Natural Financial Applications: Portfolio optimization (covariance systems), risk models (factor systems), derivative pricing (PDE discretization), and regression (normal equations) all reduce to linear systems that HHL could accelerate.

  3. The Input/Output Bottleneck: The most practical limitation is not the algorithm itself but loading data in and reading results out. HHL is most useful when we need aggregate properties of the solution (e.g., portfolio variance, expected return) rather than the full weight vector.

  4. Condition Number Matters: Financial covariance matrices tend to be ill-conditioned, and HHL’s kappa^2 dependence means that preconditioning and regularization are essential preprocessing steps.

  5. Not Yet Practical: NISQ devices cannot run HHL at useful scale. The algorithm requires fault-tolerant quantum computers with many logical qubits. Current hardware is at least 5-10 years away from this capability.

  6. Classical Competition is Fierce: Quantum-inspired classical algorithms, improved sparse solvers, and GPU-accelerated linear algebra continue to raise the bar that quantum approaches must clear.

  7. Strategic Preparation: Despite current limitations, understanding HHL and quantum linear algebra prepares practitioners for a future where quantum hardware matures. Firms that understand how to formulate their problems as quantum-amenable linear systems will be best positioned to exploit quantum advantage when it arrives.

  8. Educational Value: Even as a classical simulation, implementing HHL teaches important concepts about eigenvalue decomposition, phase estimation, and the mathematical structure underlying both quantum and classical linear solvers. These concepts are valuable for understanding numerical methods in finance regardless of the quantum computing timeline.