Inspiration

Financial markets are defined by high noise, non-stationarity, and frequent regime shifts. We were inspired by the hypothesis that quantum-derived feature transformations can expose hidden non-linear structure in market data that classical polynomial expansions fail to capture. Our goal was to test this rigorously: not to claim "quantum supremacy," but to find a measurable predictive edge for institutional-grade portfolio management.

What it does

Our project evaluates Quantum Feature Augmentation for predicting 5-day excess returns of S&P 500 constituents. It pulls real-world data, engineers relative features (Stock vs Market), and runs a side-by-side "race" between an 18-dimensional classic baseline and a Quantum Feature Map using a strict walk-forward backtest.
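The strict walk-forward backtest hinges on time-ordered train/validation/test windows that never overlap. A minimal sketch of the window generator, assuming 504/63/20 trading days for the 2-year/1-quarter/20-day blocks described under "How we built it," and a roll step of one test block (the step size is our assumption):

```python
# Sketch of the walk-forward window generator. The 504/63/20 lengths
# approximate the 2-year / 1-quarter / 20-day blocks; rolling forward by
# one test block is an assumption, but it yields 31 periods on 1,200 days.
def walk_forward_windows(n_days, train=504, val=63, test=20):
    """Yield non-overlapping (train, val, test) index ranges in time order."""
    start = 0
    while start + train + val + test <= n_days:
        t0, v0, s0 = start, start + train, start + train + val
        yield range(t0, v0), range(v0, s0), range(s0, s0 + test)
        start += test  # roll forward by one test block: no look-ahead

windows = list(walk_forward_windows(1200))  # 31 rolling periods
```

Because each test block begins only after its training and validation blocks end, no future information can leak into model fitting or hyperparameter selection.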

How we built it

We developed our pipeline using Python, yfinance, and scikit-learn, operating on a 10-ticker S&P 500 universe over 1,200 trading days. Both the classical and quantum models start from the exact same five base features: Momentum 5d, Momentum 20d, Volatility 20d, Trend, and Volume Z.

- Classical baseline: the five base features expanded into an 18-dimensional space by adding five squared terms and eight specific two-way interactions.
- Quantum engine: a 5-qubit hardware-efficient ansatz implemented with the AWS Braket SDK, yielding 45 raw expectation values: 15 single-qubit X/Y/Z terms and 30 two-qubit XX/YY/ZZ interactions.
- Apples-to-apples control: in each training window we dynamically select the top 18 quantum features by variance, exactly matching the classical model's dimensionality.
- Validation: a 31-period rolling walk-forward consisting of a 2-year training block, a 1-quarter validation block, and a 20-day test block, eliminating look-ahead bias.
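The classical 18-dimensional expansion and the variance-based dimensionality matching are simple to sketch in NumPy. The text does not say which 8 of the 10 possible two-way interactions were kept, so the `pairs[:8]` choice below is purely illustrative, and both function names are ours:

```python
import numpy as np

BASE = ["mom_5d", "mom_20d", "vol_20d", "trend", "volume_z"]

def classical_features(X):
    """5 base features -> 18 dims: 5 base + 5 squares + 8 pairwise products.
    Which 8 of the 10 pairs were used is not specified; taking the first 8
    in lexicographic order is an illustrative placeholder."""
    pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)][:8]
    inter = np.stack([X[:, i] * X[:, j] for i, j in pairs], axis=1)
    return np.hstack([X, X ** 2, inter])

def match_dimensionality(Q_train, Q_all, k=18):
    """Keep the k highest-variance quantum features, ranked on the
    training window only, so no test-set information leaks in."""
    idx = np.argsort(Q_train.var(axis=0))[::-1][:k]
    return Q_all[:, idx]
```

Ranking the 45 raw expectation values by training-window variance, then slicing all splits with the same column indices, keeps the comparison at exactly 18 features per model.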

Challenges we ran into

Our initial synthetic-data test produced a negative result: the classical model outperformed the quantum one (classical rho = 0.6122 vs. quantum rho = 0.5511). Instead of hiding this, we used it as a diagnostic, realizing that purely classical data-generating processes do not benefit from quantum embeddings. That insight led us to pivot to real S&P 500 data, whose hidden non-linearities let the quantum model thrive.

Accomplishments that we're proud of

On real S&P 500 data, our quantum pipeline delivered three headline results:

- A 71% win rate: the quantum model's Information Coefficient (IC) strictly exceeded the classical baseline's IC in 22 of 31 rolling out-of-sample periods.
- A 15.2% reduction in IC volatility, measured as the standard deviation of the IC across all 31 rolling windows, indicating a significantly more stable and consistent predictive signal.
- A "Median Flip": the median IC moved from negative (-0.0399) for the classical model to positive (+0.0245) for the quantum model.
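All three numbers reduce to simple statistics over the per-window ICs. A sketch, assuming IC means the Spearman rank correlation between predictions and realized 5-day excess returns (function names are ours):

```python
import numpy as np
from scipy.stats import spearmanr

def ic_per_window(preds, rets):
    """Spearman IC between predictions and realized returns, per window."""
    return np.array([spearmanr(p, r)[0] for p, r in zip(preds, rets)])

def headline_metrics(ic_q, ic_c):
    """Win rate, IC-volatility reduction, and median ICs for the two models."""
    return {
        "win_rate": float(np.mean(ic_q > ic_c)),             # strict wins
        "ic_vol_reduction": 1.0 - ic_q.std() / ic_c.std(),   # stability gain
        "median_ic": (float(np.median(ic_c)), float(np.median(ic_q))),
    }
```

With 31 windows, a 71% win rate corresponds to 22 strict wins, and the volatility reduction compares the standard deviations of the two IC series directly.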

What we learned

We learned that quantum features require aggressive classical regularization to manage their high expressivity, so we tuned the Ridge alpha penalty via grid search on the validation sets. Even then, the classical model often "bled" directionally wrong predictions in noisy regimes, whereas the quantum feature map acted as a structural stabilizer, providing a more reliable edge for risk management.
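The per-window tuning can be sketched as a plain validation-set grid search. The alpha grid below is an assumption (the text only says the penalty was tuned by grid search on the validation sets):

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_ridge_with_validated_alpha(X_tr, y_tr, X_val, y_val,
                                   alphas=(0.1, 1.0, 10.0, 100.0, 1000.0)):
    """Refit with the alpha that minimizes validation MSE. The grid of
    alphas is an illustrative assumption, not the project's actual grid."""
    val_mse = []
    for a in alphas:
        model = Ridge(alpha=a).fit(X_tr, y_tr)
        val_mse.append(np.mean((model.predict(X_val) - y_val) ** 2))
    best = alphas[int(np.argmin(val_mse))]
    return Ridge(alpha=best).fit(X_tr, y_tr), best
```

In the walk-forward loop this would run once per training/validation split, for the classical and quantum feature sets alike, so each window gets its own penalty.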

What's next for Quantum-Enhanced Alpha

The next step is to move beyond exact state-vector simulation and execute our XX/YY/ZZ measurement scheme on physical QPU hardware. We plan to migrate the pipeline to AWS Braket's IonQ or Rigetti backends to measure empirically how device-specific shot noise affects our 15.2% signal-volatility reduction, and to further explore multi-scale temporal dependencies in the quantum Hilbert space.

Built With

Python, yfinance, scikit-learn, AWS Braket SDK