Quantum Feature Augmentation for Financial Market Prediction

The core question we wanted to answer was simple: do quantum-derived features actually help predict financial returns, or are they hype? We set out to answer that rigorously.

Inspiration

Most quantum finance papers set up a weak classical baseline and claim quantum wins. We thought that was intellectually dishonest. If quantum features are genuinely useful, they should be able to beat a strong classical baseline, one specifically engineered to be competitive. That framing shaped everything we built.

How We Built It

We split the pipeline into two parallel tracks feeding the same model:

Classical track: We reverse-engineered the functional forms in the synthetic data-generating process and built a 39-feature library spanning polynomial terms, pairwise interactions \(X_i \cdot X_j\), and log-abs transforms \(\log(|X_i|+1)\). Ridge and Elastic Net regularization handled feature selection.
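A minimal sketch of this feature expansion plus a closed-form ridge fit. The function names and the exact transform set are illustrative; the real library had 39 hand-picked features:

```python
import numpy as np
from itertools import combinations

def build_feature_library(X):
    """Expand raw inputs into polynomial, pairwise-interaction,
    and log-abs features (illustrative subset of the 39-feature library)."""
    cols = [X, X ** 2]                                   # raw + squared terms
    cols.append(np.column_stack([X[:, i] * X[:, j]       # interactions X_i * X_j
                                 for i, j in combinations(range(X.shape[1]), 2)]))
    cols.append(np.log(np.abs(X) + 1))                   # log-abs transforms
    return np.hstack(cols)

def ridge_fit(Phi, y, alpha=1.0):
    """Closed-form ridge regression: w = (Phi^T Phi + alpha I)^-1 Phi^T y."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + alpha * np.eye(n), Phi.T @ y)
```

With the L2 penalty doing the feature selection, the library can be over-complete; redundant columns simply get small weights.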

Quantum track: We used angle encoding to embed classical inputs into quantum states via parameterized circuits:

$$|\psi(X)\rangle = W_{ent} \prod_{k=1}^{n} R_Z^{(k)}(X_k)|0\rangle^{\otimes n}$$

Measurement expectation values \(\langle \psi(X)|\hat{O}|\psi(X)\rangle\) became additional features fed into the same linear model.
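A toy statevector sketch of this track. One caveat: \(R_Z\) acting directly on \(|0\rangle\) only contributes a phase, so this illustration uses the closely related \(R_Y\) angle encoding; the CNOT-chain entangler and per-qubit \(Z\) observables are likewise assumptions for illustration, not the project's exact circuit:

```python
import numpy as np

def _ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def _kron(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def _cnot(n, ctrl, tgt):
    """Full 2^n CNOT matrix (qubit 0 = leftmost / most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[ctrl]:
            bits[tgt] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1.0
    return U

def quantum_features(x):
    """Angle-encode x, entangle with a CNOT chain, return <Z_k> per qubit."""
    n = len(x)
    state = _kron([_ry(xk) for xk in x])[:, 0]       # encoding layer on |0...0>
    for k in range(n - 1):                           # entangling layer W_ent
        state = _cnot(n, k, k + 1) @ state
    Z = np.diag([1.0, -1.0])
    feats = []
    for k in range(n):
        Ok = _kron([Z if j == k else np.eye(2) for j in range(n)])
        feats.append(float(state @ Ok @ state))      # <psi(X)|Z_k|psi(X)>
    return np.array(feats)
```

For two qubits this yields the features \(\cos(x_0)\) and \(\cos(x_0)\cos(x_1)\), i.e. nonlinear and interaction terms produced by the circuit rather than by hand.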

The shared model, Ridge regression, was identical across both tracks. The only variable was the features.

Results

| Condition | MSE | IC |
| --- | --- | --- |
| Raw features | 4.55 | 0.932 |
| Classical augmented | 1.69 | 0.975 |
| Quantum augmented | 1.66 | 0.976 |
| GBM ceiling | 1.12 | 0.984 |
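Both metrics are cheap to reproduce from held-out predictions. We assume IC here means the Pearson correlation between predicted and realized returns; Spearman rank IC is another common convention:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Mean squared error and information coefficient (Pearson correlation
    of predictions with realized returns), as reported in the table."""
    mse = float(np.mean((y_true - y_pred) ** 2))
    ic = float(np.corrcoef(y_true, y_pred)[0, 1])
    return mse, ic
```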

Our most significant contribution is a hybrid quantum-classical inference layer that combines the predictions from the quantum and classical pipelines, learning the blend ratio between the two outputs that is most predictive for a given financial regime and stock.
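A deliberately simplified stand-in for that blending step: a single convex weight chosen on validation data. The actual framework conditions the ratio on regime and stock, which this one-parameter version does not:

```python
import numpy as np

def best_blend_weight(y_val, pred_classical, pred_quantum):
    """Grid-search the convex weight lam minimizing validation MSE of
    lam * quantum + (1 - lam) * classical."""
    lams = np.linspace(0.0, 1.0, 101)
    errs = [np.mean((y_val - (l * pred_quantum + (1 - l) * pred_classical)) ** 2)
            for l in lams]
    return float(lams[int(np.argmin(errs))])
```

In a regime-aware version, one weight would be fit per (regime, stock) bucket instead of globally.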

What We Learned

The key lesson of this project was scope management. We started with a massive scope, aiming to implement multiple quantum algorithms, each with its own challenges. Once we realized this was spreading us too thin, we narrowed down to a single quantum feature method and pushed it as far as we could. Concentrating our effort on one approach produced significantly better results.

Challenges

One of the biggest challenges was navigating the real hardware constraints of quantum computing under tight budget and time pressure. We had to choose between superconducting and ion-trap QPU architectures on AWS Braket, each with meaningful tradeoffs. Ion traps offered higher gate fidelity (+0.6%) and better error mitigation, which matters for detecting weak financial signals, but superconducting devices gave us roughly 89× better cost efficiency and 24/7 availability.

With a $1,000 superconducting budget cap and only 24 hours, we couldn't afford to optimize for quality alone: a single IonQ Forte sample cost $403 versus $7.50 on Rigetti Ankaa-3, meaning the ion trap would have limited us to just 6 QPU-validated training samples compared to 133. Beyond cost, queue latency and device availability windows threatened to eat up our entire hackathon timeline. Our time modeling showed that an IonQ-primary strategy would leave roughly one hour of buffer, making debugging and iteration nearly impossible.

Balancing circuit fidelity, throughput, connectivity requirements, and budget discipline to arrive at a viable quantum feature map pipeline was a nontrivial engineering problem that required weighing every architectural decision. That's why we chose Rigetti.
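The sample-count arithmetic behind that choice, using the per-sample prices quoted above. (The ~89× "cost efficiency" figure presumably also folds in shot counts and availability windows, so only the raw per-sample prices are computed here.)

```python
import math

# Stated figures: $1,000 superconducting budget cap, $7.50 per
# QPU-validated sample on Rigetti Ankaa-3 vs $403 on IonQ Forte.
BUDGET = 1000.0
COST_RIGETTI = 7.50
COST_IONQ = 403.0

samples_rigetti = math.floor(BUDGET / COST_RIGETTI)  # samples affordable on Rigetti
cost_ratio = COST_IONQ / COST_RIGETTI                # per-sample price gap, ~54x
```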
