Inspiration

I’ve always been fascinated by how powerful machine learning models are, but frustrated by how often they are taught as "black boxes." Most tutorials jump straight from a high-level, hand-wavy explanation to a wall of Python code. I wanted something in the middle: something tactile. Inspired by the interactive teaching style of Hacksplaining, I decided to build a platform where users could "touch" the math and build visual intuition before ever writing a line of code.

What it does

Dataxplaining is an interactive compendium of 15+ Machine Learning models. It transforms abstract mathematical concepts into live, manipulable SVG simulations.

  • Neural Insights: Real-time micro-explanations powered by Gemini 3 Flash that provide immediate feedback and diagnostic status as users move sliders and adjust simulation parameters.
  • Voice Lab: A low-latency voice session lets users discuss the active simulation with an AI research assistant.
  • Prophetic Visions: Gemini 2.5 Flash generates Roman-style art to visualize future real-world applications of these mathematical models.

How we built it

The project is built using React and Tailwind CSS. I prioritized performance and raw control, so instead of using standard charting libraries, I built custom SVG-based simulations. This allowed me to perfectly illustrate specific geometric concepts like the "Maximum Margin" in SVMs or the "Error Valleys" of Gradient Descent.
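
To give a flavor of the geometry behind these views, here is a minimal sketch of the math an SVG "Maximum Margin" visualization needs (this is illustrative, not the project's actual code; `marginGeometry` is a hypothetical name): for a 2D linear boundary w·x + b = 0, the margin edges lie on w·x + b = ±1, and the geometric half-width of the margin is 1/|w|, which is exactly what the rendered lines trace.

```typescript
// Hypothetical helper behind an SVG max-margin view.
// For a 2D linear SVM boundary w·x + b = 0, the margin boundaries are
// w·x + b = ±1 and the margin half-width is 1 / |w|.
type Vec2 = [number, number];

function marginGeometry(w: Vec2, b: number) {
  const norm = Math.hypot(w[0], w[1]);
  // Direction along the boundary is perpendicular to w.
  const dir: Vec2 = [-w[1] / norm, w[0] / norm];
  // A point on the line w·x + b = c is ((c - b) / |w|^2) * w.
  const pointOn = (c: number): Vec2 => [
    ((c - b) * w[0]) / (norm * norm),
    ((c - b) * w[1]) / (norm * norm),
  ];
  return {
    halfWidth: 1 / norm, // geometric margin, shrinks as |w| grows
    boundary: { point: pointOn(0), direction: dir },
    upper: { point: pointOn(1), direction: dir },
    lower: { point: pointOn(-1), direction: dir },
  };
}
```

Each returned line (a point plus a direction) maps directly to an SVG `<line>` element, so dragging a slider that scales w visibly widens or narrows the margin.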

For the "brain" of the app, I leveraged the Google Gemini API:

  • Gemini 3 Pro for deep, structured reasoning.
  • Gemini 3 Flash for ultra-fast diagnostic feedback.

Challenges we ran into

  1. Simplification without Distortion: Distilling complex topics like Principal Component Analysis (PCA) or Reinforcement Learning into a 2D browser window without losing the core mathematical truth was incredibly difficult.
  2. Web Audio Buffers: Implementing the Gemini Live API for real-time voice interaction required handling raw PCM audio data. Bridging the gap between a high-level LLM and low-level browser audio processing was a steep learning curve.
  3. The Humility Gap: Since I’m not a master of Data Science, I frequently ran into situations where my code worked, but the math was slightly misleading. This required constant refactoring and double-checking of formulas.
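
The PCM bridging in challenge 2 can be sketched with one small example. Live audio APIs typically stream 16-bit little-endian PCM, while the Web Audio API consumes Float32 samples in [-1, 1]; a converter along these lines (the helper name `pcm16ToFloat32` is mine, not from the project) sits between the two:

```typescript
// Convert raw 16-bit little-endian PCM bytes into Float32 samples
// in [-1, 1], the format the Web Audio API expects.
function pcm16ToFloat32(bytes: Uint8Array): Float32Array {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const out = new Float32Array(bytes.byteLength / 2);
  for (let i = 0; i < out.length; i++) {
    // true = little-endian; divide by 32768 to scale signed 16-bit to [-1, 1]
    out[i] = view.getInt16(i * 2, true) / 32768;
  }
  return out;
}
```

From there, the samples can be copied into an `AudioBuffer` with `copyToChannel` and scheduled for gapless playback.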

Accomplishments that we're proud of

I’m proud of creating a system where math doesn't feel like a chore; it feels like a discovery. Successfully integrating the Gemini Live API to provide a voice-driven learning experience was a major technical win. Seeing a user move a slider and get an immediate, helpful response from an AI that actually "sees" their simulation state feels like the future of education.

What we learned

I learned that Machine Learning is inherently geometric. Whether it's the boundaries of a Decision Tree or the weights of a Neural Network, every algorithm has a shape. If you can't visualize it, you don't truly understand it. This project taught me that the best way to learn is to build something you don't fully understand yet.

What's next for Dataxplaining

I want to expand the library to include more modern architectures like Generative Adversarial Networks (GANs) and Transformers. I also hope to integrate a community feedback loop where experts can suggest corrections to the simulations, ensuring the "visual truth" of the platform is as accurate as possible. Dataxplaining is just the beginning of making math accessible to everyone.
