ImpactFace

Asteroids collide. Debris settles. Your face emerges.

Inspiration

We started with a simple question: what if a physics simulation could read you?

Most science hackathon projects treat data as something cold — numbers fed into a model, outputs printed to a screen. We wanted to flip that. We wanted the science to feel personal, even visceral. Space is the most inhuman environment imaginable, and yet here we are — creatures made of the same stardust as asteroids — making faces at our laptops.

The concept locked in when we realized that asteroid collisions are one of the most chaotic events in the universe, yet the debris they produce follows precise, deterministic physics. Chaos with rules. That's also a pretty good description of human emotion. So we asked: what if your emotion was the input, and a collision was the output?

The name ImpactFace came naturally.

What it does

ImpactFace is a real-time browser application that:

  • Reads your face via webcam using on-device ML (face-api.js), detecting four emotional states — anger, joy, fear, and calm.
  • Selects real asteroids from NASA's JPL Small-Body Database (e.g., 433 Eros, 4 Vesta), displaying actual diameter, mass, and albedo data.
  • Simulates a collision using a Velocity Verlet n-body integrator. Hundreds of debris particles erupt from the impact point.
  • Assembles your emoji from the debris. Fragments are steered toward pixel-precise coordinates derived from the dominant emotion's emoji (😡 😂 😨 😌).
  • Live Physics Controls: Every parameter is adjustable, from particle count (25–200) to gravitational constants and damping factors.

How we built it

The Stack

| Layer | Technology | Why |
| :--- | :--- | :--- |
| Bundler | Vite + TypeScript | Fast dev server, strong typing for physics |
| Physics | p5.js | Draw loop + vector math without Three.js overhead |
| Emotion | face-api.js | Runs entirely in-browser; no API latency |
| Data | NASA JPL SBDB API | Real catalog data, no auth required |
| Deployment | Vercel | Auto-deploys Vite in under 2 minutes |

The Emoji Rasterization Trick

An emoji is just pixels. We render the target emoji to a $42 \times 42$ offscreen canvas at build time and collect every pixel with $\alpha > 80$. Each pixel becomes a target coordinate $(x, y)$ in simulation space:

$$T = \{ (x_i, y_i) \mid \alpha_i > 80,\; i \in [0, W \times H) \}$$
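The pixel-collection step can be sketched as follows. In the app, the RGBA buffer would come from `ctx.getImageData()` on the offscreen canvas the emoji was drawn to; `collectTargets` here is an illustrative helper that just scans such a buffer:

```typescript
// One target coordinate per sufficiently opaque pixel.
interface Target { x: number; y: number; }

// Scan an RGBA buffer (4 bytes per pixel) and keep pixels with alpha > alphaMin.
function collectTargets(
  rgba: Uint8ClampedArray,
  width: number,
  height: number,
  alphaMin = 80,
): Target[] {
  const targets: Target[] = [];
  for (let i = 0; i < width * height; i++) {
    const alpha = rgba[i * 4 + 3]; // A channel of pixel i
    if (alpha > alphaMin) {
      targets.push({ x: i % width, y: Math.floor(i / width) });
    }
  }
  return targets;
}
```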

Velocity Verlet Integration

We chose Velocity Verlet over simple Euler integration because Euler accumulates energy error: the simulation either explodes or collapses. Verlet is second-order accurate and conserves energy far better over long runs.

Per timestep $\Delta t$:

$$\vec{x}_{n+1} = \vec{x}_n + \vec{v}_n \Delta t + \frac{1}{2} \vec{a}_n \Delta t^2$$

$$\vec{a}_{n+1} = \frac{\vec{F}_{n+1}}{m}$$

$$\vec{v}_{n+1} = \vec{v}_n + \frac{1}{2}(\vec{a}_n + \vec{a}_{n+1}) \Delta t$$
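A minimal 1-D version of this step looks like the sketch below. The constant-gravity force law stands in for the app's n-body forces (for constant acceleration, Verlet happens to be exact):

```typescript
interface State { x: number; v: number; a: number; }

// One velocity Verlet step: position first, then force at the new position,
// then velocity from the averaged old/new accelerations.
function verletStep(s: State, accel: (x: number) => number, dt: number): State {
  const x = s.x + s.v * dt + 0.5 * s.a * dt * dt;
  const aNext = accel(x);
  const v = s.v + 0.5 * (s.a + aNext) * dt;
  return { x, v, a: aNext };
}

// Example: free fall under constant a = -9.8, 100 steps of dt = 0.01.
const g = (_x: number) => -9.8;
let s: State = { x: 0, v: 0, a: g(0) };
for (let i = 0; i < 100; i++) s = verletStep(s, g, 0.01);
// After t = 1 s: x = -4.9, v = -9.8 (up to floating-point rounding).
```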

Emotion → Physics Mapping

The dominant emotion selects a PhysicsParams object that controls the simulation:

| Emotion | Speed | $G$ | Elasticity | Fragments | Character |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 😡 Anger | High | Low | Low (shatter) | 600 | Violent, hot red |
| 😂 Joy | Medium | Medium | High (bouncy) | 420 | Energetic, gold |
| 😨 Fear | Low | Very low | Medium | 500 | Scattered, purple |
| 😌 Calm | Very low | High | Medium | 300 | Slow orbital merge |
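A PhysicsParams table mirroring this mapping might look like the sketch below. The field names and the numeric values (other than the fragment counts, which come from the table above) are assumptions, not the project's actual tuning:

```typescript
type Emotion = "anger" | "joy" | "fear" | "calm";

interface PhysicsParams {
  speed: number;      // initial debris speed scale (illustrative units)
  G: number;          // gravitational constant used by the n-body solver
  elasticity: number; // restitution applied on fragment collisions
  fragments: number;  // particle count
  tint: string;       // debris color
}

// Values chosen to follow the qualitative ordering in the mapping table.
const PARAMS: Record<Emotion, PhysicsParams> = {
  anger: { speed: 9, G: 0.2, elasticity: 0.1, fragments: 600, tint: "#ff3b30" },
  joy:   { speed: 5, G: 0.5, elasticity: 0.9, fragments: 420, tint: "#ffd60a" },
  fear:  { speed: 2, G: 0.1, elasticity: 0.5, fragments: 500, tint: "#af52de" },
  calm:  { speed: 1, G: 0.8, elasticity: 0.5, fragments: 300, tint: "#8ad7c1" },
};
```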

Challenges we ran into

  • The Assignment Problem: Naive assignment causes particles to cross paths in a chaotic tangle. We implemented a greedy nearest-available assignment, providing clean convergence without the $O(n^3)$ cost of the Hungarian algorithm.
  • Rendering Inconsistency: Emojis render at different densities on macOS vs Windows. We solved this by normalizing output size and adding a fallback minimum particle count.
  • CORS & Assets: face-api.js model files must be served from the same origin. Loading from a CDN caused silent failures. We fixed this by moving models to /public/models/.

Accomplishments that we're proud of

  • The Formation Mechanic: It genuinely looks like a face emerging from chaos, not just a static overlay.
  • Scientific Grounding: Using real NASA catalog data makes the project feel "load-bearing" rather than decorative.
  • Performance: A full ML pipeline and n-body physics engine running at 60fps in the browser with zero cloud costs.

What we learned

  • Numerical methods matter. Switching from Euler to Verlet was the difference between a broken simulation and a stable one.
  • Art direction is hard. Getting debris to look "natural" required more tuning of ramp functions than actual physics coding.
  • On-device ML is viable. TensorFlow.js in WebGL is fast enough for real-time interaction without privacy concerns.

What's next for ImpactFace

  • Optimal Transport: Implementing the Hungarian algorithm for even cleaner particle transitions.
  • Multi-face Support: Detecting multiple people to spawn simultaneous collisions.
  • Kessler Cascade Mode: Completed faces shatter to become the debris for the next emotion, creating an infinite chain.

Built With

  • css3
  • face-api.js
  • face-expression-net
  • google-cloud-vision-api
  • html5
  • html5-canvas-api
  • javascript
  • mediastream-api
  • nasa-jpl-small-body-database-api
  • p5.js
  • tensorflow.js
  • tiny-face-detector
  • typescript
  • vercel
  • vite