Project Title: VibeOS // A Nervous System for Your Computer

Inspiration

I didn’t want to build another tool that tells you what to do.

Most software is cold, static, and indifferent. It assumes the user is always focused, calm, and rational—which is rarely true. Our operating systems look the same whether we are hitting a flow state at 2 AM or suffering a "Kernel Panic" during a production bug hunt.

VibeOS started as a simple question: What if your computer reacted to your mental state the way your body does?

This project is an experiment in coding for sensation rather than productivity—using visuals, motion, and math-based sound to reflect stress, focus, and flow. I wanted to build a biological abstraction layer between the human and the machine.

What it does

VibeOS is an experimental interface that behaves like a digital nervous system. It acts as an atmospheric regulator for your workspace.

Instead of tasks or dashboards, it uses:

Generative 3D Physics: A central "Vibe Orb" that morphs, distorts, and changes viscosity based on your emotional syntax.

Procedural Audio Synthesis: A real-time drone engine that generates "Digital Biometrics"—audio that isn't played back from files but synthesized live from your mood.

Cognitive Utility: Tools like the "Brain Dump" (a vanishing text input for clearing mental clutter) and the "Focus Physics" (a 3D timer where the orb physically shrinks as time passes).
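The "Focus Physics" shrink can be sketched as a pure mapping from elapsed time to orb scale. The easing curve and `minScale` floor here are illustrative assumptions, not the project's actual tuning:

```typescript
// Hypothetical sketch of the "Focus Physics" shrink: map elapsed time
// to an orb scale that eases from full size toward a small core.
function orbScale(elapsedMs: number, totalMs: number, minScale = 0.2): number {
  const t = Math.min(Math.max(elapsedMs / totalMs, 0), 1); // clamp progress to [0, 1]
  // Ease-out so the shrink feels gradual early and settles near the end.
  const eased = 1 - Math.pow(1 - t, 2);
  return 1 - eased * (1 - minScale);
}
```

Feeding the result into the orb mesh's scale each frame gives the sense of time physically running out.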

When stress increases, the interface tightens, glitches, and growls. When focus returns, it smooths out, slows down, and hums.

How we built it

We focused on "Vibe Engineering"—ensuring the technical stack served the emotional impact.

AI as a Signal Translator: We used Google Gemini not for chat, but as a sentiment-to-physics compiler. It interprets user input and returns a strict JSON schema containing hex codes, distortion vectors, and audio frequencies.
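A minimal sketch of how such a response might be typed and defensively parsed. The field names (`hex`, `distortion`, `frequency`) and the calm fallback values are assumptions for illustration, not the project's real schema:

```typescript
// Assumed shape of the model's sentiment-to-physics response; the exact
// field names here are illustrative, not the project's real schema.
interface VibeState {
  hex: string;        // accent colour, e.g. "#ff3b30"
  distortion: number; // mesh distortion strength, 0..1
  frequency: number;  // base drone frequency in Hz
}

// Defensive parse: fall back to a calm default if the model returns junk.
function parseVibe(raw: string): VibeState {
  const calm: VibeState = { hex: "#3b82f6", distortion: 0.1, frequency: 110 };
  try {
    const v = JSON.parse(raw);
    if (
      typeof v.hex === "string" &&
      typeof v.distortion === "number" &&
      typeof v.frequency === "number"
    ) {
      return v as VibeState;
    }
  } catch {
    // Malformed JSON falls through to the calm default.
  }
  return calm;
}
```

Validating before use matters here because a single malformed response would otherwise break the render loop and the audio graph at once.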

Generative Visuals: Built with React Three Fiber (Three.js). The orb uses custom lerping (linear interpolation) logic to ensure that state changes feel organic and "biological" rather than abrupt.
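The lerping idea in miniature (the per-frame alpha of 0.1 is an illustrative value, not the project's tuning): each frame the current value moves a fixed fraction of the remaining distance toward its target, so the orb eases into new states instead of snapping:

```typescript
// Classic linear interpolation: move `current` a fraction `alpha`
// of the way toward `target`.
function lerp(current: number, target: number, alpha: number): number {
  return current + (target - current) * alpha;
}

// Example frame loop: repeated lerping converges smoothly on the target.
let distortion = 0;
for (let frame = 0; frame < 60; frame++) {
  distortion = lerp(distortion, 1, 0.1);
}
```

Because each step covers only a fraction of the remaining gap, the motion decelerates naturally as it approaches the target, which is what makes the transitions read as "biological".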

Math, Not MP3s: All sound is generated in real-time using the Web Audio API. We synthesized raw waveforms (Sine, Sawtooth, Triangle) and processed them through a dynamic Biquad Filter to create cinematic drones.
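To make "math, not MP3s" concrete, here are the three raw waveforms as pure sample functions (Web Audio's `OscillatorNode` generates these natively; this is just the underlying math):

```typescript
// Pure-math versions of the raw waveforms, each returning a sample in [-1, 1]
// for time t (seconds) at frequency freq (Hz).
function sine(t: number, freq: number): number {
  return Math.sin(2 * Math.PI * freq * t);
}

function sawtooth(t: number, freq: number): number {
  const phase = (t * freq) % 1; // position within the cycle, 0..1
  return 2 * phase - 1;         // ramp from -1 up to +1
}

function triangle(t: number, freq: number): number {
  const phase = (t * freq) % 1;
  return 4 * Math.abs(phase - 0.5) - 1; // falls then rises each cycle
}
```

In the browser these waveforms come straight from oscillator nodes and are shaped downstream by the filter, so the whole soundscape ships as a few dozen lines of code rather than audio files.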

The Hybrid Fallback: To ensure a "Zero-Failure" experience, we built a local keyword-matching engine that takes over if the API hits a quota limit, ensuring the vibe never breaks.
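A minimal sketch of such a local fallback, assuming a simple keyword table (the keywords and mood names here are illustrative, not the project's actual table):

```typescript
// Local keyword-matching fallback: maps raw input to a mood when the
// remote API is unavailable. Keywords and moods are illustrative.
const MOOD_KEYWORDS: Record<string, string[]> = {
  stressed: ["deadline", "panic", "bug", "broken"],
  focused: ["flow", "deep work", "zone"],
  calm: ["relax", "breathe", "chill"],
};

function fallbackMood(input: string): string {
  const text = input.toLowerCase();
  for (const [mood, words] of Object.entries(MOOD_KEYWORDS)) {
    if (words.some((w) => text.includes(w))) return mood;
  }
  return "neutral"; // nothing matched: keep the vibe steady
}
```

It is far cruder than the model, but it always answers instantly, which is exactly the property a "Zero-Failure" fallback needs.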

Challenges we ran into

The "Mosquito" Problem: Raw sawtooth waves at 200Hz sound like a fire alarm. We had to build an intelligent filtering layer that "muffles" aggressive sounds based on the user's stress level, turning annoying noise into an ominous, cinematic growl.

The Time Paradox: We hit 404 errors when Gemini model aliases were deprecated mid-hackathon. This forced us to engineer a more robust API routing layer with hardcoded stable versions and a "Neural Link" recovery protocol.

3D Performance: Keeping the Three.js render loop smooth while simultaneously running real-time audio synthesis and the text "decryption" effect required careful optimization of React's state-update cycle.

Accomplishments that we're proud of

Generative Immersion: Building an audio-visual environment that feels "expensive" but uses 0 KB of external assets—it's all math and code.

The Decryption Hook: Creating the "Matrix-style" scramble effect for text. It turns a "Loading..." state into a moment of anticipation.
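The scramble effect can be sketched as a per-frame function that reveals the real text left to right while the unrevealed tail churns through random glyphs (the glyph set here is an assumption):

```typescript
// Sketch of the "decryption" scramble: characters before `revealed` show the
// real text; the rest cycle through random glyphs each frame.
const GLYPHS = "!<>-_\\/[]{}*^?#";

function scrambleFrame(target: string, revealed: number): string {
  let out = "";
  for (let i = 0; i < target.length; i++) {
    out +=
      i < revealed
        ? target[i]
        : GLYPHS[Math.floor(Math.random() * GLYPHS.length)];
  }
  return out;
}
```

Calling this on an interval and incrementing `revealed` each tick turns a plain loading state into the "Matrix-style" reveal.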

The "Brain Dump" Mechanic: Successfully implementing a digital therapy technique where text physically blurs and vanishes after 5 seconds of no typing—helping users let go of distractions.

What we learned

The hardest part wasn’t the code—it was restraint.

Small changes in color, motion, or the "pitch" of a hum have a much bigger impact than adding ten new features. We learned that AI doesn’t need to "talk" to feel intelligent. Sometimes, returning a few well-chosen parameters that change the color of your room is more powerful than a paragraph of text. Coding can be an emotional medium.

What's next for VibeOS

VibeOS is a platform, not just a toy. Our roadmap includes:

Spotify Pulse Sync: The Orb throbbing to the BPM of your actual music.

IoT Integration: Connecting the accent colors to Philips Hue lights to change your physical room's vibe.

Biometric Hook: Using the Apple Watch heart rate API to auto-vibe the OS without user input.

Binaural Beats: Generated dual-tone audio intended to encourage Alpha/Theta brainwave states for deeper sleep and focus.

Built With

google-gemini, react, react-three-fiber, three.js, web-audio-api

