Hi FigBuild Judges! We are the team behind FRQ-C (pronounced "Frequen-see"), and we’re excited to share the journey of how we turned the invisible world of sound into a vivid, actionable visual experience.


What Inspired Us

Our inspiration began with chromesthesia, a form of synesthesia in which sound involuntarily evokes an experience of color. We realized that while humans naturally use sound localization to find the source of a noise, we are often "blind" to the specific frequencies that subtly dictate our stress levels, focus, and mood.

We wanted to bridge the gap between exteroception (perceiving the external environment) and cross-modal perception (combining senses). By making these invisible signals visible, we aimed to empower people to "design their state of mind" by intentionally shaping their sonic surroundings.

How We Built It

FRQ-C is designed as a real-time reflection tool that processes both live ambient audio and digital playback.

  • The Engine: We built a system that analyzes sound frequencies using live microphone input and/or Spotify playback integration.

  • The Visualization: We mapped the complex audio data into a simplified circular sound spectrum.

  • The Color Logic: We developed a specific color spectrum to give "meaning" to different sounds:

      • Dark Red: Vibrations/Low frequencies

      • Orange: Bass and crowd energy

      • Yellow/Green/Cyan: The nuances of human speech (fundamentals, clarity, and detail)

      • Indigo/Violet: High-pitched alerts and acoustic details

  • Intensity Mapping: To reflect proximity and volume, we programmed the colors to become more saturated and "stronger" as sound sources get louder or closer. (A code sketch of this pipeline follows the list.)
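For the curious, here is a minimal sketch of that pipeline, assuming the browser's Web Audio API for the live-microphone path. The band edges, hue values, and names below are illustrative placeholders, not our exact tuning:

```typescript
// Minimal sketch of the FRQ-C analysis loop (Web Audio API assumed).
// Band edges and hues are illustrative placeholders, not our exact tuning.

type Band = { name: string; hue: number; loHz: number; hiHz: number };

const BANDS: Band[] = [
  { name: "vibration", hue: 0,   loHz: 20,   hiHz: 60 },    // dark red
  { name: "bass",      hue: 30,  loHz: 60,   hiHz: 250 },   // orange
  { name: "speech",    hue: 150, loHz: 250,  hiHz: 4000 },  // yellow -> cyan range
  { name: "alerts",    hue: 270, loHz: 4000, hiHz: 16000 }, // indigo/violet
];

async function start(): Promise<void> {
  // Live microphone input; Spotify playback could feed the same analyser node.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  const hzPerBin = ctx.sampleRate / analyser.fftSize;

  function frame(): void {
    analyser.getByteFrequencyData(bins);
    for (const band of BANDS) {
      const lo = Math.floor(band.loHz / hzPerBin);
      const hi = Math.min(Math.ceil(band.hiHz / hzPerBin), bins.length - 1);
      let sum = 0;
      for (let i = lo; i <= hi; i++) sum += bins[i];
      const level = sum / ((hi - lo + 1) * 255); // 0..1 average loudness
      // Intensity mapping: louder/closer sources render more saturated.
      const color = `hsl(${band.hue}, ${Math.round(level * 100)}%, 50%)`;
      // ...paint this band's arc of the circular spectrum with `color`
    }
    requestAnimationFrame(frame);
  }
  frame();
}
```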

What We Learned

We discovered that sound has a profound "invisible" impact on wellbeing. For example, during testing, we found that:

  • Students could identify that "indigo spikes" (high-pitched notifications) were the true culprits behind their lack of focus in busy cafés.

  • Restless sleepers often don't realize that "dark red" low-frequency vibrations are what's disrupting their deep sleep.

We learned that providing cognitive clarity through visualization is more effective than just showing raw decibel data; people need to see the pattern to change the habit.

Challenges We Faced

The biggest hurdle was avoiding cognitive overload. Translating every single raw frequency into a color initially created a chaotic "visual noise" that was as stressful as the audio itself.

To solve this, we:

  1. Simplified the Spectrum: Grouped frequencies into recognizable categories.

  2. Highlighted the Dominant Frequency: Ensured the most impactful sound in the room was the easiest to see.

  3. Refined Animations: Used smooth transitions to guide the eye rather than jarring jumps. (A sketch of this easing logic follows.)
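Here is a minimal sketch of how those three fixes might compose, reusing the banded loudness values from the engine sketch above; the function name and the SMOOTHING constant are hypothetical:

```typescript
// Minimal sketch of the anti-overload pass, reusing the banded `level` values
// from the engine sketch above. SMOOTHING is a hypothetical tuning constant.

const SMOOTHING = 0.15; // 0..1; lower values ease more gently
const eased = new Map<string, number>(); // band name -> smoothed level

function calm(raw: Map<string, number>): { dominant: string; levels: Map<string, number> } {
  let dominant = "";
  let best = -1;
  for (const [name, level] of raw) {
    // Exponential moving average: smooth transitions instead of jarring jumps.
    const prev = eased.get(name) ?? 0;
    const next = prev + SMOOTHING * (level - prev);
    eased.set(name, next);
    // Track the dominant band so it can be rendered most prominently.
    if (next > best) { best = next; dominant = name; }
  }
  return { dominant, levels: eased };
}
```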

Lastly, privacy was a major concern. We had to ensure that while FRQ-C "hears" the environment, it never records it. We built FRQ-C to process all audio locally on the device, ensuring no recordings are ever stored.
