🌷 Anthos — Every Flower Has a Voice

Inspiration

Spring arrived at Cornell and the gardens exploded with color. And then one of us stopped and thought — what is this season like for someone who cannot see it?

An estimated 250 million people worldwide live with moderate to severe vision impairment. Spring, the most visually rich season of the year, is also the one most quietly excluded from their experience.

Anthropic's CEO Dario Amodei wrote that AI should expand "the space of what is possible to experience" — that more of people's lives could consist of extraordinary moments of beauty and transcendence. That line stopped us cold. Those moments — standing in a garden, feeling the sun — are exactly what blind and low-vision people are shut out of. Not because the garden isn't there. Because the bridge doesn't exist yet.

We built that bridge. We built Anthos.

What It Does

Point your camera at a flower and hear it bloom into sound.

A live camera feed sends frames to our backend every few seconds. AI analyzes each frame and returns the flower's visual features. A deterministic mapping layer converts those features into musical parameters and then a melody — so the same flower always sounds the same, every single time.
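The core idea is that the mapper is a pure function: no randomness anywhere, so identical features always produce identical notes. A minimal sketch (the field names, thresholds, and rules here are illustrative assumptions, not the actual Anthos code):

```python
# Hypothetical sketch of a deterministic feature-to-melody mapper.
# Field names and rules are illustrative, not the actual Anthos code.

D_PENTATONIC = ["D4", "E4", "F#4", "A4", "B4"]

def features_to_melody(features: dict) -> dict:
    """Pure function: the same feature dict always yields the same melody."""
    # Petal count -> note count, clamped to a playable range.
    note_count = max(5, min(16, features["petal_count"]))
    # Color saturation (0-1) -> velocity (loudness).
    velocity = round(0.3 + 0.6 * features["saturation"], 2)
    # Shape -> contour: upright flowers ascend, drooping ones descend.
    step = 1 if features["shape"] == "upright" else -1
    start = 0 if step == 1 else note_count - 1
    notes = [D_PENTATONIC[(start + i * step) % len(D_PENTATONIC)]
             for i in range(note_count)]
    return {"notes": notes, "velocity": velocity, "bpm": 100}
```

Because the function has no random state, calling it twice with the same features returns the exact same melody — which is what makes a flower recognizable by sound.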

The experience is completely hands-free. Open the app, face the garden, and the music finds you.

Large piano keys at the bottom of the screen, tuned to the flower's exact scale, let users reach out and play the flower themselves. Voice announcements guide every moment. The whole interface speaks.

Why This Flower, Why This Tune

Every musical decision is research-backed. We did not pick what sounded pretty. We picked what human perception actually does.

Shape to melody contour. Research by Eitan and Timmers shows that rising melodies feel like upward growth and falling melodies feel like drooping or descending forms. A tall tulip plays an ascending line. A drooping flower plays a descending one. The melody traces the silhouette.

Texture to timbre. Spence at Oxford established that smooth waxy surfaces map to clean pure tones, while rough textures map to harsher timbres. Our mapping:

| Texture | Voice |
| --- | --- |
| Waxy | Pluck — clean attack |
| Velvety | Sine — pure and warm |
| Papery | Triangle — soft complexity |
| Fuzzy | AM synth — warm roughness |

Color saturation to velocity. Vivid, saturated colors map to louder, more energetic sound. Pale colors map to softness. The intensity of the color becomes the intensity of the music.

Symmetry to structure. Radial symmetry produces a repeating motif. Bilateral symmetry produces call and response. Asymmetric flowers play through-composed melodies.

Petal count to note count. The number of petals becomes the number of notes, clamped between 5 and 16. The complexity of the flower becomes the complexity of the phrase.

Why D major for our orange tulip. Major keys map to brightness and warmth. Orange-red maps to high arousal in color psychology. High saturation plus high arousal points directly to a bright major key at a medium-fast tempo. The research gave us D major at 100 bpm. The five piano keys — D4, E4, F♯4, A4, B4 — are the D major pentatonic scale, used across nearly every musical culture on earth. There are no wrong notes. Every touch is rewarded.
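The scale itself falls out mechanically from standard music theory: take the major scale's whole/half-step pattern and keep degrees 1, 2, 3, 5, and 6, dropping the two semitone-adjacent degrees (4 and 7) that create dissonance. A sketch (the function name is ours):

```python
# Hypothetical sketch: derive a major pentatonic scale from scratch.
# Keeping degrees 1, 2, 3, 5, 6 of the major scale drops the two
# semitone-adjacent degrees (4 and 7) -- which is why any combination
# of these notes sounds consonant: there are no wrong notes.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2]        # whole/half steps up to degree 7
PENTATONIC_DEGREES = [0, 1, 2, 4, 5]    # degrees 1, 2, 3, 5, 6 (0-based)

def major_pentatonic(root: str, octave: int = 4) -> list[str]:
    semitones = [NOTE_NAMES.index(root)]
    for step in MAJOR_STEPS:
        semitones.append(semitones[-1] + step)
    return [f"{NOTE_NAMES[semitones[d] % 12]}{octave + semitones[d] // 12}"
            for d in PENTATONIC_DEGREES]
```

For a root of D this yields exactly the five keys on screen: D4, E4, F#4, A4, B4.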

How We Built It

The backend is Flask with a vision AI endpoint. Each camera frame is analyzed, and the endpoint returns a JSON object of visual features. A deterministic server-side mapper converts those features into a melody using fixed rules with no randomness — the same flower always produces the same music.

A perceptual fingerprinting system caches results by image hash so repeated captures of the same flower never hit the API twice.
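The caching pattern is simple to sketch. The real system uses a perceptual fingerprint so near-identical frames share a key; this simplified version (our names, exact-hash only) shows just the cache-on-miss logic:

```python
# Hypothetical sketch of hash-keyed caching. The real system uses a
# perceptual fingerprint; this simplified version keys on an exact hash.
import hashlib

_cache: dict[str, dict] = {}

def analyze_cached(image_bytes: bytes, analyze) -> dict:
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _cache:          # only a cache miss hits the API
        _cache[key] = analyze(image_bytes)
    return _cache[key]
```

Repeated captures of the same frame hit the dict, not the API — the expensive `analyze` call runs once per unique image.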

The frontend uses Tone.js for music synthesis and the Web Speech API for voice announcements. Melodies loop and crossfade smoothly. Haptic feedback pulses on flower detection. The interface is built for high contrast, large touch targets, aria-live announcements, and keyboard accessibility throughout.

Challenges

Making music that felt emotional rather than algorithmic was the hardest part. Early versions were technically correct but cold. We tuned until the orange tulip sounded triumphant and a drooping flower sounded gentle.

Accessibility forced us to rebuild every UX assumption from scratch. No small buttons. No text instructions. No silent failures. The app had to work for someone navigating entirely by sound and touch.

What We Learned

Accessibility is not a feature you add at the end. It is the entire design.

Music is more powerful than speech. Speech tells you what is there. Music makes you feel what is there. That difference is everything.

One flower done perfectly is worth more than ten done poorly.

What's Next

Every new flower is a new voice. We want to walk through an entire botanical garden and hear it sing.

We didn't build an accessibility tool.

We built a new sense.

Built With

Flask · Python · Tone.js · Web Speech API
