Better Me: Vision Streaming

Inspiration

The traditional vision board is a relic of the analog age—a static collage of magazine clippings or a pinned collection of digital images. While effective at defining intent, it often fails to bridge the gap between "what I want" and "how I live."

The inspiration for Better Me came from the realization that manifestation is not a destination, but a frequency. If we could see our future selves moving, breathing, and existing in a daily routine, the psychological barrier between the present and the future would dissolve. We wanted to create a "Desktop Portal"—an ambient, always-on broadcast that serves as a mirror of potential, making the path to one's goals a visible, living reality.

What it does

Better Me is an immersive, 24/7 manifestation tool that transforms a user's abstract desires into a continuous live-streaming video show of their "future self."

The Living Stream: It provides a real-time window into a completed version of your life, showing your "Better Me" engaging in routines like yoga, deep work, or nature communion.

Manifestation Interface: A style-agnostic system where the visuals adapt to user preference, whether that is 2D Studio Ghibli-style animation or high-fidelity 3D realism.

Voice Portal: A bi-directional communication layer where users can speak directly to their future self to seek motivation or update their life blueprint.

Proactive Engagement: Unlike a passive video, the "Better Me" has agency; it can ask the user questions about their current progress, creating a "Proactive Mirror" that encourages immediate action.

How we built it

The project is powered by our proprietary Four-Stage Engine:

The Blueprint (Input): Using Large Language Models (LLMs), we parse raw user goals into structured behavioral data.
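A minimal sketch of this parsing step. The blueprint field names and the example response are invented for illustration; the real LLM call is stubbed out, and only the validation of the structured output is shown:

```python
import json

# Hypothetical schema for the structured "blueprint" the LLM is asked to emit.
# Field names are illustrative, not the production format.
BLUEPRINT_FIELDS = {"activity", "category", "minutes_per_day", "core_value"}

def parse_blueprint(llm_response: str) -> list:
    """Validate the LLM's JSON output against the expected blueprint shape."""
    entries = json.loads(llm_response)
    blueprint = []
    for entry in entries:
        missing = BLUEPRINT_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"blueprint entry missing fields: {missing}")
        blueprint.append(entry)
    return blueprint

# Example: a response the LLM might return for "I want to write and stay fit"
response = json.dumps([
    {"activity": "deep work", "category": "career",
     "minutes_per_day": 120, "core_value": "mastery"},
    {"activity": "yoga", "category": "health",
     "minutes_per_day": 45, "core_value": "calm"},
])
print(parse_blueprint(response)[0]["activity"])  # deep work
```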

The Chronology (Breakdown): The system maps these desires into an actionable 24-hour timetable. The distribution of activities is calculated to maximize the "State of Mind" the user wishes to achieve:

$$T_{total} = \sum_{i=1}^{n} (A_i \cdot W_i)$$

where $A_i$ is the time allocated to activity $i$ and $W_i$ its weight relative to the user's core values.

The Visualization (Generation): We use generative video AI to create seamless 10–15 second looping animations based on "Vibe Directives" (lighting, atmosphere, and subtle motion).
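One way a "Vibe Directive" might be folded into a video-generation prompt. The directive fields and the template string are our own illustration, not the production format:

```python
# Sketch: merge an activity description with a Vibe Directive
# (lighting, atmosphere, subtle motion) into one generation prompt.
def build_prompt(activity: str, directive: dict) -> str:
    return (
        f"{activity}, {directive['lighting']} lighting, "
        f"{directive['atmosphere']} atmosphere, "
        f"subtle motion: {directive['motion']}, seamless 12-second loop"
    )

prompt = build_prompt(
    "morning yoga on a sunlit balcony",
    {"lighting": "soft golden-hour", "atmosphere": "calm",
     "motion": "curtains swaying, steam rising from tea"},
)
print(prompt)
```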

The Broadcast (Streaming): A React-based dashboard stitches these loops into a seamless, clock-synced live stream.
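The clock-sync logic can be sketched as follows (the timetable entries and clip IDs are hypothetical). Given a daily timetable, the dashboard picks the active clip for the current wall-clock time and an offset into its loop, so every viewer sees the same frame:

```python
# Minimal sketch of clock-synced scheduling. Each timetable entry is
# (start_hour, clip_id, loop_seconds), sorted by start hour.
def current_clip(timetable, seconds_since_midnight: int):
    active = timetable[0]
    for start_hour, clip_id, loop_s in timetable:
        if start_hour * 3600 <= seconds_since_midnight:
            active = (start_hour, clip_id, loop_s)  # latest block started
    start_hour, clip_id, loop_s = active
    # Offset into the loop, derived from wall-clock time -> deterministic sync
    offset = (seconds_since_midnight - start_hour * 3600) % loop_s
    return clip_id, offset

timetable = [(6, "yoga", 12), (9, "deep_work", 15), (21, "tea_ritual", 10)]
print(current_clip(timetable, 9 * 3600 + 47))  # ('deep_work', 2)
```

Because the offset is a pure function of the clock, a client that reconnects lands on exactly the frame it would have reached had it never left, which preserves the "live broadcast" illusion.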

Challenges we ran into

The "Uncanny Valley" of Manifestation: Early iterations felt too robotic. We had to implement specific "Micro-Movement" logic to ensure the AI captured a "living" feeling—like steam rising from tea or leaves rustling in the wind.

Character Persistence: Maintaining the same facial features and build across different activities (e.g., from a gym session to a library) required advanced seed management and consistent prompt-prefixing.
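A minimal sketch of that seed-management and prompt-prefixing idea. The identity prefix, character ID, and seed derivation are all illustrative assumptions, not the production scheme:

```python
import hashlib

# A fixed identity prefix prepended to every scene prompt, so the
# generator always sees the same "who" regardless of the activity.
IDENTITY_PREFIX = ("same character: athletic build, shoulder-length dark hair, "
                   "warm brown eyes, linen clothing")

def scene_request(character_id: str, scene: str) -> dict:
    # Derive a stable seed from the character identity: the same user
    # always generates with the same seed across activities.
    seed = int(hashlib.sha256(character_id.encode()).hexdigest(), 16) % (2**31)
    return {"seed": seed, "prompt": f"{IDENTITY_PREFIX}. {scene}"}

a = scene_request("user-42", "lifting weights in a sunlit gym")
b = scene_request("user-42", "reading in a quiet library")
print(a["seed"] == b["seed"])  # True
```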

Interaction Latency: Creating a natural conversation flow for the Voice Portal required optimizing the pipeline between Speech-to-Text (STT) and the LLM response to prevent breaking the "live" immersion.
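The core of that optimization can be sketched as follows: rather than waiting for the full recording, the pipeline consumes STT results chunk by chunk and fires the LLM call the moment an end-of-utterance marker appears. The chunk format and the stubbed `llm` callable are illustrative assumptions:

```python
# Toy sketch of early-firing: trigger the (stubbed) LLM response as soon
# as the STT stream signals end-of-utterance, not when the stream ends.
def respond_early(stt_chunks, llm):
    transcript = []
    for chunk in stt_chunks:
        transcript.append(chunk["text"])
        if chunk["end_of_utterance"]:
            return llm(" ".join(transcript))  # fire before the stream ends
    return llm(" ".join(transcript))          # fallback: stream exhausted

chunks = [
    {"text": "how do I", "end_of_utterance": False},
    {"text": "stay focused today?", "end_of_utterance": True},
    {"text": "(trailing silence)", "end_of_utterance": False},
]
reply = respond_early(chunks, llm=lambda t: f"Future you heard: {t}")
print(reply)
```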

Accomplishments that we're proud of

Seamless Loop Stitching: We successfully created a system where transitions between time blocks feel like a continuous life broadcast rather than a playlist of videos.

The Proactive Mirror: Developing the logic where the AI "future self" asks the user for advice—turning the user into a mentor for their own future—has proven to be a powerful psychological motivator.

Style Adaptability: Building a backend that can switch from a cozy "Lofi Girl" aesthetic to a hyper-realistic "Unreal Engine 5" look without losing the core character identity.

What we learned

The Power of Ambience: We learned that "soul" in AI video comes from the background—dust motes, lighting shifts, and environmental noise—rather than just the main character's actions.

Cognitive Dissonance as a Tool: We discovered that seeing a visual representation of who you want to be creates a productive "Manifestation Gap" $G$:

$$G = |S_{present} - S_{future}|$$

As the user interacts with the stream, their subconscious works to reduce $G$ to zero, aligning their current habits with the observed "Better Me."

What's next for Better Me

Dedicated Hardware: Transitioning from a web-based dashboard to a physical "Portal Display"—a dedicated, minimalist hardware device for the desk.

Community Manifestation: Allowing users to invite "Better Me" versions of their friends or mentors into their stream for collaborative sessions (e.g., a "Better Me" study group).

Dynamic Skill Evolution: Updating the loops in real-time as the user gains new skills, allowing the "Better Me" to grow alongside the user.
