Inspiration

Most AI tools today feel mechanical: you can tell you’re talking to something that has no real awareness of you, your context, or your emotions. The interaction is often purely functional, which makes it easy to disengage or forget what was said. We want to change that by making the experience feel more human, where the AI understands intent, tone, and continuity, so conversations feel natural instead of transactional.

At the same time, people spend too much time managing the mechanics of their own lives: organizing tasks, setting priorities, estimating deadlines, and keeping calendars up to date. This AI is designed to remove that friction by automatically structuring what you say into actionable tasks and filling in missing details like time, urgency, and importance. It's not just about saving time; it's about reducing mental load so users can focus on what actually matters.

What it does

This is a voice-first personal AI assistant that helps you manage your time and daily life through natural conversation. Instead of manually creating tasks or updating your calendar, you can simply speak or type something like "meet John tomorrow" or "remind me to finish the report this week," and the system automatically turns it into structured, actionable items.

It tracks todos, infers missing details like priority, deadlines, and timing, and organizes everything into a personalized schedule. It also remembers people you mention and keeps important context about them, and it can summarize any links you share into useful, saved insights. Over time, it builds a connected view of your tasks, relationships, and information so everything feels organized without you having to manage it manually.

How we built it

We split the system across three people so we could build in parallel as one team. One person focused on the overall architecture and the memory module, which stores and retrieves people-related context like names, relationships, and preferences so the AI can remember important details over time and build continuity across conversations. Another handled the todo system, which turns natural language into structured tasks with inferred deadlines, priority, and scheduling to act like a smart, auto-managed calendar. The third worked on voice and link summarization, handling speech-to-text, text-to-speech, and generating concise summaries from URLs that can also be converted into actionable items. Once each part was ready, we combined them through a central orchestrator that routes user input to the right module and connects everything into a single voice-driven web application.
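The orchestrator's routing step can be sketched like this. The module names and keyword heuristics are illustrative assumptions; the actual orchestrator works with richer signals than keyword matching.

```python
import re

def route(utterance: str) -> str:
    """Pick which module should handle the input (orchestrator sketch)."""
    text = utterance.strip()
    if re.search(r"https?://\S+", text):
        return "link_summarizer"      # any URL -> summarize and save it
    if re.search(r"\b(remind me|todo|deadline|meet|finish|schedule)\b", text, re.I):
        return "todo"                 # task-like phrasing -> todo system
    if re.search(r"\b(who is|remember that)\b", text, re.I):
        return "memory"               # people/context -> memory module
    return "chat"                     # fall through to open conversation

print(route("remind me to finish the report this week"))   # todo
print(route("summarize https://example.com/article"))      # link_summarizer
```

Keeping routing in one place means each module only has to understand its own slice of the input, which is what let the three of us build in parallel.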

Challenges we ran into

The main challenge was that most of the system was new to us: we're all relatively new to machine learning and to building AI-driven systems. As a result, it was difficult to clearly define individual responsibilities and to translate the overall architecture into concrete, implementable components during development.

Another key challenge was integration. Even with a shared architecture and agreed-upon protocols, combining all the modules into a cohesive system remained complex. While each component (memory, todo management, link processing, and voice) could function independently, making them work seamlessly together through the orchestrator without breaking data flow or consistency was the most difficult part of the build.

Accomplishments that we're proud of

We have worked as a strong and collaborative team, communicating effectively and helping each other learn unfamiliar concepts along the way. Despite being new to many parts of the system, we were able to support one another and steadily build understanding as we progressed.

We are proud that a project we initially thought was too complex to complete is now running successfully. Even more importantly, we are proud that we were able to deliver it in a short period of time while still maintaining a clear architecture and working system.

What we learned

We learned a wide range of technical and collaborative skills throughout this project. On the technical side, we gained hands-on experience with building AI-powered systems end-to-end, including how to design system architecture, structure modular components, and integrate multiple AI capabilities into a single working product.

We also learned how to use AI tools effectively, adopting practices like spec-driven development and applying AI not just to coding but also to debugging, analysis, and understanding complex concepts. Beyond the technical aspects, the project helped us improve our teamwork, communication, and system-level thinking, especially when coordinating different modules and aligning on a shared architecture.

What's next for EmoNote

  1. Authentication: Right now everything runs under default_user. Plugging in Clerk (already in the tech stack plan) would make EmoNote multi-user instantly; each person would get their own isolated memory, graph, and task list.
  2. Calendar Integration: Sync tasks with due dates directly to Google Calendar or Apple Calendar. EmoNote becomes the capture layer; your calendar becomes the execution layer.
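For the calendar step, a due-dated task maps naturally onto an all-day event. This sketch builds the request body in the shape the Google Calendar API's events.insert endpoint expects (all-day events use date fields, with an exclusive end date); the task shape is an assumption, and real syncing would also need OAuth and an API client.

```python
from datetime import date, timedelta

def task_to_gcal_event(title: str, due: date) -> dict:
    """Map a due-dated task to an all-day Google Calendar event body.

    Mirrors the `events.insert` request body: all-day events use
    `date` (not `dateTime`), and the end date is exclusive.
    """
    return {
        "summary": title,
        "start": {"date": due.isoformat()},
        "end": {"date": (due + timedelta(days=1)).isoformat()},
    }

event = task_to_gcal_event("finish the report", date(2024, 5, 10))
print(event["start"]["date"])  # 2024-05-10
```

This keeps EmoNote as the capture layer: it only has to emit this payload, and the calendar handles reminders and display.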

Built With
