Inspiration
In today's fast-paced digital world, it's easy to get lost in work and forget to take breaks, leading to burnout and eye strain. I've noticed that most screen time trackers are simple, "dumb" timers that send the same annoying notification every 30 minutes. This inspired me to create Aura, a smart wellness assistant that's personalized to you. My goal was to build a tool that understands your work habits and provides genuinely helpful, varied advice, acting more like a caring wellness coach than a simple alarm clock.
What it does
Aura is a smart wellness assistant that helps you maintain a healthy work-life balance. Unlike simple timers, Aura is personalized: when you first install it, it asks for your basic preferences (such as age, activity level, and work hours). Using this profile, it does three things:

- Personalized AI budgets: Every time you open a new browser window, Aura invisibly calls the built-in AI (LanguageModel) to set a unique, personalized screen time budget (e.g., 90 or 120 minutes) for that specific window, based on your profile.
- Smart notifications: Aura's background script tracks the cumulative time you spend across all tabs in that window. When you exceed your personalized budget, it triggers a notification.
- Varied wellness tips: The notification you receive isn't the same boring message every time. Aura calls the AI Writer API to generate a fresh, unique, and encouraging wellness tip each time, suggesting stretches, hydration, or quick mental breaks.

It's a wellness coach that adapts to you, running silently in the background to deliver helpful nudges exactly when you need them.
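The per-window budget check described above can be sketched as pure logic. Only `overAllTimeSpent` mirrors a property name from the extension; the function names and in-memory objects standing in for chrome.storage are illustrative assumptions:

```javascript
// Illustrative sketch of Aura's per-window budget tracking.
// `usage` and `budgets` stand in for data Aura keeps in chrome.storage.

// Record time spent in a window, in minutes, and return the running total.
function recordTime(usage, windowId, minutes) {
  const entry = usage[windowId] || { overAllTimeSpent: 0 };
  entry.overAllTimeSpent += minutes;
  usage[windowId] = entry;
  return entry.overAllTimeSpent;
}

// Has this window exceeded its AI-assigned budget?
function isOverBudget(usage, budgets, windowId) {
  const spent = (usage[windowId] || { overAllTimeSpent: 0 }).overAllTimeSpent;
  return spent >= (budgets[windowId] ?? Infinity);
}

// Example: a window with a 90-minute budget.
const usage = {};
const budgets = { 101: 90 };
recordTime(usage, 101, 50);
recordTime(usage, 101, 45); // cumulative: 95 minutes
console.log(isOverBudget(usage, budgets, 101)); // true
```

A window with no budget yet is never "over budget" (the `?? Infinity` fallback), which matches the idea that notifications only fire once the AI has assigned a budget.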
How we built it
Aura is built as a Chrome Extension (Manifest V3) using plain HTML/CSS for the UI and asynchronous JavaScript for the core logic. The biggest challenge was correctly using the new built-in AI APIs, which required a specific three-part architecture:

UI (welcome.js): A simple HTML page handles the one-time setup. It saves the user's preferences (userInput) to chrome.storage and, most importantly, handles the one-time, user-activated download of the LanguageModel (Gemini Nano) if it isn't already installed.

Core Brain (background.js): This is the central nervous system. It uses chrome.windows.onCreated to detect every new window, chrome.tabs.onActivated to track the time spent on each tab (saved to chrome.storage using overAllTimeSpent and windowId properties), and chrome.windows.onRemoved to automatically clean up storage by deleting data for closed windows. When a time budget is exceeded, it receives the final AI-generated tip and calls chrome.notifications.create() to display it.

AI Helper (offscreen.js): This was the solution to a major technical hurdle. The background.js script cannot call LanguageModel (or window.ai) directly because it has no window object. The Offscreen Document acts as an invisible, temporary helper page:

- When a new window is created, background.js gets the userInput from storage and sends a message to the offscreen script.
- offscreen.js receives the message, calls the Prompt API (LanguageModel.prompt()) to calculate the budget, and sends the budget back to background.js to be saved.
- When the budget is exceeded, background.js sends another message; offscreen.js then calls the Writer API (LanguageModel.prompt() with a high temperature) to generate a unique wellness tip, which it sends back to background.js.
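A Manifest V3 setup along these lines wires the pieces together. The key names below are real Chrome manifest keys, but the file contents are an assumption sketched from the description, not Aura's actual manifest:

```json
{
  "manifest_version": 3,
  "name": "Aura",
  "version": "1.0",
  "background": { "service_worker": "background.js" },
  "permissions": ["storage", "tabs", "notifications", "offscreen"],
  "options_page": "welcome.html"
}
```

The "offscreen" permission is what allows background.js to create the invisible helper page, and "storage" plus "notifications" cover the time tracking and wellness nudges.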
Challenges we ran into
This was my first time working with built-in, on-device AI, and the learning curve was steep.

The Offscreen Document: The biggest challenge was the "window is not defined" error. I learned that a background script can't call LanguageModel and that an Offscreen Document is the only way around this. I then discovered that the Offscreen Document can't access chrome.storage, which forced me to learn the correct, more complex architecture: background.js (gets data) -> offscreen.js (calls AI with data) -> background.js (saves the result).

Experimental APIs: The API names kept changing (window.ai vs. LanguageModel vs. Writer). Debugging was confusing, and the documentation was sometimes out of sync with direct developer advice. I learned the importance of relying on the latest information from the source.

Debugging: Debugging a three-part system (background, offscreen, and welcome page) was hard. I learned to use console.log to trace the flow of messages and to temporarily disable window.close() in the offscreen script just to see its console.

Content Security Policy (CSP): I was blocked by CSP errors until I learned that all JavaScript, including my welcome page animations, must live in separate .js files and cannot be inline in the HTML.
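The background -> offscreen -> background round trip can be modeled with a toy message bus. Here chrome.runtime messaging and LanguageModel.prompt() are stubbed purely to illustrate the flow outside Chrome; the message shape (type, profile, budgetMinutes) is hypothetical:

```javascript
// Toy model of Aura's message flow; chrome.runtime.sendMessage is replaced
// by a minimal in-memory bus so the round trip can run anywhere.
const listeners = [];
const bus = {
  onMessage: (fn) => listeners.push(fn),
  sendMessage: (msg) =>
    new Promise((resolve) => listeners.forEach((fn) => fn(msg, resolve))),
};

// "offscreen.js": the only place with (simulated) access to the AI.
bus.onMessage((msg, reply) => {
  if (msg.type === "CALC_BUDGET") {
    // Stand-in for LanguageModel.prompt(); a fixed value for illustration.
    reply({ budgetMinutes: 90 });
  }
});

// "background.js": reads the profile, asks offscreen, saves the result.
async function onWindowCreated(userInput, store) {
  const { budgetMinutes } = await bus.sendMessage({
    type: "CALC_BUDGET",
    profile: userInput, // background fetched this from chrome.storage
  });
  store.budget = budgetMinutes; // background, not offscreen, writes storage
  return budgetMinutes;
}

const store = {};
onWindowCreated({ age: 30, workHours: 8 }, store).then((b) =>
  console.log("budget:", b)
);
```

The key point the stub preserves: storage access stays on the background side, and the offscreen side only transforms a message it receives, which is exactly the constraint that forced the three-step architecture.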
Accomplishments that we're proud of
I am incredibly proud of building a functional and useful extension using such cutting-edge (and challenging!) technology. The biggest accomplishment was designing and debugging the complex, three-part message-passing system between background.js, offscreen.js, and the UI pages. It was a difficult architecture to get right, but it's robust and respects all of Chrome's security limitations. I'm also proud of using the AI for two distinct tasks:

- Logic/calculation: using the Prompt API to get a personalized number (the budget).
- Creative writing: using the Writer API (with a high temperature) to generate unique, helpful wellness tips.
What we learned
This project was a deep dive into the real-world architecture of modern Chrome Extensions. I learned the distinct and separate roles of the Service Worker (background.js), Offscreen Documents, and UI pages, and that event listeners in the background (onCreated, onActivated) are the true engine of an extension. I mastered the "Read, Modify, Write" pattern for safely updating chrome.storage and the use of async/await inside listeners. Most importantly, I learned how to debug a complex, event-driven system with multiple "black boxes" by checking the console of each part (Service Worker, Offscreen page, and UI page) to trace the flow of communication.
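The "Read, Modify, Write" pattern mentioned above can be sketched against a minimal async storage stub. The real extension would call chrome.storage.local.get/set; the stub, the key format, and the helper name here are assumptions for illustration:

```javascript
// Minimal async key-value store standing in for chrome.storage.local.
const fakeStorage = (() => {
  const data = {};
  return {
    get: async (key) => ({ [key]: data[key] }),
    set: async (obj) => Object.assign(data, obj),
  };
})();

// Read-Modify-Write: fetch the current record, update it, write it back.
// In Aura this kind of update runs inside listeners like chrome.tabs.onActivated.
async function addTimeSpent(storage, windowId, minutes) {
  const key = `window_${windowId}`;
  const result = await storage.get(key);            // Read
  const record = result[key] ?? { overAllTimeSpent: 0 };
  record.overAllTimeSpent += minutes;               // Modify
  await storage.set({ [key]: record });             // Write
  return record.overAllTimeSpent;
}

addTimeSpent(fakeStorage, 7, 30)
  .then(() => addTimeSpent(fakeStorage, 7, 15))
  .then((total) => console.log(total)); // 45
```

Always reading before writing means a listener never clobbers a total written by an earlier event, which matters when several async listeners touch the same key.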
What's next for Aura
This hackathon was just the beginning. The deadline is tomorrow (October 31st), but I have a clear roadmap for Aura's future:

- Integrate more APIs: use the Translator API to automatically translate the wellness tips into the user's browser language.
- Clickable notifications: add a chrome.notifications.onClicked listener so that clicking the notification opens a new tab with more detailed stretches or wellness information.
- Polished UI: replace the simple welcome.html page with a full React application to create a more polished and professional onboarding and options experience, similar to the UI components I've designed.
- Publish: after more testing and refinement, publish Aura to the Chrome Web Store to help people all over the world improve their well-being.