Inspiration

I wanted a personal assistant that didn't force me to choose between intelligence and privacy. Most bots send everything to the cloud, but I wasn't comfortable with that for my personal data. I wanted to build a system that acts as a secure gatekeeper—keeping my private details on my own hardware while still leveraging the smartest models in the world for general knowledge.
What it does

J.A.V.E.I.R.S. is a privacy-first AI assistant. It intelligently routes my questions based on context. If I ask about personal details or private data, it processes the request strictly on my secure home servers. For general coding, creative writing, or web searches, it taps into a powerhouse of external APIs—Gemini, ChatGPT, Grok, and DeepSeek—to get the best answer available.
How we built it

The Backbone: The core logic runs on my custom home servers, ensuring I have total physical control over the data flow.
Frontend & Backend: Built with HTML, CSS, and JavaScript on the frontend, with Node.js and Python powering the backend, for a responsive and flexible interface.
The "Router": We built a logic layer that classifies prompts. "Private" prompts stay local, while "Public" prompts are dispatched to the external model best suited for the job (Gemini, ChatGPT, Grok, or DeepSeek).
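The submission doesn't detail how the router classifies prompts, but the idea can be illustrated with a minimal sketch. Here the classifier is a simple keyword heuristic, and the marker list, provider names, and task-based dispatch rules are all illustrative assumptions, not the project's actual logic:

```python
# Minimal sketch of a privacy router: keyword heuristics decide whether a
# prompt stays on the local server or is dispatched to an external provider.
# PRIVATE_MARKERS and the dispatch rules are illustrative assumptions.

PRIVATE_MARKERS = {"my calendar", "my address", "password", "medical", "bank"}

def classify(prompt: str) -> str:
    """Return 'private' if the prompt looks like personal data, else 'public'."""
    text = prompt.lower()
    if any(marker in text for marker in PRIVATE_MARKERS):
        return "private"
    return "public"

def route(prompt: str) -> str:
    """Pick a destination: the local server for private prompts, an external model otherwise."""
    if classify(prompt) == "private":
        return "local"
    # Very rough task-based dispatch; a real router could score each provider.
    if "code" in prompt.lower():
        return "deepseek"
    return "gemini"

print(route("What's on my calendar tomorrow?"))  # local
print(route("Write code to reverse a list"))     # deepseek
```

A production version would likely replace the keyword list with a small local classification model, so that the classification step itself never leaves the home server.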
Challenges we ran into

The hardest part was orchestrating the "handshake" between my private server and the external giants. We had to write logic to ensure that no sensitive personal data accidentally leaked out to the public APIs (Gemini/ChatGPT/Grok/DeepSeek). Managing the different API structures and response formats of four major AI providers was also a complex integration puzzle.
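One way to sketch the leak-prevention step described above is an outbound guard that scans a "public" prompt for personal-data patterns before it leaves for an external API. The regexes and placeholder labels here are illustrative assumptions, not the project's actual rules:

```python
import re

# Sketch of an outbound guard: before a "public" prompt is sent to an
# external API, redact spans that look like personal data and report
# whether any were found. The patterns below are illustrative examples.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str) -> tuple[str, bool]:
    """Replace PII-looking spans with placeholders; report whether any matched."""
    found = False
    for label, pattern in PII_PATTERNS.items():
        prompt, n = pattern.subn(f"[{label}]", prompt)
        found = found or n > 0
    return prompt, found

clean, leaked = redact("Email me at alice@example.com about the demo")
print(clean)   # Email me at [email] about the demo
print(leaked)  # True
```

A stricter policy could reroute any prompt that triggers the guard back to the local server instead of redacting it, trading capability for safety.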
Accomplishments that we're proud of

I'm proud that we achieved a "best of both worlds" setup. I have the security of a fully local, isolated system for my private life, but the raw power of the world's best LLMs for everything else. Successfully balancing requests across multiple top-tier models like Grok and DeepSeek gave us a real edge in answer quality.
What we learned

I learned that privacy doesn't have to mean sacrificing power. I also gained a deep understanding of how different LLMs "think" differently—learning when to use Grok versus Gemini versus ChatGPT has made me a much better prompt engineer and system architect.
What's next for J.A.V.E.I.R.S.

I plan to refine the auto-detection algorithm so it's even faster at deciding which server handles which request. I also want to expand the home server capabilities to handle more complex local automation tasks.