Taste-Based AI Assistant
This project is a web application that provides personalized recommendations for movies and music through a conversational AI interface. It features three distinct AI agents: a Movie Agent, a Spotify Agent, and a Couple Movie Agent, each tailored to specific use cases. The application is built with Django and utilizes LangChain, Google's Gemini models, and the Qloo API for taste-based and context-aware recommendations.
Inspiration
The inspiration behind the Taste-Based AI Assistant was to create a more intuitive and personalized way for users to discover new movies and music. Traditional recommendation systems often rely on generic algorithms, but we wanted to build a system that understands individual tastes on a deeper level. By leveraging conversational AI and incorporating contextual data like user location, we aimed to provide a more engaging and intelligent experience—offering recommendations that are not only personalized but also relevant to the user’s current environment.
What it does
The Taste-Based AI Assistant offers three distinct agents to cater to different needs:
Movie Agent: Provides personalized movie recommendations based on a user's Emby watched history and current geographic location. It can filter by genre, language, and mood, analyze past viewing patterns, and even tailor suggestions to local content popularity or region-specific trends.
Spotify Agent: Offers music recommendations by analyzing a user's Spotify listening habits, including playlists and liked songs. It uses the Qloo API to provide in-depth insights into musical preferences.
Couple Movie Agent: Designed for two users, this agent recommends movies that cater to both of their tastes by analyzing their combined viewing histories, enabling shared entertainment experiences.
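The three-agent split described above can be pictured as a simple dispatcher that routes each chat request to the right specialist. This is only an illustrative sketch; the class and function names (`MovieAgent`, `route`, etc.) are assumptions, not the project's actual code:

```python
# Hypothetical sketch of routing a request to one of three agents.
# All names here are illustrative, not the project's real classes.
from dataclasses import dataclass


@dataclass
class Recommendation:
    title: str
    reason: str


class MovieAgent:
    def recommend(self, query, context):
        # Would combine Emby watch history with the user's location.
        return Recommendation("(example)", f"movie pick for {context.get('city')}")


class SpotifyAgent:
    def recommend(self, query, context):
        # Would analyze playlists and liked songs via the Qloo API.
        return Recommendation("(example)", "music pick from listening habits")


class CoupleMovieAgent:
    def recommend(self, query, context):
        # Would intersect two users' viewing histories.
        return Recommendation("(example)", "pick suited to both viewers")


AGENTS = {
    "movie": MovieAgent(),
    "music": SpotifyAgent(),
    "couple": CoupleMovieAgent(),
}


def route(agent_name, query, context):
    """Dispatch a chat query to the selected agent."""
    return AGENTS[agent_name].recommend(query, context)
```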
How we built it
The application was built using the following technologies:
- Backend: Python, Django
- AI/LLM: LangChain, Google Generative AI (gemini-2.0-flash-lite)
- APIs: Qloo API, Emby API, Spotify API, Location Services (IP-based or device GPS)
- Frontend: HTML, CSS, JavaScript
- Hardware: Raspberry Pi 4
- Deployment and Cloud: Cloudflare Tunnel
We designed the architecture to be modular, with each AI agent operating as a separate component. This allowed us to develop and test each agent independently before integrating them into the main Django application. Location data is captured securely and integrated into the movie recommendation logic to provide dynamic, context-aware suggestions.
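The location-aware step described above amounts to folding contextual data into the prompt given to the model. A minimal sketch, assuming hypothetical helper names (this is not the project's actual prompt template):

```python
# Illustrative only: how watch history and location might be merged
# into a single recommendation prompt for the LLM.
def build_prompt(user_query, watched_titles, city=None):
    """Assemble a context-aware recommendation prompt."""
    parts = [f"User request: {user_query}"]
    if watched_titles:
        # Keep the prompt short by sampling recent history.
        parts.append("Recently watched: " + ", ".join(watched_titles[:5]))
    if city:
        parts.append(
            f"The user is currently in {city}; "
            "consider locally popular or region-specific titles."
        )
    parts.append("Recommend three movies, each with a one-line reason.")
    return "\n".join(parts)
```

Because each agent was developed independently, a helper like this can be unit-tested without touching Django or any external API.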
Project Structure
```
/
├── appfront/         # Django project for the frontend
├── chat/             # Django app for chat functionality
├── couple_agent/     # AI agent for couple movie recommendations
├── movie_agent/      # AI agent for movie recommendations
├── spotify_agent/    # AI agent for music recommendations
├── manage.py         # Django management script
├── requirements.txt  # Python dependencies
└── README.md
```
Challenges we ran into
Throughout the development process, we encountered several challenges:
- API Integration: Integrating multiple external APIs (Qloo, Emby, Spotify) required careful handling of authentication, rate limiting, and data parsing.
- Context Awareness: Incorporating real-time location data into the recommendation engine added complexity but significantly enhanced relevance.
- Data Privacy: Protecting user data pulled from Emby and Spotify was essential, so we implemented secure data handling practices throughout the pipeline.
- Agent Development: Creating AI agents that could accurately interpret user queries and provide relevant, contextual recommendations was a complex task that required iterative prompt engineering and fine-tuning.
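Rate limiting across Qloo, Emby, and Spotify was one of the concrete pain points above. One common way to handle it is a retry wrapper with exponential backoff; the sketch below is a generic illustration (the exception and decorator names are assumptions, not code from this project):

```python
# Generic retry-with-backoff pattern for rate-limited API calls.
# Names below are illustrative, not the project's actual helpers.
import time
from functools import wraps


class RateLimitError(Exception):
    """Raised when an upstream API (e.g. Qloo, Emby, Spotify) returns HTTP 429."""


def with_retry(retries=3, backoff=0.5):
    """Retry a rate-limited call, doubling the wait after each failure."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except RateLimitError:
                    if attempt == retries - 1:
                        raise  # out of attempts; surface the error
                    time.sleep(backoff * (2 ** attempt))
        return wrapper
    return decorator
```

A decorated client method then retries transparently, which keeps the agent code free of backoff bookkeeping.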
Accomplishments that we're proud of
- Context-Aware Recommendations: We successfully combined taste-based intelligence with location data to offer suggestions that match the user's current surroundings.
- Three Distinct AI Agents: Specialized agents for movies, music, and couple viewing provide a more personalized recommendation experience.
- Seamless API Integration: Unified multiple data sources into a coherent recommendation system.
- Conversational Interface: Built a smooth, interactive experience where users feel like they’re chatting with a knowledgeable assistant.
What we learned
This project provided us with valuable experience in several areas:
- Large Language Models: We learned how to build, refine, and prompt AI agents using LangChain and Gemini.
- Context-Aware Systems: Gained insights into how real-world context can elevate recommendation quality.
- Security & Privacy: Understood the critical importance of handling sensitive data like user history with care.
What's next for Taste-Based AI Assistant
- Expanded Recommendation Categories: We plan to extend recommendations to new domains such as e-commerce and travel.
- Smarter AI Agents: Continue improving recommendations using real-time behavior, emotional cues, and cross-domain insights.
- More Personalization Factors: Use additional signals like time of day, device type, and mood indicators.
- UI Enhancements: Further polish the interface for accessibility and visual appeal.