Inspiration
The inspiration for R.E.N. stemmed from our fascination with creating a chatbot that not only responds to user queries but also adapts its responses based on the user's emotional state and previous queries. We were motivated by the idea of building a more empathetic, human-like conversational agent that can understand and respond to the user's emotions effectively. That is why we developed R.E.N., our own custom large language model tailored to those seeking mental health support.
What it does
R.E.N. is a custom Llama 2-based chatbot tailored to those seeking mental health support. Equipped with speech recognition, text-to-speech synthesis, and real-time emotion detection, it engages users in natural conversation, responding to their queries and providing assistance while also analyzing their facial expressions to determine their emotional state. Based on the detected emotion, R.E.N. tailors its responses to be more empathetic and relevant, fostering a deeper connection and understanding with the user.
How we built it
We built R.E.N. using various Python libraries. The core functionalities include:
- Utilizing the SpeechRecognition library for capturing audio input and converting it into text.
- Employing the pyttsx3 library for converting the chatbot's responses into speech, enabling natural conversation with the user.
- Integrating the OpenCV and DeepFace libraries for real-time analysis of facial expressions to determine the user's emotional state.
- Using the Ollama library to build and serve our custom large language model, designed specifically for those in need of mental health assistance.
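One way the pieces above can fit together is to fold the detected emotion into the prompt sent to the language model. The sketch below shows the idea in plain Python; the `build_prompt` helper, the tone map, and the prompt wording are our illustrative assumptions, not R.E.N.'s exact prompt (the emotion labels match DeepFace's defaults: angry, disgust, fear, happy, sad, surprise, neutral):

```python
# Sketch: steer the LLM's tone using the emotion detected from the camera.
# The tone map and prompt wording are illustrative assumptions.
EMOTION_TONE = {
    "sad": "Respond gently and offer reassurance.",
    "angry": "Stay calm, validate the frustration, and de-escalate.",
    "fear": "Be soothing and grounding.",
    "happy": "Match the upbeat mood while staying supportive.",
    "neutral": "Respond warmly and conversationally.",
}

def build_prompt(user_text: str, emotion: str) -> str:
    """Combine the transcribed speech and detected emotion into one prompt."""
    tone = EMOTION_TONE.get(emotion, EMOTION_TONE["neutral"])
    return (
        "You are R.E.N., an empathetic mental-health companion. "
        f"The user appears to be feeling {emotion}. {tone}\n"
        f"User: {user_text}"
    )
```

A string like this could then be passed to the locally served model (for example via the Ollama Python client) and the reply handed to the text-to-speech layer.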
Challenges we ran into
During the development of R.E.N., we encountered several challenges, including:
- Our custom R.E.N. LLM: integrating our model with the speech-to-text pipeline was difficult to set up at first, especially when paired with local storage.
- Real-time emotion analysis: implementing real-time emotion detection without sacrificing performance or responsiveness required optimization and efficient processing techniques.
- Balancing user experience: Striking a balance between the conversational style of the chatbot and the inclusion of emotion detection without overwhelming the user posed a significant challenge in interface design.
Accomplishments that we're proud of
Despite the challenges, we're proud to have achieved the following milestones with R.E.N.:
- Successfully integrating various technologies to create a sophisticated offline chatbot capable of real-time emotion detection and response.
- Developing a robust and efficient architecture that enables seamless interaction and adapts to the user's emotional state in real-time.
What we learned
Through the development of R.E.N., we gained valuable insights into:
- Speech recognition and text-to-speech technologies and their integration into conversational agents.
- Emotion and face detection libraries, such as OpenCV and DeepFace, and their application in enhancing user experience and interaction.
- Techniques for managing real-time data streams and optimizing performance for interactive applications.
What's next for R.E.N.
Moving forward, we plan to:
- Further refine the emotion detection algorithms to improve accuracy and sensitivity in detecting subtle emotional cues.
- Expand R.E.N.'s functionality by incorporating additional features such as context-aware responses and multi-turn dialogue management.
- Implement mechanisms to solicit and utilize user feedback to continuously improve R.E.N.'s conversational abilities and emotional intelligence.
- Explore integration with smart home devices to expand R.E.N.'s capabilities and enable more seamless interactions in those environments.
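The multi-turn dialogue management mentioned above could start from something as simple as a rolling conversation history that is replayed to the model on each turn. The sketch below is a minimal version of that idea; the turn limit and the chat-style message format are our illustrative assumptions:

```python
class DialogueHistory:
    """Keep the most recent exchanges so each prompt carries context.
    The turn limit and message format are illustrative assumptions."""

    def __init__(self, max_turns: int = 6):
        self.max_turns = max_turns
        self.messages = []  # chat-style dicts: {"role": ..., "content": ...}

    def add(self, role: str, content: str):
        self.messages.append({"role": role, "content": content})
        # Trim to the last `max_turns` user/assistant pairs so the
        # prompt stays small enough for fast local inference.
        self.messages = self.messages[-2 * self.max_turns:]

    def as_context(self) -> list:
        """Return a copy suitable for a chat-style LLM `messages` argument."""
        return list(self.messages)
```

A list in this shape could be passed as the message history to a chat-style model call, so R.E.N. can refer back to what the user said earlier in the session.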