Introduction

In contemporary society, loneliness is an increasingly prevalent issue, as people find themselves isolated despite being more connected than ever through digital means. Our project seeks to mitigate this sense of solitude by giving voice and personality to the inanimate objects that surround us daily. By creating meaningful, emotionally charged interactions with these objects, we foster a sense of companionship and comfort, making our personal spaces feel more alive and responsive to our emotional needs.

Inspiration

In a world brimming with cutting-edge AI technologies, most intelligent devices are purpose-built, necessitating the acquisition of new hardware. Our project challenges this norm by asking, "Why not imbue existing everyday objects with intelligence?" This approach leverages the familiarity and ubiquity of household items, transforming them into interactive partners through AI, without the need for specialized new devices.

What it does

Users first pair the Traitify hardware with the smartphone app. After pairing succeeds, users photograph the object they want to endow with a personality and voice and upload the image to the app, then attach the hardware to the chosen object. The app generates a personality and voice for the object, and users can begin conversing with it. Users can also adjust the object's personality and voice, and manage their library of objects that have already been brought to life.
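The object library described above could be backed by a very simple data model. The sketch below is purely illustrative; `TraitifiedObject`, `ObjectLibrary`, and their fields are hypothetical names for this write-up, not our actual app code:

```python
from dataclasses import dataclass

@dataclass
class TraitifiedObject:
    """One entry in the user's library (hypothetical schema)."""
    object_id: str
    photo_path: str     # image the user uploaded for personality generation
    personality: dict   # generated by the app, adjustable by the user
    voice: str = "default"

class ObjectLibrary:
    """Manages objects that have already been given a personality."""

    def __init__(self):
        self._objects: dict[str, TraitifiedObject] = {}

    def add(self, obj: TraitifiedObject) -> None:
        # Called after the app finishes generating a personality.
        self._objects[obj.object_id] = obj

    def adjust(self, object_id: str, **changes) -> None:
        # Lets the user tweak personality or voice after creation.
        obj = self._objects[object_id]
        for key, value in changes.items():
            setattr(obj, key, value)

    def all(self) -> list[TraitifiedObject]:
        return list(self._objects.values())

# Example: register a lamp, then soften its voice.
library = ObjectLibrary()
library.add(TraitifiedObject("lamp-1", "lamp.jpg", {"mbti": "INTJ"}))
library.adjust("lamp-1", voice="soft")
```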

How we built it

Thanks to the efforts of the open-source community, we were able to develop a basic prototype with Flowise, an open-source, low-code tool built on the LangChain framework that lets developers assemble customized LLM orchestration flows and AI agents. Using its drag-and-drop interface, we built a conversational agent with two LLM chains. The first uses Gemini's multimodal capabilities to analyze images of objects provided by users and generates personalities for those objects based on MBTI personality theory; an output-parser node at the end of the chain standardizes the Personality Modeling LLM's output as JSON. This data is stored in the backend, where users can later customize the personality and add stickers. The second chain, a conversation chain, takes the generated personality as input to construct a chatbot that becomes your personal assistant.
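Outside of Flowise's visual editor, the two-chain pipeline can be sketched in plain Python. Everything below is a simplified, hypothetical stand-in: the Gemini vision call is replaced by a canned JSON reply, and the field names (`name`, `mbti`, `traits`, `greeting`) are illustrative, not our actual schema:

```python
import json
from dataclasses import dataclass

# Hypothetical record produced by the Personality Modeling chain.
@dataclass
class Personality:
    name: str
    mbti: str          # e.g. "ENFP"
    traits: list[str]
    greeting: str

# All 16 MBTI types, for validating the model's output.
VALID_MBTI = {a + b + c + d for a in "EI" for b in "SN" for c in "TF" for d in "JP"}

def parse_personality(raw: str) -> Personality:
    """Mimics the output-parser node: validate the LLM's JSON reply."""
    data = json.loads(raw)
    if data["mbti"] not in VALID_MBTI:
        raise ValueError(f"not a valid MBTI type: {data['mbti']}")
    return Personality(
        name=data["name"],
        mbti=data["mbti"],
        traits=list(data["traits"]),
        greeting=data["greeting"],
    )

def system_prompt(p: Personality) -> str:
    """Feed the stored personality into the second (conversation) chain."""
    return (
        f"You are {p.name}, an everyday object brought to life. "
        f"Your MBTI type is {p.mbti}; you are {', '.join(p.traits)}. "
        f"Stay in character and open with: {p.greeting}"
    )

# Example reply the vision chain might return for a photo of a mug:
raw = ('{"name": "Mugsy", "mbti": "ENFP", '
       '"traits": ["warm", "chatty"], "greeting": "Morning! Coffee time?"}')
print(system_prompt(parse_personality(raw)))
```

The parser step matters because LLM output is free-form text: validating it against a fixed JSON schema before writing to the backend keeps the chatbot chain's input predictable.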

Accomplishments that we're proud of

  • Enhanced Daily Interaction With Hardware & Software: By giving personality to everyday objects, our project turns mundane interactions into engaging, emotional experiences. This can transform routine tasks into moments of joy and companionship.
  • Accessibility and Inclusivity: Instead of requiring new, often expensive hardware, our approach uses existing objects in the user's environment. This makes our technology more accessible and affordable, broadening its impact.
  • Emotional Support: By creating objects that can interact in a human-like manner, our project provides emotional support and companionship, helping to alleviate feelings of loneliness. This is especially valuable in today's world where many people, including the elderly and remote workers, spend significant amounts of time alone.
  • Sustainability: Leveraging existing objects reduces waste and the environmental impact associated with producing new electronic devices. This sustainability angle could appeal to environmentally conscious consumers.
  • Customization and Personalization: The ability to customize the personality traits of objects allows users to truly personalize their experience and form deeper connections with their surroundings.
  • Innovation and Creativity: Our project pushes the boundaries of traditional AI applications, fostering creativity in technology use and setting a precedent for future innovations in how AI integrates into everyday life. Additionally, during the development process, we fully utilized resources from the open-source community, allowing us to build the prototype at a very low cost and explore the possibilities of constructing an LLM chain with low code.

Challenges we ran into and what we learned

  • Cross-Time Zone and Cross-Background Collaboration: One of our key challenges was collaborating across different time zones and professional backgrounds. To manage this, we adopted flexible communication tools that allowed team members to work effectively from various locations. We also emphasized respect for different perspectives to ensure a harmonious and productive work environment.
  • Creating Seamless Hardware-Software Interaction: Designing a seamless integration between hardware and software was a major technical challenge. We focused on user-friendly designs and iterative testing, applying user feedback to refine and improve the integration. This approach helped us create a more intuitive and responsive experience for users.
  • Lack of LLM Development Experience on the Team: Our team initially lacked experience with large language models (LLMs). To overcome this, we engaged in self-study and sought advice from experts, which quickly brought us up to speed and built a foundation for ongoing learning and development.

What's next for Traitify

  • Refinement of Hardware Design: Our current hardware design is primarily focused on external appearance and structure, but the internal functionalities are not yet fully implemented. We need additional time to iterate on the hardware design to figure out how to fit the necessary modules into such a compact space. We also plan to explore the feasibility of Bluetooth positioning technology in real-world scenarios.
  • Chatbot Development: Currently, our chatbot can only be used locally. In the future, we plan to deploy it through cloud servers, enabling its use anywhere.
  • Voice Generation: Ideally, we would generate a character's voice and intonation directly from an image. However, we have not yet found an open-source tool that fits our needs, so we will continue to explore the feasibility of this concept and actively seek technical support.
  • Connect with Google Products: We aim to continue exploring how our system can integrate with the existing smart home ecosystem, such as Google Home, to create a more seamless user experience.

Built With

  • apis
  • figma
  • flowise
  • gemini
  • langchain