Inspiration

GenTalk AI is a cloud-based chatbot application that leverages Google Cloud’s Vertex AI and the PaLM 2 language model to enable intelligent, real-time conversations. Built using Flask for the backend and Streamlit for the frontend, the project demonstrates how generative AI can be integrated into a simple, user-friendly web app. GenTalk AI serves as a practical example of deploying large language models (LLMs) for tasks like educational support, creative writing, or personal assistance.

What it does

GenTalk AI allows users to have real-time conversations with an AI chatbot powered by Google Cloud's PaLM 2 model via Vertex AI. Users can type any question or message into a web interface, and the chatbot responds intelligently using natural language understanding and generation. The app handles API requests in the backend (Flask) and displays responses in an easy-to-use frontend (Streamlit). It's ideal for exploring generative AI, building assistants, or testing prompt engineering.

How we built it

We started by creating a project on Google Cloud Platform and enabled the Vertex AI and Generative Language APIs (PaLM 2). Using the API key, we built a simple Flask backend that sends user messages to the PaLM 2 model and receives AI-generated responses. For the frontend, we used Streamlit to create an interactive chat interface where users can type messages and see replies instantly. The backend and frontend communicate via REST API calls. The project was tested locally and can be deployed on cloud services like Cloud Run for scalability.

Challenges we ran into

  1. Understanding the PaLM 2 API: Figuring out the correct request format and response handling for the Google Generative Language API took time, owing to limited documentation for the newly released features.
  2. Prompt Engineering: Crafting prompts that generate relevant, coherent, and safe AI responses required multiple iterations and testing.
  3. Latency Issues: Managing response times to ensure smooth real-time chatting involved optimizing backend calls and handling network delays.
  4. Deployment Setup: Configuring secure API key management and deploying the Flask app to cloud services like Cloud Run posed some initial setup challenges.
  5. Frontend-Backend Sync: Ensuring seamless communication between the Streamlit frontend and the Flask backend required careful API design and error handling.

Accomplishments that we're proud of
  1. Successfully integrated Google Cloud’s cutting-edge PaLM 2 generative AI model into a functional chatbot with real-time conversation ability.
  2. Built a clean, responsive UI using Streamlit that makes AI accessible and easy to use for anyone.
  3. Developed a scalable backend with Flask that securely handles API requests and responses.
  4. Gained hands-on experience working with cloud AI services, REST APIs, and prompt engineering.
  5. Created a project that is easy to extend and deploy, opening doors for further AI-powered applications and learning.
  6. Demonstrated practical use of generative AI for education, support, and creative tasks in a lightweight web app.

What we learned
  1. How to effectively use Google Cloud’s Vertex AI and the PaLM 2 API for generative AI tasks.
  2. The importance of prompt engineering in guiding AI responses and improving conversation quality.
  3. Building a RESTful backend with Flask that communicates securely with cloud APIs.
  4. Creating an intuitive, responsive frontend using Streamlit for rapid prototyping.
  5. Challenges and best practices in API authentication, latency management, and deployment on cloud platforms.
  6. The practical potential of generative AI models to power real-world applications in education, creativity, and support.

What's next for A Generative Chatbot using Google Cloud
  1. Enhance conversation context by implementing memory, allowing the chatbot to remember previous messages for more natural interactions.
  2. Integrate voice input and output to make the chatbot accessible via speech.
  3. Add multi-language support to reach a wider audience.
  4. Deploy the app on Google Cloud Run or Firebase Hosting for scalable, worldwide access.
  5. Implement user authentication and chat-history storage using Firebase or Firestore.
  6. Experiment with fine-tuning prompts or explore other Google GenAI models, such as Gemini, for improved response quality.
  7. Explore visual or multimodal AI capabilities so users can upload images or receive image-based responses.
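The conversation-memory idea above could be sketched as a rolling buffer of recent turns that is flattened into each new prompt. This is a hypothetical helper (names and window size are assumptions), independent of any specific model API:

```python
# Illustrative conversation memory: keep a rolling window of recent
# turns and prepend them to each new prompt so the model sees context.
from collections import deque

class ConversationMemory:
    def __init__(self, max_turns: int = 10):
        # Each turn is a (role, text) pair; old turns fall off the left.
        self.turns = deque(maxlen=2 * max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def build_prompt(self, new_message: str) -> str:
        # Flatten remembered turns into a transcript the model can read.
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return f"{history}\nuser: {new_message}" if history else f"user: {new_message}"

memory = ConversationMemory(max_turns=2)
memory.add("user", "Hi")
memory.add("assistant", "Hello! How can I help?")
prompt = memory.build_prompt("What is Vertex AI?")
```

Bounding the window with `deque(maxlen=...)` keeps prompts within the model's context limit as conversations grow.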

Built With

Flask, Streamlit, Google Cloud Vertex AI (PaLM 2)
