๐ŸŒ ## Inspiration

In many regions, particularly underserved communities and areas with limited internet access, people struggle to find timely, relevant, and inclusive information about climate change and its health impacts. This is even more critical for individuals with disabilities, such as visual impairments, learning difficulties, or mobility challenges, who often encounter accessibility barriers on mainstream health or climate platforms.

EcoVerse AI was born from the need to bridge this gap. The idea was inspired by:

  • The urgency of climate-related health risks (e.g., asthma, heatwaves, air pollution).
  • Conversations with users who rely on screen readers or prefer voice-based tools.
  • A vision to make AI tools offline-capable, ensuring real-time support even without internet connectivity.
  • The desire to include voice input, read-aloud responses, and keyboard/screen reader accessibility, so no one is left behind.

We realized that relying on cloud-based APIs alone excluded many users. That's why we integrated Ollama to power local inference with Phi-3 Mini, a lightweight LLM that runs directly on the user's machine.
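For context, a streaming call to Ollama's /api/chat endpoint from the browser looks roughly like the sketch below. This is a minimal sketch, assuming Ollama's default port (11434) and the `phi3` model tag; the `askLocalModel` helper is an illustrative name, not our actual code.

```typescript
// Minimal sketch: stream a reply from a local Ollama server running Phi-3 Mini.
// Assumes Ollama's default endpoint (http://localhost:11434) and the "phi3" tag.
async function askLocalModel(
  prompt: string,
  onToken: (text: string) => void,
): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "phi3",
      messages: [{ role: "user", content: prompt }],
      stream: true, // Ollama then streams newline-delimited JSON chunks
    }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Ollama request failed: ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any incomplete trailing line
    for (const line of lines) {
      if (!line.trim()) continue;
      // Each complete line is one JSON object carrying a partial assistant message.
      const chunk = JSON.parse(line);
      if (chunk.message?.content) onToken(chunk.message.content);
    }
  }
}
```

Each token can then be appended to React state as it arrives, e.g. `askLocalModel(question, (t) => setAnswer((a) => a + t))`. Because the request comes from a browser origin, Ollama's CORS settings (the OLLAMA_ORIGINS environment variable) must allow the app's origin, which is behind the CORS issues described under Challenges below.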


## 💬 What it does

EcoVerse AI is a voice- and text-based AI assistant that:

  • Answers questions about climate health, pollution, accessibility, and adaptation.
  • Supports offline local AI responses via Phi-3 through Ollama.
  • Offers voice input (speech-to-text) and text-to-speech read-aloud for visually impaired users.
  • Provides an accessible interface with large font options, keyboard navigation, and screen reader toggles.
  • Allows users to report climate-related issues and view submitted reports.
  • Functions even in low-connectivity areas, making it a valuable tool for remote or underserved regions.

## 🛠️ How we built it

  • Frontend: Built using React, TailwindCSS, and Lucide React icons.
  • Voice Input & Output: Integrated the Web Speech API for speech recognition and synthesis (see the sketch after this list).
  • Local AI Chat: Connected to Ollama running the Phi-3 Mini model via the /api/chat endpoint with streaming (sketched above under Inspiration).
  • Accessibility: Added ARIA labels, focus states, readable contrast, keyboard nav, and a global settings switch.
  • Fallback Demo Mode: When Ollama is unavailable, the app shows example answers to demonstrate the use cases.
  • Pages: Includes climate issue reports, art gallery with read-alouds, blockchain fund concept, and more.
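As a rough illustration of the voice layer, the sketch below pairs the (still vendor-prefixed) SpeechRecognition interface with speechSynthesis. The helper names `listenOnce` and `readAloud` are illustrative, and the read-aloud path is gated on the global accessibility toggle:

```typescript
// SpeechRecognition is still vendor-prefixed in Chromium-based browsers.
const Recognition =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

// Capture one spoken phrase and hand the transcript to the caller.
function listenOnce(onResult: (transcript: string) => void): void {
  if (!Recognition) return; // browser has no speech recognition support
  const recognition = new Recognition();
  recognition.lang = "en-US";
  recognition.interimResults = false;
  recognition.onresult = (event: any) => {
    // First alternative of the first result is the recognized phrase.
    onResult(event.results[0][0].transcript);
  };
  recognition.start();
}

// Read-aloud: speaks only when the global accessibility toggle allows it.
function readAloud(text: string, enabled: boolean): void {
  if (!enabled || !("speechSynthesis" in window)) return;
  window.speechSynthesis.cancel(); // stop any earlier utterance first
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```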

## 🚧 Challenges we ran into

  • Streaming local AI output from Ollama and dynamically updating the UI while preserving performance.
  • Handling CORS issues and async fetch states when switching between online and offline modes.
  • Designing a global accessibility toggle system that synchronizes settings across all pages (see the sketch after this list).
  • Making voice input work seamlessly with form submissions without user confusion.
  • Ensuring read-aloud respects global "enabled/disabled" states across components.
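A minimal sketch of what such a global toggle system can look like, assuming a React context (the `AccessibilityProvider` and `useAccessibility` names are illustrative, not our exact code):

```tsx
import { createContext, useContext, useState, ReactNode } from "react";

interface A11ySettings {
  readAloud: boolean;
  largeFont: boolean;
  setReadAloud: (on: boolean) => void;
  setLargeFont: (on: boolean) => void;
}

const AccessibilityContext = createContext<A11ySettings | null>(null);

// Wraps the whole app so every page reads and writes the same switches.
export function AccessibilityProvider({ children }: { children: ReactNode }) {
  const [readAloud, setReadAloud] = useState(false);
  const [largeFont, setLargeFont] = useState(false);
  return (
    <AccessibilityContext.Provider
      value={{ readAloud, largeFont, setReadAloud, setLargeFont }}
    >
      {children}
    </AccessibilityContext.Provider>
  );
}

// Any component (chat, gallery, reports) can check the current settings.
export function useAccessibility(): A11ySettings {
  const ctx = useContext(AccessibilityContext);
  if (!ctx) throw new Error("useAccessibility must be used inside the provider");
  return ctx;
}
```

Gating every speech call on a shared flag like `useAccessibility().readAloud` is what keeps the enabled/disabled state consistent across components.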

๐Ÿ† ## Accomplishments that we're proud of

  • Successfully connected Phi-3 locally to the React app with real-time streaming output.
  • Designed an interface that works well for both general and assistive users (e.g., screen readers).
  • Built a fully functional AI assistant that doesn't depend on internet access.
  • Created a clean and professional frontend UX with accessibility best practices.
  • Developed real-world examples that can help vulnerable populations prepare for climate-related health risks.

## 📚 What we learned

  • How to work with local LLMs like Phi-3 and tools like Ollama.
  • Best practices in building accessible React apps (including keyboard nav, screen reader support, and font scaling).
  • Using the Web Speech API effectively in production environments.
  • How to structure global state for cross-page accessibility settings.
  • The importance of graceful fallbacks when the AI is offline or unavailable (see the sketch below).
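That fallback lesson boils down to a probe-then-degrade pattern. The sketch below is illustrative (the `demoAnswers` map and helper names are assumptions, and it reuses the `askLocalModel` helper sketched earlier):

```typescript
// Canned demo responses used when no local model is reachable (illustrative).
const demoAnswers: Record<string, string> = {
  default:
    "Demo mode: run Ollama with Phi-3 Mini locally to get live AI answers.",
};

// Ollama answers GET / with a plain "Ollama is running" response,
// which makes a cheap availability probe.
async function isOllamaUp(): Promise<boolean> {
  try {
    const res = await fetch("http://localhost:11434/");
    return res.ok;
  } catch {
    return false; // network error: server not running or blocked
  }
}

// Stream from the local model when possible; otherwise degrade gracefully.
async function answer(
  prompt: string,
  onToken: (t: string) => void,
): Promise<void> {
  if (await isOllamaUp()) {
    await askLocalModel(prompt, onToken); // streaming helper sketched earlier
  } else {
    onToken(demoAnswers[prompt] ?? demoAnswers.default);
  }
}
```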

## 🔮 What's next for EcoVerse AI: Climate, Health & Accessibility Assistant

  • 🧠 Model Upgrades: Add support for more capable local models (e.g., Mistral, or LLaVA for image input).
  • 📡 Offline Data Storage: Cache common climate-health questions so content stays browsable even when the AI is off.
  • 📱 PWA Version: Make the app installable as a mobile app for health workers in remote areas.
  • 🧩 Multilingual Support: Add translation options for Swahili, French, and other underserved languages.
  • 👥 Crowdsourced Climate Reports: Let users upload photos or videos of local climate issues to raise community awareness.
  • ♿ Deeper Accessibility Features: Add sign-language videos and alt-text guidance for future media content.

## 🧪 Demo vs Full AI Capability

โš ๏ธ The live demo version of EcoVerse AI provides **example responses to show functionality.
To experience the full offline AI assistant powered by Phi-3 Mini, download the project and run it locally with Ollama.
This enables full voice and text chat, powered completely offline, perfect for:

  • Users in low-internet environments
  • Data-sensitive users who prioritize privacy
  • Health workers and educators in the field

✅ With EcoVerse AI, no one is left behind: whether you're offline, visually impaired, or navigating a climate emergency.
