Homefull
The Inspiration
Our team knows many people who are homeless or were homeless in the past due to past mistakes, unfair accusations, or unfortunate circumstances. Being homeless is more than just a struggle for money: a lack of shelter can mean sitting in the cold with frostbite or even hypothermia, which leads to medical bills and more debt, on top of having barely any money, being unable to feed yourself, and struggling to secure clean water. Given how hard it is to be homeless, and how many homeless people are unaware of the resources available to them, we thought it would be a good idea to make an app that helps them find those resources.
Setting up
Once we settled on the idea, it was a bit of a challenge to figure out how to build it. A web app, our strong suit, wasn't really an option since most homeless people don't have a full desktop setup, so we started working on a mobile app instead. None of us had experience in mobile app development, but we split up the roles: two people on the frontend of the app and two on the backend with the AI and information scraping. It was a real struggle to even set up a map view, but thanks to the react-native-maps library we were able to render a map with just:
import React from 'react';
import { StyleSheet, View } from 'react-native';
import MapView from 'react-native-maps';

const styles = StyleSheet.create({ container: { flex: 1 }, map: { flex: 1 } });
export default () => (
  <View style={styles.container}>
    <MapView style={styles.map} />
  </View>
);
Surely we wouldn't run into any problems with this library, even though it predates much of modern React (foreshadowing).
Bumps and Bruises
Our main challenge was simply adapting: we had no experience with mobile development or agentic AI, not to mention a hard time debugging due to rate limiting and merge conflicts. However, as I not-so-subtly foreshadowed, we ran into a big problem: many features of the map library we were using, such as labels, had been deprecated, so we had to fall back on other methods like a search bar and a menu. We had planned to use software such as ElevenLabs to give an audio cue rather than nothing, but given the number of API calls we were already making, our team decided it would cost too much time and money for the gain. In the end we were able to finish all of the frontend and optimize it to require only one API call followed by a single O(n) pass to organize the results correctly.
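The "one API call, then one O(n) pass" idea can be sketched in Python; the field names like `category` here are illustrative assumptions, not our actual response shape:

```python
# Toy sketch: instead of querying per resource type, fetch everything
# once and bucket the results locally in a single linear pass.
from collections import defaultdict

def group_by_category(results):
    """One O(n) pass over the API response, grouping items by category."""
    buckets = defaultdict(list)
    for item in results:
        buckets[item["category"]].append(item)
    return buckets

response = [  # stand-in for the single API response
    {"name": "St. Vincent shelter", "category": "shelter"},
    {"name": "Community food bank", "category": "food"},
    {"name": "Warming center", "category": "shelter"},
]
grouped = group_by_category(response)
print(sorted(grouped))          # prints ['food', 'shelter']
print(len(grouped["shelter"]))  # prints 2
```

Grouping once on the client keeps the backend load at a single request per map refresh.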
The Backend: Cogs and Cogs and Cogs and Cogs...
The backend of our application was hosted on a DigitalOcean cloud compute virtual machine. We were lucky enough to have $200 of credits to work with and a powerful Nvidia data center card with 141 GB of VRAM.
Nemotron the Hedgehog
For our project we decided to leverage Nvidia's Nemotron 70B on Ollama to make good use of the computing power we had access to. Using Nemotron as our agentic AI allows our application to perform more in-depth and precise research tailored to the location and needs of each user. In the beginning we struggled to design our application around agentic AI because of the new design paradigms we had to adapt to. We ended up writing small Python functions that each perform a specific task, so that the AI can call them as needed. First we implemented functions that take URLs and search them for information relevant to our problem area, such as when and where churches are giving out free meals, or when and where harm-reduction outreach is happening. Then, to keep leveraging the power of agentic AI, we connected it to the DuckDuckGo search API so it can find sites on its own and continue gathering useful information.

We originally ran the AI in a Docker container for a consistent development environment and portability, but we ran into issues fully utilizing the GPU. CUDA passthrough was becoming too much of a hassle and a time sink, so we decided to run it on the machine itself so we could continue developing our application. Although our systems are working, one change we would make is to have the AI dump the information it finds into a place the frontend polls every so often for anything new, instead of the frontend requesting new information directly.
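The small-functions-as-tools pattern looks roughly like this. The function name, the toy matching heuristic, and the dispatch registry are illustrative sketches of the approach, not our exact code:

```python
# Sketch of exposing a small task-specific function as a tool the model
# can call. The agent names a tool and passes arguments; we dispatch.
import json

def find_free_meals(page_text: str) -> str:
    """Scan fetched page text for lines mentioning free meals (toy heuristic)."""
    hits = [line.strip() for line in page_text.splitlines()
            if "free meal" in line.lower()]
    return json.dumps(hits)

# The model only ever sees tool names; a registry maps them to functions.
TOOLS = {"find_free_meals": find_free_meals}

def dispatch(tool_call: dict) -> str:
    """Run the tool the model asked for and return its result as a string."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

sample = "St. Mary's Church\nFree meals every Tuesday at 6pm\nDonations welcome"
print(dispatch({"name": "find_free_meals",
                "arguments": {"page_text": sample}}))
# prints ["Free meals every Tuesday at 6pm"] as a JSON string
```

Keeping each tool tiny and single-purpose is what makes it easy to hand the model new capabilities later.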
API for You and I
Our backend API is built with FastAPI for speed, async I/O, and clear typing, and it is containerized with Docker for consistent environments and one-command deploys. It orchestrates calls to multiple external data sources, coordinates with the agent to generate responses to requests, and aggregates provider info such as food, restrooms, and shelters. The API exposes simple JSON endpoints that the frontend calls directly for search, details, and map data. It also orchestrates the agentic investigation and research flow: a request arrives from the application with the user's GPS coordinates, and the API guides the Nemotron model through generating Google searches, narrowing down the results, and deciding whether an event is worth showing to the user, all unassisted. This surfaces nearby food drives, clothing/supply handouts, and other upcoming events that no simple API would collect.
Epilogue
Although we were able to create our original vision, we were not able to implement every idea we had envisioned. Some features we were excited about were highlighting areas where it is legal to pitch a tent, warning about areas at risk of flooding, and using a queue that is constantly filled with URLs so our AI can continuously find new information and resources. The good thing about some of these ideas is that all they would take is fine-tuning the model and the system prompt, plus some more work on the frontend. This project showed us that agentic AI shines when we expose small utility functions as tools the model can call, which makes the system more general and easier to extend. We truly see a lot of potential for this application to help those in need.
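The URL-queue idea could be sketched like this; the queue contents, handler, and function names are hypothetical:

```python
# Hypothetical sketch of the epilogue's URL queue: searches keep feeding
# URLs in, and a worker drains them for the agent to investigate.
from collections import deque

url_queue = deque()
url_queue.append("https://example.org/food-drives")    # seeded by a search
url_queue.append("https://example.org/warming-center")

def drain(queue, investigate):
    """Pop every queued URL and run `investigate` on it, in FIFO order."""
    findings = []
    while queue:
        findings.append(investigate(queue.popleft()))
    return findings

results = drain(url_queue, lambda url: f"checked {url}")
print(results[0])  # prints 'checked https://example.org/food-drives'
```

Because producers (searches) and the consumer (the agent) only share the queue, the agent could keep investigating in the background without the frontend ever asking.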
Built With
- agentic-ai
- ai
- expo.io
- fastapi
- javascript
- maps
- nemotron
- nominatim
- overpass-api
- overpass-openstreetmap
- python
- react-native
