Background

Economic and industrial changes over the last year have led to increased food insecurity and homelessness. In America today, 553,742 people are homeless and 38 million people depend on federal food assistance programs. At the same time, the nation has over 15 large food insecurity programs and 12,127 homeless shelters. We realized that there isn't a lack of programs, and we also believe that there isn't a lack of generous people (we're optimistic about humanity!). Yet so many people go malnourished and unsheltered because the vast majority of programs have limited funding and receive little public awareness.

We wanted to encourage people to be a little more charitable by improving and optimizing donations. At the same time, we wanted a way for non-profit organizations to communicate their most critical needs. So we built givvy - a platform that uses data and artificial intelligence to rapidly process donations and recommend the non-profits that need them most.

What is givvy?

givvy is a social good platform that promotes and improves donations using the power of technology.

Here's how it works.

  1. You scan items that you wish to donate to quickly produce a list. Items not approved or recommended for donation are rejected! (Object Recognition)

  2. We use requests submitted by local non-profits (kind of like Patreon!) to track what each needs most. Our algorithm combines the donation item (explicitly requested) and the donation item's category (implicitly requested) to determine where your donation would best be sent. (Donation Suggestion)

  3. You can pledge to donate to a non-profit, which updates the organization's need tracker. Other users will then know to donate to organizations in critical need! (Donation Pledge)
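The matching idea in step 2 can be sketched in JavaScript. This is only an illustration of the explicit/implicit weighting idea - the function name, data shape, and weights here are hypothetical, not our exact production logic:

```javascript
// Sketch of the donation-suggestion idea: score each shelter by how
// strongly it requested the donated item, either explicitly (the exact
// item) or implicitly (the item's category). Weights are illustrative.
function suggestShelter(item, category, shelters) {
  const EXPLICIT_WEIGHT = 2; // the exact item was requested
  const IMPLICIT_WEIGHT = 1; // only the item's category was requested

  let best = null;
  for (const shelter of shelters) {
    const explicit = shelter.requestedItems.includes(item) ? EXPLICIT_WEIGHT : 0;
    const implicit = shelter.requestedCategories.includes(category) ? IMPLICIT_WEIGHT : 0;
    const score = explicit + implicit;
    if (best === null || score > best.score) best = { shelter, score };
  }
  return best.shelter;
}

// Example: canned soup is explicitly requested by the second shelter.
const shelters = [
  { name: 'Northside', requestedItems: [], requestedCategories: ['food'] },
  { name: 'Downtown', requestedItems: ['canned soup'], requestedCategories: ['food'] },
];
console.log(suggestShelter('canned soup', 'food', shelters).name); // "Downtown"
```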

How givvy was built

givvy was built by a team made up of both beginner and advanced hackers - a mix of designers and developers.

givvy's design was created through a typical, albeit slightly rushed, design process. We conducted need-finding, created user personas, mapped the user flow, and designed low-fidelity and high-fidelity prototypes.

givvy's tech was designed up front through discussion and written design documentation. The frontend was created using React, Redux, SASS, WebGL, Three.js, Ant Design, and various other technologies. The Google Cloud and Mapbox APIs were called directly from the client side. The backend was created using Node.js and Express, coupled with Firebase to store and process shelter and item data.

For more detail, take a look at the sub-sections below.

UI / UX

User Personas

We designed a few user personas to help direct and guide our design process.

User Flow

A clear and concise user journey was important for helping us decide which screens needed to be designed.

Lo-fi Prototypes

Wireframes were created in Figma. We wanted to focus initially on just the functionality of our application.

Hi-fi Prototypes

We selected a pink, yellow, and blue color palette, chose a font, and developed the art style from there.

Engineering

Design Documentation: https://docs.google.com/document/d/1l_gfXnynuhIzrG3j-8jcYDmDmX67ItPHh5yi99TdCJk

Our tech stack.

Frontend and Challenges

The frontend was built from a wide variety of web technologies. To start, we used React, Redux, SASS, JavaScript, HTML, and CSS to form the core of our frontend stack. Ant Design was used as a component library to help us quickly create status bars, modals, and icons.

The map component was created with Mapbox GL. Multiple Three.js layers were overlaid on top to create three-dimensional buildings using WebGL.

The most challenging component, by far, was the image recognition modal. It uses setTimeout to throttle calls to the Google Cloud Vision API, and it is built around a webcam component from React Webcam. The webcam doesn't have a screenshot function built in, so we created a makeshift one by intermittently positioning an image directly on top of the webcam component and feeding it the webcam's base64 image data. The most frustrating part was implementing the bounding boxes for the image recognition. It was an unnecessary feature, but we felt that it really tied the UI together! We had to convert Google Cloud's normalized bounding-box coordinates to HTML canvas dimensions and coordinates (which have their y-axis inverted) - an annoying exercise.
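The coordinate conversion can be sketched like this. It's a simplified illustration rather than our exact code: Vision-style normalized vertices are values in [0, 1] relative to the image, and the `invertY` flag stands in for the y-axis flip we ran into (whether you need it depends on how your coordinates are produced):

```javascript
// Sketch: map normalized bounding-box vertices (values in [0, 1],
// relative to the image) to pixel coordinates on an HTML canvas.
// The invertY flag optionally flips the y-axis.
function toCanvasBox(normalizedVertices, canvasWidth, canvasHeight, invertY = false) {
  return normalizedVertices.map(({ x = 0, y = 0 }) => ({
    x: x * canvasWidth,
    y: invertY ? (1 - y) * canvasHeight : y * canvasHeight,
  }));
}

// Example: a box covering the top-left quarter of a 640x480 canvas.
const box = toCanvasBox(
  [{ x: 0, y: 0 }, { x: 0.5, y: 0 }, { x: 0.5, y: 0.5 }, { x: 0, y: 0.5 }],
  640,
  480
);
// box[2] is { x: 320, y: 240 }
```

The resulting pixel vertices can then be drawn onto a canvas overlay with the usual 2D context calls (`moveTo`/`lineTo`/`stroke`).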

3D models were generated with Three.js.

Backend and Challenges

Our backend was developed by a team member who had never touched Firebase and had only briefly used Node.js before. Eight routes were initially designed, but they were ultimately cut down to four more complex routes to reduce the number of frontend requests.

The four routes were:

  • /getShelters - Returns information about all shelters
  • /rankShelters - Scores the shelters in order of need
  • /donateItems - Submits donations and updates shelters accordingly
  • /getItemCategory - Returns an item's category
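As a rough sketch of what a route like /rankShelters could compute, the core might look like the following. The field names and the scoring formula are illustrative guesses, not the actual backend:

```javascript
// Sketch of a need score for /rankShelters: shelters that have received
// the smallest fraction of what they requested rank highest. The data
// shape and formula are illustrative, not the production logic.
function rankShelters(shelters) {
  return shelters
    .map((shelter) => {
      const requested = shelter.requested || 0;
      const received = shelter.received || 0;
      // Unmet need as a fraction of what was requested (0 = fully met).
      const need = requested > 0 ? (requested - received) / requested : 0;
      return { ...shelter, need };
    })
    .sort((a, b) => b.need - a.need);
}

const ranked = rankShelters([
  { name: 'Northside', requested: 100, received: 90 },
  { name: 'Downtown', requested: 50, received: 10 },
]);
console.log(ranked[0].name); // "Downtown" - 80% of its request is unmet
```

An Express handler would then just read the shelter documents from Firebase, run a function like this, and return the sorted list as JSON.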

The largest challenge, by far, in developing the backend was navigating the Firebase documentation. As a first-time user, our backend developer found the set-up process especially difficult.

How parts of our system interacted with one another.

Takeaways

Accomplishments that we're proud of

We're proud of how complete and polished our app is! We managed to finish our MVP early, which is rare for our hackathon projects. Unlike our past projects, the extra time allowed us to add little details (loading spinners, delete buttons, filtering systems, bounding boxes) that made our user interface feel much more put together.

What we learned

We used a tremendous amount of technology to create givvy, and we all learned so much while working with it.

Our beginner hackers developed a great portfolio piece. Our designer learned how to compress the design process into a short timeframe, which will surely help her in future hackathons. Our beginner developer built a backend for other people for the first time and became well versed in Firebase in the process.

Our more experienced hackers learned how to work remotely from one another. This was their first hackathon physically apart, and it changed how we communicated. We became more reliant on shared artifacts like our Figma files and our engineering design doc - which is ultimately why so much documentation came out of this project. The components and styling were a good, fun challenge as well!

What's next!

There are so many additional features that we would have loved to add to this app if we had the time! Namely, we would have loved to include additional information about each non-profit location (summaries, news, comments) - we initially designed the app to contain that, but it was ultimately cut due to time. We would also love to add more APIs to improve our image analysis; additional time could have allowed us to add expiration-date tracking, quality tracking, and profanity detection to our image recognition software. In any case, we really do hope that we can see this app launched and in use some day - we want to encourage as much charitable giving as possible!

Note if you're trying it: we shut down the Mapbox and Google APIs. We burnt through so many requests this weekend that Google started charging us money. We never received the credits we filled out the form for :(
