Inspiration

These are each of our individual reasons for choosing this project.

Himanshu Janmeda: When I was a little boy, birds were all around me. They would come sit on my arm, and their sweet chirping pleased me on my sad days. But now when I look around I don't see birds, because they are dying. We figured we could come up with an idea to save them, but before saving them we needed to know where they are. So we came up with this app, which will help us track endangered birds and save them.

Harsh Srivastava: 2020 feels like the worst year we have ever witnessed, but the reason it has become so bad is obviously us, humans. We are exploiting natural resources to their limit for our needs, and nature and its speechless creatures are depleting at a great rate, so I thought of taking a small step to save them, make people aware, and help them come out of their hazardous slumber.

Smail Barkouch: The environment is such a large topic in 2020, especially because of the California wildfires and global warming, so I felt it was important to highlight it in some way. I chose to give scientists and enthusiasts, specifically people who are studying and monitoring endangered animals, a resource to crowdsource their data. Crowdsourcing is a very powerful tool, and in a way it brings people from all around the world together for one common good. (This is also Smail Barkouch's first hackathon.)

What it does

Environment Watch provides a platform for scientists and enthusiasts to monitor and study endangered animal populations using crowdsourcing. Users locally, in a specific region, and even all around the world can download the app to contribute to this effort. To contribute, users simply take a clear picture, through the app, of one of the five endangered animals currently on the home screen. From there we confirm it really is the animal it is supposed to be, take the user's location and the image they took, and upload them to a database. As soon as it is uploaded (and it uploads very quickly), other users can view the information. Each animal has its own map with plotted points where it has been spotted, and a gallery of those photos. Scientists can look through these images and coordinates on the map to monitor where these specific animals are, and can sort between the last 24 hours of images and coordinates or see every spotting that has ever happened. If we somehow identified an animal incorrectly, users can downvote the image in that animal's gallery, and with two downvotes the sighting is erased from the map (this can be undone with upvotes).

If you want to add another animal, you can submit an animal request form on the main screen. We receive it on our server and will add the animal if it is deemed appropriate. If a user wants to view every animal that has ever been spotted, there is a large map on the main screen.

How I built it

The application was built on the Android platform using Kotlin. On Android, an activity is (basically) a predesigned screen, and whichever activity is active is what the user currently sees. The app always opens in a splash screen activity, where things are set up before moving on to the next screen. The next screen is either a login and sign-up screen backed by Firebase or a set of welcome screens. The user can log in and sign up using a personal email address and password. If it is the user's first time using the app, the next screens are the three welcome screens: activities with well-designed backgrounds made by one of our members, Harsh Srivastava, that explain how to use the app. After going through all of them, the user lands on the home screen; users who already have an account go straight there instead.

The home screen is an activity consisting of two fragments, with one showing at a time. The first fragment holds a ListView built from a common layout defined in our project and populated when the main screen opens. The second is a fragment from the google-maps-api that shows a map of Earth. To get this working I had to register for the API and put my generated API key in the AndroidManifest. In this fragment I display every animal that has ever been captured with this app, and this data is available to all users. I gather the coordinates to display by using Firebase online storage and looping through every image in every animal category. That may sound simple, but the creators of Firebase did not make an intuitive way of actually doing it: I had to read the online files as raw bytes, convert them into a string, and split that string into its component parts: latitude, longitude, and time (the time sounds odd here, but I use it elsewhere). I then use these coordinates with the google-maps-api to plot a marker at that latitude and longitude.
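Roughly, that coordinate-plotting path looks like the sketch below. The storage layout (a `metadata/<animal>` folder per animal) and the `latitude,longitude,time` comma format are illustrative assumptions rather than our exact paths:

```kotlin
import com.google.android.gms.maps.GoogleMap
import com.google.android.gms.maps.model.LatLng
import com.google.android.gms.maps.model.MarkerOptions
import com.google.firebase.ktx.Firebase
import com.google.firebase.storage.ktx.storage

private const val MAX_METADATA_BYTES = 1024L * 1024L // 1 MB cap per metadata file

// Loop through every animal folder, download each metadata file as raw bytes,
// split the string into latitude, longitude and time, and drop a marker.
fun plotAllSpottings(map: GoogleMap, animals: List<String>) {
    val storage = Firebase.storage.reference
    for (animal in animals) {
        storage.child("metadata/$animal").listAll().addOnSuccessListener { result ->
            for (item in result.items) {
                item.getBytes(MAX_METADATA_BYTES).addOnSuccessListener { bytes ->
                    val (lat, lng, _) = String(bytes).split(",") // time is used elsewhere
                    map.addMarker(
                        MarkerOptions()
                            .position(LatLng(lat.toDouble(), lng.toDouble()))
                            .title(animal)
                    )
                }
            }
        }
    }
}
```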

If you click on one of the animals in the first fragment, it brings you to a new activity that gives more information about that specific animal. The layout consists of TextViews with details about the animal, a small map, a gallery button, a camera button, and a day-and-back map-sorting button.

When the camera button is clicked, the app requests the camera and location permissions, and once they are granted it launches the camera and lets the user take the photo. Once the photo is taken I am given a bitmap, and this bitmap is converted into a URI. The URI is needed to send a file into Firebase Storage, but before the image can go to storage it has to be verified. It is verified using a neural network (classifier) that was pre-trained by Himanshu Janmeda.
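A minimal sketch of that camera flow, assuming the usual check-permissions-then-fire-the-capture-intent pattern; the request codes and the `verifyAndUpload` helper are made-up names for illustration:

```kotlin
import android.Manifest
import android.app.Activity
import android.content.Intent
import android.content.pm.PackageManager
import android.provider.MediaStore
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val PERMISSION_REQUEST = 100 // arbitrary request codes
private const val CAMERA_REQUEST = 101

fun onCameraButtonClicked(activity: Activity) {
    val needed = arrayOf(Manifest.permission.CAMERA, Manifest.permission.ACCESS_FINE_LOCATION)
    val missing = needed.filter {
        ContextCompat.checkSelfPermission(activity, it) != PackageManager.PERMISSION_GRANTED
    }
    if (missing.isNotEmpty()) {
        // Ask first; once granted, the button can simply be tapped again.
        ActivityCompat.requestPermissions(activity, missing.toTypedArray(), PERMISSION_REQUEST)
        return
    }
    activity.startActivityForResult(Intent(MediaStore.ACTION_IMAGE_CAPTURE), CAMERA_REQUEST)
}

// In the activity, the capture intent hands back a small bitmap:
// override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
//     super.onActivityResult(requestCode, resultCode, data)
//     if (requestCode == CAMERA_REQUEST && resultCode == Activity.RESULT_OK) {
//         val photo = data?.extras?.get("data") as Bitmap
//         verifyAndUpload(photo) // hypothetical: classifier + Firebase steps below
//     }
// }
```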

The classifier was created using TensorFlow and NumPy (among a number of other imported libraries). The data passed into the classifier is normalized down to a smaller size, its RGB values are extracted, and the result is fed into a convolutional neural network. After two convolutional layers the network flattens the data into a one-dimensional array, which is then reduced to five output nodes. This output is what is passed back to Android. We are able to run this model on-device thanks to the tflite framework, which lets these neural networks run on phones through a convenient API.
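On the Android side, running the exported model boils down to something like the sketch below. The input size (224x224), the 0..1 normalization, and the asset name `classifier.tflite` are assumptions; the real model's preprocessing may differ:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil
import java.nio.ByteBuffer
import java.nio.ByteOrder

private const val INPUT_SIZE = 224   // assumed input resolution
private const val NUM_CLASSES = 5    // one score per endangered animal

fun classify(context: Context, photo: Bitmap): FloatArray {
    val interpreter = Interpreter(FileUtil.loadMappedFile(context, "classifier.tflite"))

    // Shrink the photo and strip its RGB channels into a normalized float buffer.
    val scaled = Bitmap.createScaledBitmap(photo, INPUT_SIZE, INPUT_SIZE, true)
    val input = ByteBuffer.allocateDirect(4 * INPUT_SIZE * INPUT_SIZE * 3)
        .order(ByteOrder.nativeOrder())
    for (y in 0 until INPUT_SIZE) {
        for (x in 0 until INPUT_SIZE) {
            val pixel = scaled.getPixel(x, y)
            input.putFloat(((pixel shr 16) and 0xFF) / 255f) // R
            input.putFloat(((pixel shr 8) and 0xFF) / 255f)  // G
            input.putFloat((pixel and 0xFF) / 255f)          // B
        }
    }

    // The flattened-and-reduced output: five confidence scores.
    val output = Array(1) { FloatArray(NUM_CLASSES) }
    interpreter.run(input, output)
    interpreter.close()
    return output[0]
}
```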

If the network is not at least 60% sure the image shows the specific animal, the image is not accepted and the user receives a Toast message about it. If the network is 60% sure or more, the image is accepted; if more than one class somehow clears that bar, we take the highest one, provided it is the animal the user took the picture for. Once the picture is accepted, the photo is uploaded to Firebase under a random name generated from a UUID, and the coordinates and time are uploaded to Firebase as well. The user then sees a screen telling them about their success and is given the choice to quit the app or go back to the main screen.
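Put together, the acceptance rule and the upload look roughly like this sketch; the storage paths (an `images/<animal>/<uuid>.jpg` file plus a sibling metadata file) are illustrative, not our exact layout:

```kotlin
import android.content.Context
import android.net.Uri
import android.widget.Toast
import com.google.firebase.ktx.Firebase
import com.google.firebase.storage.ktx.storage
import java.util.UUID

fun uploadIfConfident(
    context: Context,
    animal: String,        // the animal the user tapped on
    animalIndex: Int,      // its index in the classifier's output
    scores: FloatArray,    // output of classify() above
    photoUri: Uri,
    latitude: Double,
    longitude: Double
) {
    val best = scores.indices.maxByOrNull { scores[it] } ?: return
    if (best != animalIndex || scores[best] < 0.6f) {
        Toast.makeText(context, "That doesn't look like a $animal.", Toast.LENGTH_LONG).show()
        return
    }

    val id = UUID.randomUUID().toString() // random file name
    val storage = Firebase.storage.reference
    storage.child("images/$animal/$id.jpg").putFile(photoUri)
    // Coordinates and the capture time go up alongside the image.
    val metadata = "$latitude,$longitude,${System.currentTimeMillis()}"
    storage.child("metadata/$animal/$id.txt").putBytes(metadata.toByteArray())
}
```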

On the specific animal screen there is also a gallery button. Clicking it opens our own custom gallery of every photo that has ever been taken of this animal. The user can easily scroll between photos and like or dislike them. If a photo has enough dislikes, it disappears from the map. This like and dislike data is stored in the Firebase database and is read again when the small map is drawn: if a coordinate has too many dislikes, it is not plotted. Right under the title is a little calendar icon; clicking it makes the app pull all the images from Firebase again, this time excluding the ones that are more than a day old.
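One way to apply that dislike rule before plotting, assuming the votes live in the Realtime Database at `votes/<animal>/<photoId>` as `up`/`down` counters (the exact node layout is an assumption):

```kotlin
import com.google.firebase.database.DataSnapshot
import com.google.firebase.database.DatabaseError
import com.google.firebase.database.ValueEventListener
import com.google.firebase.database.ktx.database
import com.google.firebase.ktx.Firebase

// Hide a sighting once it is two or more votes in the negative;
// upvotes pull it back onto the map.
fun shouldPlot(animal: String, photoId: String, onResult: (Boolean) -> Unit) {
    Firebase.database.reference.child("votes/$animal/$photoId")
        .addListenerForSingleValueEvent(object : ValueEventListener {
            override fun onDataChange(snapshot: DataSnapshot) {
                // Realtime Database hands numbers back as Long.
                val up = (snapshot.child("up").value as? Long)?.toInt() ?: 0
                val down = (snapshot.child("down").value as? Long)?.toInt() ?: 0
                onResult(down - up < 2)
            }

            override fun onCancelled(error: DatabaseError) = onResult(true)
        })
}
```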

On the specific animal screen there is also a small map and a day-and-back button. The small map uses the google-maps-api and is plotted from the data in Firebase; it contains only that specific animal's coordinates because it searches only that animal's Firebase folder. The day-and-back button reloads the coordinates from Firebase, but this time includes only the ones from the last 24 hours.
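Since the capture time is stored with each coordinate, that 24-hour filter is just a timestamp cutoff, sketched here with an assumed `Spotting` shape:

```kotlin
// Assumed shape of one parsed metadata entry.
data class Spotting(val latitude: Double, val longitude: Double, val timeMillis: Long)

// Keep only sightings from the last 24 hours before plotting them.
fun lastDayOnly(spottings: List<Spotting>): List<Spotting> {
    val cutoff = System.currentTimeMillis() - 24 * 60 * 60 * 1000L
    return spottings.filter { it.timeMillis >= cutoff }
}
```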

Another feature is alerts, located on the main screen. An ongoing background process checks every two minutes whether five or more coordinates within one degree of latitude and longitude of your location have been added in the last two minutes. If they have, a notification tells the user that there are animals near them that they can possibly go spot.
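A hedged sketch of that alert loop. The notification channel id "alerts", the `fetchRecentSpottings` callback, and the `Spotting` shape from the previous sketch are assumptions, and channel creation plus the newer notification permission are assumed to be handled elsewhere:

```kotlin
import android.content.Context
import android.os.Handler
import android.os.Looper
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import kotlin.math.abs

private const val CHECK_INTERVAL_MS = 2 * 60 * 1000L // every two minutes

fun startAlertLoop(
    context: Context,
    fetchRecentSpottings: () -> List<Spotting>, // hypothetical: pulls coordinates from Firebase
    userLat: Double,
    userLng: Double
) {
    val handler = Handler(Looper.getMainLooper())
    val check = object : Runnable {
        override fun run() {
            val cutoff = System.currentTimeMillis() - CHECK_INTERVAL_MS
            // Count sightings from the last two minutes within one degree of the user.
            val nearby = fetchRecentSpottings().count {
                it.timeMillis >= cutoff &&
                    abs(it.latitude - userLat) <= 1.0 &&
                    abs(it.longitude - userLng) <= 1.0
            }
            if (nearby >= 5) {
                val notification = NotificationCompat.Builder(context, "alerts")
                    .setSmallIcon(android.R.drawable.ic_dialog_info)
                    .setContentTitle("Animals spotted near you")
                    .setContentText("$nearby sightings in the last two minutes.")
                    .build()
                NotificationManagerCompat.from(context).notify(1, notification)
            }
            handler.postDelayed(this, CHECK_INTERVAL_MS)
        }
    }
    handler.post(check)
}
```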

The last feature is the submission button. It brings up a custom AlertDialog that asks for an animal name and a photo from the gallery. This is used to request a new animal, and the request is received in the database. To launch the gallery I start that activity, and whatever the user picks comes back as a bitmap, which I then use just as I would a freshly taken photo. Once the user submits, their photo is uploaded into a folder named after their animal, and I can then see the photo in Firebase.
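Sketched below with a standard gallery pick intent; the request code and the `requests/<animal name>` storage path are illustrative assumptions:

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import com.google.firebase.ktx.Firebase
import com.google.firebase.storage.ktx.storage

private const val PICK_IMAGE_REQUEST = 102 // arbitrary request code

// Launch the system gallery; the chosen image comes back in onActivityResult.
fun pickImageForRequest(activity: Activity) {
    val intent = Intent(Intent.ACTION_PICK).apply { type = "image/*" }
    activity.startActivityForResult(intent, PICK_IMAGE_REQUEST)
}

// Upload the picked image into a folder named after the requested animal,
// so we can review the submission on our end.
fun submitAnimalRequest(animalName: String, pickedImage: Uri) {
    Firebase.storage.reference
        .child("requests/$animalName/${System.currentTimeMillis()}.jpg")
        .putFile(pickedImage)
}
```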

Challenges I ran into

Training the model was challenging. These animals are endangered, and it goes without saying that there isn't really a dataset that contains many images of them. The accuracy started out around 10% (which I don't even understand how is possible). That was also partly due to how the model was structured.

Another challenge was deciding how we were going to store this information online. We originally settled on making our own Python database, but then we found Firebase and thought we should use it. For some odd reason it kept throwing API key errors, saying the keys were invalid. We went back to the Python server idea and built a good bit of it before returning, because we realized there was a lot more work left on the Python server than if we just tried our luck with Firebase again. We ended up having to clear out some Android Studio stuff, and then it worked.

Firebase itself was a challenge because the method of uploading bitmaps is problematic. To upload anything to Firebase we need the physical location where the file is stored, but since the bitmap was only in memory, how could we provide the Uri that Firebase wanted? We ended up converting the image into JPEG format, storing it somewhere in the filesystem, and then getting its Uri, which we used to upload. We also had trouble downloading the file after uploading, but that was solved much more quickly.
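The workaround boils down to a few lines. This sketch writes the JPEG into the app's cache directory, though any writable location would do:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.net.Uri
import java.io.File

// Compress the in-memory bitmap to a JPEG on disk and return a Uri Firebase accepts.
fun bitmapToUploadableUri(context: Context, bitmap: Bitmap): Uri {
    val file = File(context.cacheDir, "capture_${System.currentTimeMillis()}.jpg")
    file.outputStream().use { out ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out)
    }
    return Uri.fromFile(file)
}
```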

Connecting the tflite file to the project was really hard. I tried to do it by myself and it took a very long time; a lot of it was complicated and just boilerplate code. Eventually we did find a solution, but not before searching through a lot of YouTube tutorials and a lot of the TensorFlow documentation.

The next problem was how to do notifications with a backend like Firebase. Firebase was more for storing things and wasn't going to push alerts directly to a device for us, so I had to come up with a way to generate alerts on the user's side. I pull all coordinates from the server and count how many fall within a range of the user in the last two minutes; if enough do, I send out an Android notification.

Firebase downloading and uploading was really hard. The code is pretty complicated for something that professes ease of use. I had to turn the downloaded items into byte arrays and then convert them to a bitmap, but only inside a listener, because doing it too early could crash the app, and I also have to check for nulls or it will crash too. Not to mention that everything is callbacks, so I can't easily tell when it is done.
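For reference, the download side ends up shaped like this sketch: the bytes only exist inside the success listener, and the decoded bitmap still has to be null-checked (the size cap is an assumed limit):

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import com.google.firebase.ktx.Firebase
import com.google.firebase.storage.ktx.storage

private const val MAX_IMAGE_BYTES = 5L * 1024L * 1024L // 5 MB cap, an assumed limit

fun loadImage(path: String, onLoaded: (Bitmap) -> Unit, onError: (Exception) -> Unit) {
    Firebase.storage.reference.child(path)
        .getBytes(MAX_IMAGE_BYTES)
        .addOnSuccessListener { bytes ->
            // Decoding may fail on corrupt data, so guard against a null bitmap.
            val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
            if (bitmap != null) onLoaded(bitmap)
            else onError(IllegalStateException("Could not decode image"))
        }
        .addOnFailureListener(onError)
}
```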

Accomplishments that I'm proud of

I am proud that we fully connected Firebase to the application and used it in such a meaningful manner. We used it to make sure our users are logged in, and to store images and coordinates and seamlessly pass them all the way across the world: my two team members instantly saw the pictures I took. I am proud that the neural network is quite accurate and can stop people from easily sending fake images. I am proud that there is beautifully designed original graphic art for the results and welcome screens. I am proud that the application can provide meaningful alerts to its users. I am proud that the application does everything I originally doubted would be possible, and then surpassed that. It taught me that I am more capable than I give myself credit for and that I should be proud, and the same goes for my two teammates, Himanshu and Harsh.

What I learned

I learned how to use Firebase for a wide variety of things, whether for storage, a database, or even logging in. It provides a nice way to do it all in a simpler form, even if it isn't all that simple. I learned how to connect a tflite file to an Android Studio project and successfully run input through it. When it first worked it amazed me and got me thinking about the countless ways tflite could be used; it is really fast and lightweight. We also learned how to optimize a neural network for much better results: we used data augmentation to produce more training data, and we scraped the images off the internet.

What's next for Environment Watch

Environment Watch is a really powerful and fast platform, and honestly it could take many different directions. The idea that someone can take a picture and have it, along with its location, instantly available to everyone is awesome. Maybe what's next is a more interactive way to look through photos and coordinates. Maybe we could clean up the design a bit more and incorporate endangered plants. What will definitely be added is more animals.

Note: the video where we presented our project was really glitchy when recording; it froze the screen at multiple parts and cut off. The first freeze is when we were explaining a demo recording of the application, and the second cut-off is when we were showing the artwork we created. Both of the things that are frozen or cut off in the presentation video are included in this Devpost (i.e. the artwork and the demo video). Here are the slides: https://docs.google.com/presentation/d/1orQTW1bm6qWz6ufSNCEmFH1alW1SkeamhKeJHdII52I/edit. We are really sorry; we recorded this multiple times but it kept on freezing. Here is our previous recording, which isn't as well presented as the one we submitted but freezes less: https://drive.google.com/file/d/1RHmbJ6RhfP9icohvZKtEiMwhdK8I32bo/view.
