Inspiration: Coral reefs are one of the most important ecosystems on the planet, but most people have no way of knowing how much damage is actually happening beneath the surface. Disease outbreaks can wipe out entire sections of reef in weeks, and by the time researchers get there to document it, the window for intervention has often already passed. Divers, snorkelers, and underwater photographers are already out there taking footage every day. We wanted to build something that could turn any of those people into a data point for conservation by making it as simple as uploading a photo to find out if the coral they are looking at is in trouble.
What it does: The Coral Health Detector website gives users a simple interface to upload underwater photos and get an instant AI classification of whether the coral is healthy or unhealthy. The result includes a confidence score and a short explanation of what the classification means. The site also includes a built-in training tool at the bottom of the page: users upload anywhere from 2 to 20 labeled images, train the model directly in the browser, and save it. They can then return to the main detector, upload a new coral photo, and the model applies what it learned from the training images to classify it as healthy or unhealthy.
How we built it: We built the frontend using HTML, CSS, and JavaScript. The AI runs entirely in the browser using TensorFlow.js and MobileNet, which means there is no backend needed for detection. The model is trained using transfer learning on top of MobileNet embeddings and saved to the browser's IndexedDB so it persists between sessions. We also built a Flask backend with a REST API for a more production-ready version that serves predictions from a server-side model. The site was deployed on Render.
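The transfer-learning step described above can be sketched in plain JavaScript. This is a hypothetical, simplified illustration rather than the project's actual code: it assumes MobileNet embeddings have already been extracted as numeric arrays, and trains a tiny logistic-regression head on top of them (in the real app this would happen through TensorFlow.js layers).

```javascript
// Minimal sketch: train a small classifier "head" on frozen embeddings.
// In the real app the embeddings come from MobileNet via TensorFlow.js;
// here they are hypothetical 4-dimensional vectors for illustration.

function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

// Train weights with plain stochastic gradient descent on binary cross-entropy.
function trainHead(embeddings, labels, { lr = 0.5, epochs = 200 } = {}) {
  const dim = embeddings[0].length;
  let w = new Array(dim).fill(0);
  let b = 0;
  for (let e = 0; e < epochs; e++) {
    for (let i = 0; i < embeddings.length; i++) {
      const x = embeddings[i];
      const p = sigmoid(x.reduce((s, xi, j) => s + xi * w[j], b));
      const err = p - labels[i]; // gradient of cross-entropy w.r.t. the logit
      w = w.map((wj, j) => wj - lr * err * x[j]);
      b -= lr * err;
    }
  }
  return { w, b };
}

// Classify a new embedding; label 1 stands in for "healthy", 0 for "unhealthy".
function predict({ w, b }, x) {
  const p = sigmoid(x.reduce((s, xi, j) => s + xi * w[j], b));
  return {
    label: p >= 0.5 ? "healthy" : "unhealthy",
    confidence: p >= 0.5 ? p : 1 - p,
  };
}

// Toy data: two clusters standing in for healthy vs. unhealthy embeddings.
const trainSet = [[1, 1, 0, 0], [0.9, 1.1, 0, 0], [0, 0, 1, 1], [0.1, 0, 1, 0.9]];
const labels = [1, 1, 0, 0];
const head = trainHead(trainSet, labels);
console.log(predict(head, [1, 0.9, 0.1, 0]).label); // → healthy
```

The key property of transfer learning shows up even in this toy version: the expensive feature extractor is frozen, so only a handful of head parameters need training, which is why 2 to 20 images can be enough.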
Challenges we ran into:
- Getting TensorFlow.js to work consistently across browsers took a lot of debugging, especially around how IndexedDB handles model storage on different origins
- BatchNormalization caused NaN loss values with small datasets, which took a while to track down and fix
- Building the drag and drop image upload so it worked reliably for both the detector and the training tool required more work than expected
- Connecting the Flask backend to the frontend and handling the prediction response format correctly took several iterations
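The response-format issue in the last point is a common one: the UI breaks silently if the backend's JSON shape drifts. A hypothetical frontend helper that validates the prediction payload before rendering it could look like this; the `/predict` endpoint shape and the `label`/`confidence` field names are assumptions for illustration, not the project's actual schema.

```javascript
// Hypothetical parser for a prediction response such as:
//   { "label": "healthy", "confidence": 0.93 }
// Field names are illustrative. Validating before rendering avoids
// showing "undefined" in the UI when the backend shape changes.
function parsePrediction(json) {
  if (typeof json !== "object" || json === null) {
    throw new Error("prediction response is not an object");
  }
  const { label, confidence } = json;
  if (label !== "healthy" && label !== "unhealthy") {
    throw new Error("unexpected label: " + label);
  }
  if (typeof confidence !== "number" || confidence < 0 || confidence > 1) {
    throw new Error("confidence must be a number in [0, 1]");
  }
  // Normalize to a display-ready form for the results panel.
  return { label, confidencePercent: Math.round(confidence * 100) };
}

console.log(parsePrediction({ label: "healthy", confidence: 0.93 }));
```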
Accomplishments that we're proud of:
- The model trains and runs entirely in the browser with no backend, which means anyone can use it and even train their own version without any technical setup
- The site works end-to-end: upload an image, get a real AI prediction, and understand what it means
- We built both a browser-only version and a server-side version, so the project works in multiple deployment scenarios
What we learned:
- How to run machine learning models in the browser using TensorFlow.js
- How IndexedDB works for persisting data between browser sessions
- How to build and deploy a Flask API on Render
- How transfer learning works in practice and why removing certain layers like BatchNormalization matters for small datasets
- A lot about coral disease and what is actually causing reef decline globally
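The BatchNormalization lesson above comes down to arithmetic: the layer normalizes each batch by its own variance, and with very small training sets the per-batch variance can be exactly zero, so without a large enough epsilon the division produces NaN. A simplified normalization step in plain JavaScript (real BatchNormalization layers also learn scale and shift parameters, omitted here) shows the failure mode:

```javascript
// Simplified batch normalization over a 1-D batch of activations.
// Only the normalize-by-batch-statistics step is shown.
function batchNorm(batch, epsilon = 0) {
  const mean = batch.reduce((s, v) => s + v, 0) / batch.length;
  const variance =
    batch.reduce((s, v) => s + (v - mean) ** 2, 0) / batch.length;
  return batch.map(v => (v - mean) / Math.sqrt(variance + epsilon));
}

// A tiny batch where every activation is identical: variance is exactly 0.
console.log(batchNorm([0.5, 0.5]));        // [NaN, NaN] — the 0 / 0 case
console.log(batchNorm([0.5, 0.5], 1e-3));  // [0, 0] — epsilon keeps it finite
```

Once one NaN enters the forward pass, it propagates through the loss and poisons every subsequent gradient update, which is why the symptom was NaN loss rather than just a bad prediction.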
What's next for the Coral Health Detector Website:
- Adding a map where users can tag their location with a detection, building a crowdsourced database of reef health over time
- Supporting video upload directly in the browser so divers can analyze footage without needing the Python version
- Improving the training tool so developers can import a pre-trained model from the Python trainer and refine it in the browser
- Reaching out to marine conservation organizations to see if this could be used as an actual field tool