Inspiration
In emergency situations, it's easy to panic. We've all been there, especially as students. Whether it's seeing a friend get hurt or not knowing how serious a cut or burn is, we realized how helpless it can feel when you don't have immediate access to medical advice, and searching the internet only buries you in information with no way to know what applies to you. That personal experience inspired us to build MediLens, a smart, AI-powered web app designed to help people like us make quick, informed decisions in stressful moments. By simply uploading a photo or describing a symptom, users can get first aid guidance, understand the possible condition, and even find nearby hospitals, all in one place. We wanted to create something that could truly make a difference when it matters most.
What it does
MediLens is a web application we built to help people quickly understand and respond to medical situations using AI. The idea is simple: users can either upload or take a photo of an injury, or just type out the symptoms they're experiencing. Our app uses the Gemini API to analyze that input and return useful, easy-to-understand information about what might be going on, how to treat it with basic first aid, and where to get help nearby. We also integrated live location support so the app can show nearby hospitals on an interactive map, and our team added the ability to capture images directly from your webcam, making the process even faster in emergencies. MediLens is designed to be accessible and helpful for anyone: no medical knowledge or app installation needed, just open the site and get support.
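To give a feel for the symptom-text flow, here is a minimal sketch of how a backend might frame the user's input before sending it to Gemini. This is illustrative, not our exact code: the function name and prompt wording are assumptions, and the commented-out client calls use the `google-generativeai` Python SDK.

```python
def build_first_aid_prompt(symptoms: str) -> str:
    """Wrap the user's symptom description in instructions asking Gemini
    for a likely condition, first aid steps, and a severity note."""
    return (
        "You are a first-aid assistant. A user reports: "
        f'"{symptoms.strip()}"\n'
        "Reply with: (1) the most likely condition, "
        "(2) basic first aid steps, and "
        "(3) whether they should seek professional care."
    )

# The prompt would then go to Gemini via the google-generativeai client, e.g.:
# import os
# import google.generativeai as genai
# genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # key kept in .env
# model = genai.GenerativeModel("gemini-1.5-flash")
# reply = model.generate_content(build_first_aid_prompt(user_text)).text
```

The same pattern extends to images: the SDK accepts image parts alongside text in `generate_content`.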
How we built it
Backend + Frontend: We built MediLens as a full-stack web application, combining tools we learned along the way, from Flask in Python on the backend to API and map integration on the frontend (a lot of hours of work, let me tell you). On the frontend, we used React with HTML/CSS to create a clean, responsive interface for the analysis flow. We used Leaflet.js to display nearby hospitals on an interactive map based on the user's location, and we integrated react-webcam to capture images directly from the webcam, which made the experience feel more real-time. On the backend, we set up a Flask server in Python that handles all the API requests and the communication with the Gemini API for AI-based image and text analysis. We also used Geopy and geocoding APIs to calculate distances and fetch the names and addresses of nearby hospitals. Everything is tied together through environment variables managed in .env files for security, and we hosted everything locally during development. On top of this, we generated Google Maps links for each hospital from its latitude and longitude within a nearby radius.
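The distance and link pieces above are small enough to sketch. Geopy's geodesic distance is what we lean on in the app; a stdlib approximation of the same quantity is the haversine formula, and the Maps link uses Google's documented Maps URLs format. Function names here are illustrative, not our actual code:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in km between two points: the same quantity
    Geopy computes when ranking hospitals by how close they are."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def maps_link(lat: float, lon: float) -> str:
    """Google Maps search link for one hospital, built from its coordinates."""
    return f"https://www.google.com/maps/search/?api=1&query={lat},{lon}"
```

One degree of longitude at the equator comes out to roughly 111 km, which is a quick sanity check for the formula.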
Challenges we ran into
The Google Maps API integration is where we got stuck for hours, even after repeated attempts to wire it into our web application. Initially, we thought it would be straightforward to show hospitals and clinics using Google Maps, but we quickly realized it added a lot of complexity on the server side and would significantly slow down our web app. We spent hours debugging the integration and trying different configurations, but in the end it just wasn't working. Eventually, we decided to pivot to a lighter, open-source alternative: Leaflet.js with OpenStreetMap, which turned out to be a blessing :). It gave us much more control, worked smoothly on the frontend, and didn't overload the backend at all. Other challenges included getting the Gemini API to consistently return helpful responses for both text and images; handling images from the webcam was another hurdle.
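Once the map moved to OpenStreetMap data, one way a backend can look up hospitals near the user is the public Overpass API. This is a sketch under assumptions, not our exact code; the endpoint and the `amenity=hospital` tag are Overpass/OSM conventions:

```python
import urllib.parse

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public Overpass endpoint

def overpass_hospital_query(lat: float, lon: float, radius_m: int = 5000) -> str:
    """Build an Overpass QL query for hospital nodes within radius_m metres
    of the user's location, returned as JSON for the frontend map."""
    return (
        "[out:json];"
        f'node["amenity"="hospital"](around:{radius_m},{lat},{lon});'
        "out;"
    )

# The query string would be POSTed as form data, e.g. with urllib.request:
# data = urllib.parse.urlencode({"data": overpass_hospital_query(40.7, -74.0)}).encode()
# with urllib.request.urlopen(OVERPASS_URL, data=data) as resp:
#     hospitals = resp.read()  # JSON with name, lat, lon per node
```

Each returned node's coordinates can feed straight into Leaflet markers on the frontend.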
Accomplishments that we're proud of
We are incredibly proud of integrating the web sockets between our frontend and backend, of getting Flask and the Gemini API working together, and of making good use of open-source tools like Leaflet.js.
What's next for MediLens
While building MediLens during this hackathon, we discovered so much potential to take the application even further. One of the first things we plan to do next is improve the accuracy of injury detection by experimenting with more image-focused AI models and training them on real medical datasets. We also want to show locations on the map tailored to the user's specific injury, so the experience can be more personalized.