Inspiration
Glaucoma causes irreversible blindness, yet early diagnosis is constrained by limited access to specialists and a lack of explainable diagnostic tools. Optim was built to provide an accessible, interactive, and explainable AI solution for early glaucoma detection.
What it does
Optim is an AI-powered ophthalmology assistant that:
Detects glaucoma from fundus and OCT images
Performs optic disc and cup segmentation
Enables Visual Question Answering (VQA) for explainable diagnosis
Delivers real-time results via a cloud-deployed mobile app
How we built it
We trained CLIP-inspired multimodal models from scratch on ophthalmic datasets and deployed them as Dockerized cloud APIs. A Flutter–Firebase application handles secure image upload, real-time inference, and interactive explainable-AI queries.
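The core idea of a CLIP-style model is to train image and text encoders so that embeddings of matching image–caption pairs are pulled together while mismatched pairs are pushed apart, via a symmetric contrastive loss over the batch. The sketch below (plain NumPy, not the project's actual training code) illustrates that loss on already-computed embeddings; the encoders, temperature value, and batch shapes are illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: (batch, dim) arrays where row i of each is a matching pair.
    """
    # L2-normalize so the dot product becomes cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (batch, batch) similarity matrix
    labels = np.arange(len(logits))              # true pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)     # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average image->text and text->image directions, as in CLIP
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing this loss is what aligns the image and text representation spaces, which is also the property the VQA component relies on.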
Challenges we ran into
Training multimodal models with limited medical data
Aligning image–text representations for reliable VQA
Achieving low-latency real-time inference
Accomplishments we're proud of
~94% glaucoma detection accuracy
0.90 F1-score on optic disc and cup segmentation
Fully deployed explainable AI system
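For binary segmentation masks, the F1-score reported above is equivalent to the Dice coefficient: twice the overlap between predicted and ground-truth masks divided by their total size. A minimal sketch of how such a score can be computed (an illustrative metric function, not the project's evaluation code):

```python
import numpy as np

def segmentation_f1(pred, target, eps=1e-7):
    """F1 (Dice) score between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # pixels marked positive in both
    fp = np.logical_and(pred, ~target).sum()   # predicted positive, actually negative
    fn = np.logical_and(~pred, target).sum()   # missed positive pixels
    # eps guards against division by zero when both masks are empty
    return 2 * tp / (2 * tp + fp + fn + eps)
```

A perfect prediction scores 1.0; a mask with no overlap scores 0.0.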
What we learned
The importance of explainable AI in healthcare
Real-world challenges of deploying multimodal AI systems
End-to-end AI product development
What's next for Optim
Multi-disease eye diagnosis
Model optimization for mobile inference
Enhanced explainability and clinical integration