Inspiration
Every year, millions of people face a wound and don't know whether to treat it at home, go to urgent care, or call 911. At the same time, emergency rooms are overwhelmed: roughly 1 in 5 ER visits are for non-emergency wounds that could have been triaged faster and more efficiently. I asked: What if AI could look at a wound and instantly tell you exactly what to do? That question became band.ai.d.
What it does
band.ai.d is an AI-powered wound triage system with two roles — Patient and Doctor. Patients upload a photo of their wound and instantly receive a severity classification — Home Care, Requires Stitches, or Requires Surgery — along with a personalized, easy-to-understand treatment plan generated by GPT-4o Vision. If the wound is critical, the app immediately tells them to call 911. Doctors get a full clinical interface: detailed treatment protocols, ICD-10 code suggestions, pre-surgery preparation plans, and a real-time priority queue that automatically sorts incoming cases from highest to lowest severity. Doctors can also correct misclassifications, which are saved to retrain the model over time — making band.ai.d smarter with every use.
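As an illustration of how these role-specific plans can be generated, here is a minimal sketch using the OpenAI Python client. The prompt wording, helper name, and severity string are simplified placeholders for illustration rather than our exact code.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Role-specific prompts: plain-language guidance for patients, clinical detail for doctors.
PROMPTS = {
    "patient": "Explain, in plain language, how to care for the wound in this photo. "
               "If it looks life-threatening, say clearly to call 911 first.",
    "doctor": "Provide a clinical assessment of the wound in this photo: suggested "
              "treatment protocol, candidate ICD-10 codes, and pre-surgery preparation if relevant.",
}

def generate_plan(image_path: str, role: str, severity: str) -> str:
    """Ask GPT-4o Vision for a role-specific plan, conditioned on the CNN's severity class."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Our classifier rated this wound as '{severity}'. {PROMPTS[role]}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: plan = generate_plan("wound.jpg", role="patient", severity="Requires Stitches")
```

In a setup like this, the Patient view and the Doctor view make the same call; only the prompt changes, which is what keeps patient output plain-language and doctor output clinical.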
How we built it
band.ai.d is built on two AI layers working together:
- A custom EfficientNet-B0 convolutional neural network, fine-tuned with PyTorch on Google Colab (T4 GPU). We merged three datasets (the AZH Wound Care Center dataset, a Kaggle wound dataset, and a Roboflow wound classification dataset) for a total of 2,080 labeled training images across 3 severity classes. We used transfer learning, data augmentation, weighted sampling, and full fine-tuning over 50 epochs to reach 85% validation accuracy; a minimal training sketch appears at the end of this section.
- OpenAI GPT-4o Vision API, which receives the uploaded image and generates role-specific treatment plans: simplified care instructions for patients and detailed clinical protocols for doctors.
The frontend is built with Streamlit, with a dual-role interface, a real-time priority queue, and an active learning correction system. The app runs fully locally with a single command.
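To make the training pipeline concrete, here is a minimal sketch of the fine-tuning setup in PyTorch. The dataset path, augmentation choices, batch size, and learning rate are illustrative assumptions; the 3 severity classes, weighted sampling, ImageNet transfer learning, full fine-tuning, and 50 epochs follow the setup described above.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, models, transforms

# Augmentation for the merged wound dataset.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed folder layout: data/train/<severity_class>/*.jpg
train_ds = datasets.ImageFolder("data/train", transform=train_tfms)

# Weighted sampling to counter class imbalance between severity levels.
class_counts = torch.bincount(torch.tensor(train_ds.targets))
sample_weights = (1.0 / class_counts.float())[train_ds.targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
train_loader = DataLoader(train_ds, batch_size=32, sampler=sampler, num_workers=2)

# Transfer learning: start from ImageNet weights, replace the classifier head,
# and keep every backbone layer trainable (full fine-tuning).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)  # 3 severity classes
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(50):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Full fine-tuning here simply means no layers are frozen: the whole backbone adapts to wound images rather than only the new classifier head.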
Challenges we ran into
Our biggest challenge was data. Medical imaging datasets are scarce, inconsistently labeled, and often not representative of real-world wounds. Our initial model achieved only 67.5% accuracy and misclassified nearly every real-world test image. We solved this by merging three separate datasets, remapping the 6 original wound classes into 3 clinically meaningful severity levels, and applying full fine-tuning instead of partial training, ultimately reaching 85% validation accuracy. We also faced challenges with class imbalance between severity levels, JSON formatting errors in the Colab notebook, and integrating the GPT-4o Vision API with role-specific prompts that produced genuinely useful clinical output rather than generic responses.
Accomplishments that we're proud of
We are proud of building a complete, working medical AI prototype in a single hackathon, from raw datasets to a polished dual-role web application. Specifically:
- 85% validation accuracy on a 3-class medical imaging problem
- The model achieved 96% accuracy on Home Care cases and never sent a surgical emergency home with a Home Care classification
- A real-time priority queue that mirrors actual clinical triage workflow
- An active learning system where doctor corrections automatically improve the model over time
- A relationship with Arthrex as our sponsor, opening a real path toward clinical-grade integration with their Synergy imaging system
What we learned
We learned that data quality matters more than model complexity. Switching from partial fine-tuning to full fine-tuning improved our accuracy by nearly 18 percentage points, but only after we had clean, well-labeled, diverse training data. We also learned the importance of designing for real clinical workflows: the priority queue and dual-role system came from thinking about how doctors and patients actually use triage tools, not just what was technically impressive. Finally, we learned that responsible AI in healthcare means acknowledging limitations honestly, building in human oversight through the correction system, and always keeping the doctor as the final decision maker.
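As a small illustration of that oversight loop, each doctor override can be appended to a corrections log that seeds the next fine-tuning run. The file path, column names, and helper name below are assumptions for illustration, not our exact implementation.

```python
import csv
import datetime
import pathlib

CORRECTIONS_LOG = pathlib.Path("data/corrections.csv")  # assumed location

def record_correction(image_path: str, predicted: str, corrected: str, doctor_id: str) -> None:
    """Append a doctor's override as a new labeled example for the next fine-tuning run."""
    new_file = not CORRECTIONS_LOG.exists()
    CORRECTIONS_LOG.parent.mkdir(parents=True, exist_ok=True)
    with CORRECTIONS_LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "image_path", "predicted", "corrected", "doctor_id"])
        writer.writerow([datetime.datetime.now().isoformat(),
                         image_path, predicted, corrected, doctor_id])
```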
What's next for band.ai.d
band.ai.d is just getting started.
Our immediate next steps are:
- Mobile app with direct camera capture so patients can photograph wounds instantly without uploading files
- Integration with the Arthrex Synergy imaging system, which can measure wound depth and dimensions — enabling measurement-aware triage instead of purely visual classification
- Expanding the training dataset with clinical partnership data to push accuracy above 90%
- Exploring AI wearable integration: Meta glasses, GoPros, and smartwatches for hands-free field triage
Long term, we believe band.ai.d has a path toward FDA exploratory clearance as an AI-assisted triage support tool, not replacing doctors, but helping them work faster and more effectively.
Built With
- colab
- databases
- gpt-4
- pytorch
- scikit-learn

