Inspiration
In rural and low-resource areas, doctors often work with limited tools, yet face the enormous challenge of identifying cancer early. Many patients report symptoms only after it's too late — not because they’re careless, but because basic screening tools are out of reach. Expensive equipment, lack of specialists, and unreliable internet make early detection a luxury. We asked: What if we could give every rural doctor a simple, offline tool to flag cancer risks — instantly and accurately? CanScan Lite was born from this question, to make pre-screening accessible, fast, and entirely offline.
What it does
CanScan Lite is an offline mobile pre-screening tool that assists doctors in detecting possible cancer risks early.
It allows doctors to capture images of skin lesions, oral cavities, or chest X-rays using a smartphone.
Using OpenCV for preprocessing and a lightweight TensorFlow Lite or PyTorch Mobile model, the app analyzes the image and outputs a Low, Medium, or High Risk result.
All processing is done entirely offline — no cloud, no internet — making it ideal for rural or field use.
It also suggests whether a referral to a specialist is recommended.
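The risk labels and referral suggestion described above amount to bucketing a model score. A minimal sketch of that mapping, assuming a single malignancy probability from the classifier; the function name and thresholds here are illustrative, not CanScan Lite's actual calibration:

```python
def bucket_risk(malignancy_prob: float) -> tuple[str, bool]:
    """Map a model probability to a risk label and a referral flag.
    Thresholds are hypothetical placeholders for illustration."""
    if malignancy_prob >= 0.7:
        return "High", True      # recommend specialist referral
    if malignancy_prob >= 0.3:
        return "Medium", True    # borderline: err on the side of referral
    return "Low", False

label, refer = bucket_risk(0.82)
# label == "High", refer is True
```

In a screening tool the thresholds would be tuned on a validation set to favor sensitivity, since a false "Low" is costlier than an unnecessary referral.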
How we built it
Used OpenCV for image enhancement, including denoising, contrast adjustment, and segmentation.
Trained and optimized a CNN model using public datasets (e.g., ISIC for skin, Kaggle Oral Cancer, NIH Chest X-rays).
Converted the trained model to .tflite format for offline inference.
Developed the app using Android Studio (native Android), with a React Native cross-platform build in testing.
Integrated camera input → preprocessing → model prediction → risk display in a smooth mobile flow.
Challenges we ran into
Finding reliable, well-labeled, and diverse datasets for medical use.
Balancing model accuracy vs. size for on-device deployment without internet.
Ensuring image quality from phone cameras in uncontrolled lighting conditions.
Real-time processing without lag, even on low-end devices.
Ensuring the UI remains intuitive and non-technical for rural healthcare workers.
Accomplishments that we're proud of
Got a working offline prototype that can classify skin or oral lesion images with decent accuracy.
Managed to keep the model lightweight and fast, even without GPU acceleration.
Built an app that feels usable in real-world rural clinic scenarios — where internet, electricity, or equipment is not guaranteed.
Successfully completed a live demo where an image was captured and analyzed in under 5 seconds.
What we learned
Optimizing ML models for mobile is more than just compressing — it involves quantization, pruning, and lots of tuning.
UX in healthcare tools needs to be extra careful — clear, calm, and non-alarming.
OpenCV is powerful but needs fine-tuning when working with real-world medical images.
Designing for the edge means respecting the constraints — and building smarter within them.
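The quantization lesson above is worth making concrete. Post-training quantization replaces float32 weights with uint8 values plus a per-tensor scale and zero point; TFLite's converter does this automatically, but the underlying affine mapping is simple. A minimal numpy sketch (illustrative, not the converter's exact algorithm):

```python
import numpy as np

def quantize(w: np.ndarray) -> tuple[np.ndarray, float, int]:
    """Affine-quantize a float32 tensor to uint8 (per-tensor, asymmetric),
    the same idea TFLite post-training quantization applies per layer."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0   # guard against constant tensors
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(64).astype(np.float32)
q, s, z = quantize(w)
err = float(np.abs(dequantize(q, s, z) - w).max())
# 4x smaller storage; reconstruction error on the order of the scale
```

The 4x size reduction is what makes a CNN fit comfortably on a low-end phone, at the cost of a small, bounded reconstruction error per weight.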
What's next for CanScan Lite
Add multi-cancer support: enable selection between skin, oral, and chest X-ray screening.
Improve model accuracy by training with augmented and more diverse data, especially from underrepresented regions.
Add multilingual support and offline audio prompts for accessibility.
Partner with NGOs or rural clinics for field testing and feedback from real-world use.
Explore integration with government health records to make referrals seamless.
Build for iOS and KaiOS to reach a wider user base.