What inspired me?

This project was inspired by a simple but powerful question: what if we could predict diseases at our fingertips? No long queues. No unaffordable bills. No unnecessary suffering just because someone didn't have access to a doctor in time. In a world where we can order food, book flights, and learn skills from our phones, why can't we access early health insights the same way? CareScan was built with the hope of bridging that gap, so that anyone, from a remote village to a crowded city, can use their phone to detect possible health issues and seek help before it's too late. Because health is not a luxury. It's a basic right, and technology should help protect it.

What CareScan does

CareScan is a multi-disease medical diagnostic app powered by machine learning. It can predict diseases from both image inputs and text inputs:

Image based:

  1. 31 types of skin disease (via image classification)

  2. Pneumonia from Chest X-rays

  3. Lung cancer (via CT scan images)

Text based:

  4. Diabetes based on medical parameters

  5. Hypertension (high blood pressure risk)

  6. Chronic kidney disease

All predictions are made within seconds using trained models hosted via APIs — with a simple, friendly mobile UI.
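As a sketch of how such API-backed predictions typically work, the app can encode the input as JSON, POST it to the hosted model, and read back a label/probability map. The endpoint URL, payload shape, and response schema below are illustrative assumptions, not CareScan's actual API:

```python
import base64

# Hypothetical Space URL -- the write-up does not publish the real endpoint.
API_URL = "https://example-carescan.hf.space/predict/skin"

def build_image_payload(image_bytes: bytes) -> dict:
    """Wrap raw image bytes as base64 JSON, a common format for REST inference APIs."""
    return {"image": base64.b64encode(image_bytes).decode("ascii")}

def parse_prediction(response_json: dict) -> tuple[str, float]:
    """Pick the top label from an assumed {'predictions': {label: probability}} body."""
    scores = response_json["predictions"]
    top = max(scores, key=scores.get)
    return top, scores[top]
```

In the mobile app, the payload would be POSTed to `API_URL` (e.g. with `fetch` in React Native) and the parsed top label rendered in the UI.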

How CareScan was built

CareScan was built using these technologies and frameworks:

Frontend: React Native (Expo)

Machine Learning Models: Python (scikit-learn, TensorFlow, PyTorch)

API Deployment: HuggingFace Spaces

Design: Canva & Figma

Testing & Debugging: Android Emulator & Real Device Testing

Challenges I ran into:

Designing a modular architecture that could support multiple disease prediction flows without compromising performance or maintainability.

Ensuring a smooth and consistent user experience across various screen sizes, navigation states, and platforms.

Creating a unified data pipeline to handle diverse input formats required by different machine learning models.

Managing complex form structures with proper input validation while keeping the user interface clean and intuitive.

Coordinating the testing and debugging process for six independent machine learning models within a single mobile application.
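One way to picture the unified-pipeline challenge above is a dispatch table that routes each disease's raw form submission to the preprocessor its model expects. The disease names, field names, and feature orders here are illustrative placeholders, not CareScan's real schema:

```python
# A minimal sketch of unifying diverse model inputs behind one interface.

def preprocess_diabetes(form: dict) -> list[float]:
    # Text-based models consume an ordered numeric feature vector.
    fields = ["glucose", "bmi", "age", "blood_pressure"]
    return [float(form[f]) for f in fields]

def preprocess_kidney(form: dict) -> list[float]:
    fields = ["creatinine", "urea", "hemoglobin"]
    return [float(form[f]) for f in fields]

PREPROCESSORS = {
    "diabetes": preprocess_diabetes,
    "kidney": preprocess_kidney,
}

def prepare_input(disease: str, form: dict) -> list[float]:
    """Route a raw form submission to the preprocessor its model expects."""
    try:
        return PREPROCESSORS[disease](form)
    except KeyError as exc:
        raise ValueError(f"Unsupported disease or missing field: {exc}") from exc
```

Adding a new disease then means registering one more preprocessor, which keeps the prediction flows modular.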

Accomplishments that I am proud of:

- Completed a full-stack, AI-integrated mobile app solo
- Integrated six medical models, including complex ones like skin disease classification
- Built something that could be used by real people, not just for a demo
- Designed the app with both care and clarity, keeping the user in mind

What I learned:

Gained experience integrating multiple machine learning models into a single React Native application using API-based communication.

Understood the importance of consistent data formatting and preprocessing when sending inputs to external inference models.

Learned how to manage complex form state efficiently in React Native, especially when dealing with varied input types like pickers and numeric fields.

Improved skills in debugging model responses and handling edge cases in prediction logic.

Learned to manage project modularity by isolating feature-specific screens while maintaining shared state and navigation flow.

Strengthened understanding of mobile-first UI/UX principles to ensure a clean and responsive experience across devices.

Gained hands-on experience with Expo, Firebase Authentication, and RESTful API integration in a production-style app environment.
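The input-validation lessons above can be sketched as range checks on each numeric field before anything is sent to a model. The fields and bounds are plausible placeholders for the sketch, not CareScan's actual validation rules:

```python
# Illustrative clinical bounds, chosen for the sketch only.
FIELD_RANGES = {
    "glucose": (40.0, 500.0),  # mg/dL
    "bmi": (10.0, 70.0),
    "age": (0.0, 120.0),
}

def validate_form(form: dict) -> dict:
    """Return a field -> error-message map; an empty map means the form is valid."""
    errors = {}
    for field, (lo, hi) in FIELD_RANGES.items():
        raw = form.get(field, "")
        try:
            value = float(raw)
        except ValueError:
            errors[field] = "must be a number"
            continue
        if not lo <= value <= hi:
            errors[field] = f"must be between {lo} and {hi}"
    return errors
```

Returning per-field messages (rather than raising on the first bad value) lets the UI highlight every invalid input at once, which keeps the forms clean without hiding errors.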

What's next for CareScan:

Improving model accuracy and adding support for more diseases (e.g., breast cancer)

Adding report saving, sharing with a doctor, and multi-language support

Training personalized models based on user history (privacy-first)

Launching on Play Store and submitting to real-world health-tech incubators
