Project Overview: Skin cancer is one of the most common and dangerous types of cancer, and many cases go undetected because people cannot tell a cancerous mole from a benign one. Early detection of this fast-spreading disease is crucial, and that is where the Noma Skin Cancer Detection System comes in. Noma is a mobile app (also implemented for desktop) designed to help detect skin cancer and then help the user identify its type. We also aim to help the user keep track of their scans, along with other information that can later be passed on to a doctor to make final conclusions. Combining a full-stack application with a machine learning model allows the app to quickly provide an assessment and information about potential skin conditions. Using a CNN built on a pretrained image-classification model, our app labels lesions as malignant or benign, giving the user the chance to reach out for professional help if needed. The machine learning model reaches a test accuracy of 97%.
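The writeup doesn't include the model code, but the transfer-learning setup it describes (a pretrained backbone such as DenseNet feeding a binary malignant/benign head) can be sketched in TensorFlow/Keras. This is a minimal illustration, not Noma's actual architecture; the pooling layer, dropout rate, and optimizer are assumptions.

```python
import tensorflow as tf

def build_model(input_shape=(224, 224, 3), weights="imagenet"):
    # DenseNet121 backbone; pass weights=None to skip downloading
    # the pretrained ImageNet weights.
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights=weights, input_shape=input_shape
    )
    base.trainable = False  # freeze pretrained features for transfer learning

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        # Single sigmoid unit: probability that the lesion is malignant.
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the backbone means only the small classification head is trained, which is what makes a high test accuracy reachable on a modest lesion dataset.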

Project Objective: By uploading images of skin lesions, users get an instant analysis, which can lead to earlier treatment. Ideally, the user takes a scan and is then prompted for information about the lesion's location on the body. That information is saved along with the date and time, building an extensive history of the user's skin. The mobile application uses Supabase to store user data securely.
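The scan-history record described above (label, location, date and time) could be assembled and saved through the supabase-py client roughly as follows. The table name, column names, and credentials are illustrative assumptions, not Noma's actual schema.

```python
from datetime import datetime, timezone

def make_scan_record(user_id: str, label: str, confidence: float,
                     body_location: str) -> dict:
    """Assemble one scan-history row; field names are hypothetical."""
    return {
        "user_id": user_id,
        "label": label,                  # "malignant" or "benign"
        "confidence": confidence,        # model's sigmoid output
        "body_location": body_location,  # e.g. "left forearm"
        "scanned_at": datetime.now(timezone.utc).isoformat(),
    }

def save_scan(record: dict):
    # supabase-py client (pip install supabase); URL and key are placeholders.
    from supabase import create_client
    client = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY")
    return client.table("scans").insert(record).execute()
```

Timestamping each row server-side-agnostically (UTC ISO 8601) keeps the history sortable regardless of the user's timezone.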

Challenges: One challenge we faced was that, once the machine learning model was built, it took a long time to create an endpoint connecting the model to the camera app. Connecting a Supabase database to the app also took a long time, especially saving the photos, the model's output, and other important information to our database for secure storage.
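The endpoint that connects the model to the camera app could look something like the Flask sketch below: the app POSTs a photo, the server decodes and normalizes it, and the model's score comes back as JSON. Flask, the route name, and the response fields are assumptions for illustration; the real service would load the trained model instead of the `None` placeholder.

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = None  # e.g. tf.keras.models.load_model(...) in the real service

def preprocess(file_bytes: bytes, size=(224, 224)) -> np.ndarray:
    """Decode an uploaded image into a normalized (1, H, W, 3) batch."""
    img = Image.open(io.BytesIO(file_bytes)).convert("RGB").resize(size)
    return np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)

@app.route("/predict", methods=["POST"])
def predict():
    batch = preprocess(request.files["image"].read())
    # Placeholder score when no model is loaded, so the route runs standalone.
    score = 0.0 if model is None else float(model.predict(batch)[0][0])
    return jsonify({
        "malignant_probability": score,
        "label": "malignant" if score >= 0.5 else "benign",
    })
```

The React Native side would then send the camera capture as multipart form data and show the returned label to the user.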

What We Learned: Through this project, we learned how to build a mobile app using React Native and TypeScript, with Expo making the development process smoother. We also worked with Supabase to handle the backend, using its database, authentication, and real-time features. On the machine learning side, we gained experience with TensorFlow and pretrained models like DenseNet and MobileNet for image classification. We also learned how CNNs work for image recognition and how to integrate them into an app. Most importantly, this project taught us how technology can help with early skin cancer detection, making it easier for people to get quick assessments through a mobile app.

Built With

  • Authentication
  • Convolutional neural networks (CNNs)
  • Expo.io
  • Pretrained image classification models (DenseNet, MobileNet)
  • React Native
  • SQL
  • Supabase
  • TensorFlow (open-source machine learning framework)