The inspiration for AI4GenetX came as I started to get deeper into machine learning and TensorFlow. I had seen the amazing applications of machine learning and wanted to focus a project on improving the quality of life of people living with different diseases. Through my research, I found that rare craniofacial genetic disorders are harder for doctors to identify, and I wanted to try using machine learning to improve detection. That is when AI4GenetX was born!
What it does
AI4GenetX is an iOS mobile application built around a trained TensorFlow 2.0 CNN model. The app takes a facial image (either from the user's camera roll or a live photo from their camera) and runs it through the model. For this project, I focused on two rare craniofacial genetic disorders, Down Syndrome and Williams Syndrome; together with a normal class, that gives three classes in total. The CNN model outputs a predicted class and a confidence level (probability).
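To illustrate the last step, here is a minimal sketch of how a CNN's raw outputs can be turned into a predicted class and a confidence level. The class order and logit values are made up for the example; they are not taken from the actual model.

```python
import numpy as np

# The three classes described above; the order here is an assumption.
CLASSES = ["Down Syndrome", "Williams Syndrome", "Normal"]

def classify(logits):
    """Softmax over raw CNN outputs -> (predicted class, confidence)."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return CLASSES[idx], float(probs[idx])

# Example with made-up logits:
label, confidence = classify(np.array([2.0, 0.5, 0.1]))
# -> ("Down Syndrome", ~0.73)
```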
How I built it
To build it, I first had to create my own image dataset, since no preexisting dataset of facial images of rare craniofacial genetic disorders is available. To make the dataset, I wrote a Python script that took a list of input YouTube video files (which I compiled from YouTube under YouTube's Fair Use Policy), scanned each video frame for a face, and saved each detected face as a photo. I then went through the images and sorted them into the three classes (Down Syndrome, Williams Syndrome, and Normal). Using the dataset, I preprocessed the images (resized, normalized, etc.) and used them to train an adapted VGG16 CNN model in TensorFlow 2.0. After training, I tested the model and achieved an accuracy of 92%.
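An "adapted VGG16" model of this kind is commonly built by reusing the pretrained convolutional base and replacing the top with a new three-way softmax head. The sketch below follows that recipe in TensorFlow 2.0 / Keras; the head layer sizes and hyperparameters are illustrative assumptions, not the project's actual architecture.

```python
import tensorflow as tf

def build_model(num_classes=3, input_shape=(224, 224, 3), weights="imagenet"):
    # Pretrained VGG16 convolutional base, without its original classifier head.
    base = tf.keras.applications.VGG16(
        weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the pretrained features (transfer learning)

    # New classification head: illustrative sizes, not the project's exact ones.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the base lets the small dataset train only the new head, which is the usual way to get good accuracy from roughly 10,000 training images.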
Next, I built the iOS mobile application to deploy the model. I used Swift 5 and Objective-C to develop the AI4GenetX app, and converted the TensorFlow .h5 model file to an Apple Core ML model to integrate it into the app.
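The conversion step might look like the sketch below, which uses Apple's coremltools package and its modern unified converter. The file names are placeholders, and the exact API varies by coremltools version, so treat this as an assumption about the workflow rather than the project's actual script.

```python
def convert_to_coreml(h5_path, mlmodel_path):
    """Load a Keras .h5 model and save it as a Core ML model."""
    # Imports kept inside the function so the sketch can be read (and the
    # function defined) without these heavyweight dependencies installed.
    import tensorflow as tf
    import coremltools as ct

    model = tf.keras.models.load_model(h5_path)
    mlmodel = ct.convert(model)  # unified converter handles TF 2 Keras models
    mlmodel.save(mlmodel_path)

# Placeholder paths for illustration only:
# convert_to_coreml("ai4genetx.h5", "AI4GenetX.mlmodel")
```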
Challenges I ran into
A challenge I ran into was getting a dataset of facial images of rare craniofacial genetic disorders. I first searched for a preexisting dataset, but as my project is quite unique, none were available for my application. I then considered downloading images from Google Images to create my own dataset, but it would have taken too long to download enough images for a respectable dataset. I finally settled on extracting faces from YouTube videos (under YouTube's Fair Use Policy). The process was automated through my Python script and only required me to check the images and sort them into the three classes. Even so, the whole process took me a month (finding all the videos, writing the Python script, running it on every video, etc.).
Accomplishments that I'm proud of
One specific accomplishment that I am proud of is my trained TensorFlow 2.0 model. When I started the project, I wasn't sure how well the model would perform, as facial images have tons of different features and I only had 12,600 images in my dataset (only ~10,000 were used for training; the rest were used for testing). I trained several different models and tried different machine learning approaches, including VGG16, VGG19, and ResNet architectures with transfer learning. When my adapted VGG16 model achieved an accuracy of 92%, I was quite proud.
What I learned
Building AI4GenetX taught me a lot. It was the first project where I used TensorFlow 2.0 after its official release; I had previously used TensorFlow 1.3 and the TensorFlow 2.0 beta, but building a full project with TensorFlow 2.0 really helped me learn it and see how it differs from TensorFlow 1.3. AI4GenetX was also my first iOS mobile application, so everything I built for the app was brand new to me, and the app development was a huge learning curve. Overall, I enjoyed working with TensorFlow 2.0 and came away having learned a great deal.
What's next for AI4GenetX
Next, I am planning on cleaning up the app, making it more user friendly, and then submitting it to the Apple App Store so it can be released to iOS devices around the world.