The worst thing about getting light illnesses is the lengthy queue at your doctor's office and the potential diseases you could catch from other visitors. This, however, can be avoided! Inspired by the already broad functionality of Amazon's Alexa, we set our minds on simplifying the doctor-visit experience. We could only dream of replacing doctor visits with technology that allows the same procedure from home, but towards the end of the project, our dream became a reality.

What it does

Our Diagnose application on a home Amazon device such as the Amazon Echo or Amazon Fire is all you need to contact your doctor with consultation requests or any other questions you might have. For doctors, life has never been easier: a dedicated application for doctors connected to our Diagnose network allows fast and easy review of all patient requests, questions, and images.

Our application supports:

  • Full hands-free control of the application using Amazon's Alexa
  • Medical history of patients and their previous diagnoses
  • Two-way communication between the patient and the doctor
  • Easily usable scheduling tool
  • Live consultation requests for urgent problems
  • Accessible diagnoses for patients' potential problems
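As a sketch of the two-way communication feature, the exchange between a patient and a doctor can be modeled as a pair of message queues, one per direction. This is an illustrative in-memory model; the class and method names are hypothetical, not the project's actual API, and the real application routes messages through the server.

```python
from collections import defaultdict, deque

class ConsultationChannel:
    """Illustrative two-way channel between a patient and a doctor.

    Hypothetical names; in the real application the server mediates
    these messages rather than an in-memory object.
    """

    def __init__(self):
        # One FIFO message queue per (sender, recipient) direction.
        self._queues = defaultdict(deque)

    def send(self, sender, recipient, text):
        self._queues[(sender, recipient)].append(text)

    def receive(self, recipient, sender):
        # Pop the oldest unread message, or None if there is nothing new.
        queue = self._queues[(sender, recipient)]
        return queue.popleft() if queue else None

channel = ConsultationChannel()
channel.send("patient42", "dr_smith", "My left eye has been red since Monday.")
channel.send("dr_smith", "patient42", "Please upload a photo of the eye.")

print(channel.receive("dr_smith", "patient42"))  # doctor reads the patient's message
print(channel.receive("patient42", "dr_smith"))  # patient reads the doctor's reply
```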

But most importantly: the eye health analysis. Using convolutional neural networks and other machine learning algorithms, we achieved close-to-perfect eye feature detection, tested on large facial image datasets provided to us by the University of Montreal. With this technology we can give the doctor insights about the eyes, extracted directly from the image, speeding up the review of eye problems and increasing diagnostic accuracy.
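The building block behind this kind of feature detection is the 2-D convolution a CNN layer applies repeatedly with learned kernels. The minimal sketch below is not the trained model, only an illustration of the operation: a hand-crafted edge kernel responds strongly at a vertical boundary in a toy image, where a trained network would instead learn kernels specific to eye features.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN
    libraries) — the core operation a CNN layer repeats with
    learned kernels. Educational sketch, not the project's model."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge kernel; a trained network learns such
# kernels (far more specific ones, e.g. eyelid or iris edges) from data.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy 5x5 "image" with a bright right half: the response peaks over
# the dark-to-bright boundary, which is exactly what the kernel detects.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
response = conv2d(image, sobel_x)
print(response)
```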

How we built it

The program was built with several modern toolkits to ensure speed and stability. Since the focus was eye analysis, the first part consists of machine learning algorithms written in Python 3 that extract facial keypoints and features. Image processing runs on request from our server, also written in Python 3 using frameworks such as Flask, Flask-Ask, and Socket.IO. The server also handles all incoming requests from patients and doctors, enabling communication between them using the data stored in our non-relational Amazon DynamoDB database. Patients and doctors can also access previous medical records and diagnoses stored in Amazon S3 object storage.
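The server's request flow can be sketched as follows, with a plain dictionary standing in for the DynamoDB table and function names that are purely illustrative, not the project's actual handlers: a patient submits a consultation request, the doctor lists pending requests, and the stored answer is later available to the patient.

```python
# In-memory stand-in for the DynamoDB consultations table; all names
# here are hypothetical and only illustrate the request life cycle.
consultations = {}  # request_id -> record
_next_id = 0

def submit_request(patient_id, question):
    """Patient-side entry point: store a new consultation request."""
    global _next_id
    _next_id += 1
    consultations[_next_id] = {
        "patient": patient_id,
        "question": question,
        "status": "pending",
        "answer": None,
    }
    return _next_id

def pending_requests():
    """Doctor-side view: all requests still awaiting review."""
    return {rid: r for rid, r in consultations.items()
            if r["status"] == "pending"}

def answer_request(request_id, diagnosis):
    """Doctor answers; the patient can later read the stored diagnosis."""
    record = consultations[request_id]
    record["status"] = "answered"
    record["answer"] = diagnosis

rid = submit_request("patient42", "Is a twitching eyelid serious?")
answer_request(rid, "Usually benign; see us if it persists beyond a week.")
print(consultations[rid]["status"])  # answered
```

In the real application these operations map onto DynamoDB reads and writes behind the Flask endpoints, with the voice interface reaching the same handlers through Flask-Ask.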

Challenges we ran into

Even though we were familiar with back-end development and the tools provided by Amazon Web Services, the journey still wasn't easy. Several external modules we used turned out to have bugs that had gone unfixed for years, so we resorted to a number of unconventional, hacky workarounds to make sure the final project runs smoothly. Training our machine learning algorithms wasn't easy either: the training data provided by Zeiss was very small, so to keep the training accurate we scraped massive amounts of data from social networks in addition to the data provided by the University of Montreal. The front-end had its own difficulties: bugs weren't the problem there, but we had to make the application design as smooth and responsive as possible while still fitting into the available time.

Accomplishments that we are proud of

Every addition to the growing application was worth celebrating, but the accomplishment we are proudest of as a team is the extremely high level of parallelized work. Simulating a professional environment, we implemented a large set of features in a time frame that might look unrealistic for a newly formed group. That, in turn, led to an even bigger accomplishment: the broad scope of tools in our final program, from in-depth image analysis by our machine learning algorithms to the ease of access offered by our responsive front-end.

What we learned

Not only did we improve our ability to work as a team, but the broad range of features meant that everyone had the chance to tackle several problems and deepen their knowledge of the variety of tools we used. The skills acquired vary from person to person, but having dealt with our problems as a team, it is clear that we learned not only from the problems themselves but also from teammates with different experience on different tasks.

What's next for Alexa Eye Doctor

The possibilities are limitless, and we're not planning to stop here! While building the program we quickly discovered our passion for this task as a team and a mutual interest in continuing development even after the hackathon. Future plans include various optimisations to bring the application on par with professional-level software, and finding more like-minded people willing to contribute to a healthier future!
