Our team consists of four biomedical engineers and a business engineer. We identified several problems in the interaction between patients and medical professionals, and a service niche that can be covered with new technologies, namely Cognitive IoT. Here are some facts:
- There are communication barriers between patients and medical professionals.
- Response times when caring for patients are excessive.
- Not all hospitals have notification systems (mainly in developing countries).
- Patient attention is poorly managed.
- Drugs may not be delivered on time.
- It is not possible to provide early attention in emergency situations.
Systems currently on the market are limited to nurse-call functions; the devices are often complex to operate and unfriendly, and they do not incorporate Industry 4.0 technologies.
What it does
MIA is a smart medical assistant: a web- and cloud-based platform with IoT components, namely a button, a light terminal, and voice and vision recognition devices. The hardware and software work together to give the hospital and/or relatives real-time notifications about any emergency their loved one is suffering, and to provide data models that medical professionals can use. This versatility means any kind of user can operate it, regardless of the medical context.
Imagine it as an Alexa device purpose-built for patient/medical-professional applications.
An analogy can also be made with a smart panic button, except that the system can differentiate between emergencies and basic calls.
The buttons serve as these panic buttons, and the notification lights (standard in every hospital) let professionals know which room had the emergency.
These two, together with the image and voice devices, send notifications to a web app.
We have implemented the whole IoT infrastructure: the two buttons and the notification lights, together with the voice service and image recognition devices. The cloud services are integrated with both Amazon AWS and the IBM Cloud platform, and the web page that receives the notifications is fully functional.
How we built it
The main part of the system was built with two Raspberry Pis: one for voice recognition and one for image recognition. A microphone HAT was used for voice recognition and a Pi Camera for image recognition. The software was implemented almost entirely in Node-RED, using several cloud services.
For voice recognition, an Alexa skill was developed and managed through Node-RED; we also have an IBM Watson version using the Speech to Text, Text to Speech, and Conversation services.
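The exact intents live in the Alexa skill and the Watson Conversation workspace, which are not reproduced here. As a rough illustration only, here is how distinguishing an emergency from a basic call might look once an utterance has been transcribed; the keyword lists are assumptions for the sketch, not the deployed model:

```python
# Illustrative sketch: classify a transcribed patient utterance.
# In MIA this is handled by Alexa skill / Watson Conversation intents;
# the keyword sets below are stand-in assumptions.

EMERGENCY_KEYWORDS = {"help", "pain", "can't breathe", "chest", "falling", "emergency"}
BASIC_KEYWORDS = {"water", "nurse", "blanket", "bathroom", "question"}

def classify_utterance(text):
    """Return 'emergency', 'basic_call', or 'unknown' for a transcript."""
    lowered = text.lower()
    if any(k in lowered for k in EMERGENCY_KEYWORDS):
        return "emergency"
    if any(k in lowered for k in BASIC_KEYWORDS):
        return "basic_call"
    return "unknown"

print(classify_utterance("I have chest pain"))        # emergency
print(classify_utterance("Could I get some water?"))  # basic_call
```

In the real flows, the classification result decides whether the notification is escalated or sent as an ordinary call.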
For image recognition, a Node-RED flow was developed using the AWS Rekognition service (and an IBM image-recognition equivalent). The current version tracks the patient's face and, when it detects an anomaly, sends notifications to the system.
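The anomaly rule itself isn't spelled out above. As a minimal sketch, assuming heuristics like "face no longer detected" or "eyes closed with high confidence", the decision over a Rekognition `detect_faces` response could look like this (the threshold and rules are illustrative assumptions):

```python
# Illustrative sketch: flag an anomaly from the 'FaceDetails' list returned by
# AWS Rekognition detect_faces(Attributes=["ALL"]). The actual rules used in
# MIA's Node-RED flow are not documented here; these heuristics are assumptions.

def is_anomaly(face_details, eyes_conf_threshold=90.0):
    if not face_details:
        return True  # patient's face left the frame
    for face in face_details:
        eyes = face.get("EyesOpen", {})
        if eyes.get("Value") is False and eyes.get("Confidence", 0) >= eyes_conf_threshold:
            return True  # eyes closed, detected with high confidence
    return False

# Example responses shaped like Rekognition's FaceDetails output:
normal = [{"EyesOpen": {"Value": True, "Confidence": 99.0}}]
closed = [{"EyesOpen": {"Value": False, "Confidence": 97.5}}]

print(is_anomaly(normal))  # False
print(is_anomaly(closed))  # True
print(is_anomaly([]))      # True: no face detected
```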
An ESP8266 microcontroller was used for both the buttons and the notification lights.
All the devices connect to a cloud MQTT broker, which in turn is managed through the Node-RED flows; notifications are sent to the web app and can also be delivered via email or SMS.
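To make the routing step concrete, here is a small sketch of how an incoming MQTT message might be mapped to a notification, the way the Node-RED flows route device events to the web app, email, or SMS. The topic layout (`mia/<room>/<device>`) and payload fields are assumptions for illustration, not the deployed schema:

```python
# Illustrative sketch: route an MQTT message (topic + JSON payload) to
# notification channels. Topic structure and payload fields are assumed.
import json

def route_message(topic, payload):
    """topic e.g. 'mia/room-12/button'; payload is the JSON string the device publishes."""
    _, room, device = topic.split("/")
    data = json.loads(payload)
    if device == "button" and data.get("type") == "emergency":
        channels = ["webapp", "email", "sms"]  # escalate emergencies everywhere
    else:
        channels = ["webapp"]                  # basic calls stay in the web app
    return {"room": room, "device": device, "event": data.get("type"), "channels": channels}

print(route_message("mia/room-12/button", '{"type": "emergency"}'))
```

In the actual system this logic lives in Node-RED flows subscribed to the broker rather than in standalone Python.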
The image above shows how the system architecture was designed and implemented.
Market / Market Size / Business model
Poor communication systems cost hospitals 12 billion dollars a year, and the nurse communication systems market was valued at 1.2 billion dollars in 2017. On top of that, approximately 2.1 billion dollars could be saved by implementing advanced call systems in hospitals, all adding up to an 11.6% compound annual growth rate. Sources: link link
Our business model is based on a Platform-as-a-Service model. We sell the hardware as the starting point; it manages communication using the button, voice, and image recognition devices. These communicate with a cloud-based web service that is monetized through a subscription model. We will also provide continuous maintenance and support, which will likewise be monetized, and the data collected will be used to improve processes in each institution and will also be monetized.
Challenges we ran into
The main challenge was developing this system with what we had on hand, namely a couple of Raspberry Pis and a couple of microcontrollers. On top of that, we aimed to use free services for everything, which increased the logistical and technical complexity of the solution, since some cloud services must be paid for, or incur recurring costs, in order to run long term.
Accomplishments that we're proud of
- Full integration and usability.
- Entered our University incubator to fully develop the project.
- Built our network to have a chance to validate the product.
- It needs no configuration other than Wi-Fi credentials (think Amazon Echo).
What's next for Blankit: MIA assistant.
- Pilot tests in hospitals (we are currently in talks with Medica Sur and the Angeles group, two of the biggest private hospitals in Mexico City).
- Full integration with a single cloud service, namely AWS.
- Continue R&D with image recognition to expand the capabilities of the system.
- Integrate domotics (home automation).
- Calibrate these domotic systems with image recognition, depending on the characteristics of the patient.