Inspiration

While technology has been rapidly transforming the world, the medical industry has been quite slow to adapt and take advantage of the newest developments. This is especially clear when it comes to ambulances: currently, the only link between an ambulance and the ER that is about to receive its patient is the 911 dispatcher. After doing more research, we realized that better coordination between incoming ambulances and ERs could improve survivability and reduce the time from incident to treatment. With a 5G network, TeleTrauma can provide crucial real-time data and bi-directional communication for medical professionals in the trauma field.

What it does

TeleTrauma is a platform that instantly connects ambulances, mobile ICUs, and field hospitals to specialized doctors, improving outcomes in a field where every second matters. TeleTrauma provides multiple real-time, low-latency video streams to doctors in the ER, so they know what to expect before the patient arrives and can guide paramedics on the go with precision. It features live vitals monitoring, giving doctors better insight into a patient’s condition over time. The app also lets doctors receive images (such as ultrasounds) and annotate them. Furthermore, TeleTrauma displays the status of every ambulance connected to a hospital’s system, so resource utilization can be easily managed.

How we built it

TeleTrauma consists of three parts: an iPad app that lets doctors and medical personnel view the inside of an ambulance while it is in transit, along with the patient’s vitals and transferred files; an “ambulance” (a Raspberry Pi for our test case) that connects the medical devices, GPS, cameras, and (in the future) the 5G network; and a backend server that relays information between an ambulance and the multiple users of the app. Here’s our design doc, in which we planned out the app’s API schema: https://docs.google.com/document/d/1jVINPJ3lRpVkl9d4umOnq17789jl1MRT6Qfj14cHGL0/edit?usp=sharing

The iPad App:

  1. We used SwiftUI to organize the UI and handle the data lifecycle. Integrating UIKit was also necessary because some of our views were either custom or not supported in SwiftUI.
  2. One of these is the custom video streaming view, which displays what is essentially a Motion JPEG stream (see the sketch after this list).
  3. The second is the annotation view, which uses PencilKit to tap into the full feature set of the Apple Pencil and, when available, the ProMotion display.
  4. WebSockets are used to transmit audio data to the server, so the person with the iPad can talk to the team at the ambulance/mobile ICU location.
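
To make the streaming view concrete, here is a minimal sketch of how a Motion JPEG feed arriving over a WebSocket can be rendered on the iPad, assuming each binary message carries one JPEG frame. The class name and endpoint URL are hypothetical, and our actual implementation differs in the details.

```swift
import SwiftUI
import UIKit

// Minimal sketch: each WebSocket binary message is assumed to be one JPEG frame.
final class MJPEGStreamModel: ObservableObject {
    @Published var currentFrame: UIImage?
    private var task: URLSessionWebSocketTask?

    func connect(to url: URL) {
        task = URLSession.shared.webSocketTask(with: url)
        task?.resume()
        receiveNextFrame()
    }

    private func receiveNextFrame() {
        task?.receive { [weak self] result in
            switch result {
            case .success(.data(let jpegData)):
                if let image = UIImage(data: jpegData) {
                    DispatchQueue.main.async { self?.currentFrame = image }
                }
                self?.receiveNextFrame() // keep listening for the next frame
            case .success:
                self?.receiveNextFrame() // ignore non-binary messages
            case .failure:
                break // connection closed or errored; a real client would reconnect
            }
        }
    }
}

struct AmbulanceVideoView: View {
    @StateObject private var stream = MJPEGStreamModel()

    var body: some View {
        Group {
            if let frame = stream.currentFrame {
                Image(uiImage: frame).resizable().aspectRatio(contentMode: .fit)
            } else {
                ProgressView("Connecting…")
            }
        }
        // Hypothetical endpoint for illustration only.
        .onAppear { stream.connect(to: URL(string: "wss://example.com/viewer/123/video")!) }
    }
}
```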

The Server:

  1. We used WebSockets here to stream data in all directions: from the ambulance to the server, and from the server to the clients. This gives us real-time capabilities.
  2. The server handles multiple socket connections and organizes them by ambulance, which makes sure only the people authorized for a given ambulance can receive its data stream (see the sketch after this list). It can handle an arbitrary number of cameras per ambulance, making it flexible enough to support different vehicle setups.
  3. These servers behave almost identically to WebRTC TURN servers, requiring a lot of bandwidth that would ideally be split amongst multiple instances. We relay rather than send peer-to-peer because fanning a stream out to multiple clients is more efficient this way; we’d like to eventually make the system decide automatically between peer-to-peer and a relay server depending on the situation. The server was developed inside a Docker container, so hopefully scaling it with Kubernetes will be quite easy.
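
As a rough illustration of the per-ambulance fan-out, here is a minimal relay sketch written in Swift with Vapor. This is purely illustrative: the route names, the StreamHub type, and the authorization step it omits are assumptions, and our actual server differs.

```swift
import Vapor

// Keeps track of which viewers are watching which ambulance. Hypothetical type.
actor StreamHub {
    private var viewers: [String: [WebSocket]] = [:]

    func addViewer(_ ws: WebSocket, for ambulanceID: String) {
        viewers[ambulanceID, default: []].append(ws)
    }

    func broadcast(_ frame: ByteBuffer, from ambulanceID: String) {
        // Forward the JPEG frame to every open viewer socket for this ambulance.
        for viewer in viewers[ambulanceID] ?? [] where !viewer.isClosed {
            viewer.send(Array(frame.readableBytesView))
        }
    }
}

func routes(_ app: Application) {
    let hub = StreamHub()

    // The ambulance (Raspberry Pi) pushes Motion JPEG frames to this endpoint.
    app.webSocket("ambulance", ":id", "video") { req, ws in
        guard let id = req.parameters.get("id") else { return }
        ws.onBinary { _, frame in
            Task { await hub.broadcast(frame, from: id) }
        }
    }

    // Authorized iPad clients subscribe here (authorization omitted in this sketch).
    app.webSocket("viewer", ":id", "video") { req, ws in
        guard let id = req.parameters.get("id") else { return }
        Task { await hub.addViewer(ws, for: id) }
    }
}
```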

The Ambulance (Raspberry Pi):

  1. Uses the Raspberry Pi Camera Module attachment to capture the video stream.
  2. Uses WebSockets to send the frames to the server, along with metadata that tells the server which ambulance the stream belongs to.
  3. Vitals are sent from the Pi to the server over WebSockets as well. Since we don’t have real monitoring equipment, we send realistically synthesized data (sketched below).
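
The Pi-side code is not Swift, but to show the kind of tagged message it pushes, here is a sketch of a vitals payload with synthesized values. The field names and ranges are illustrative assumptions, not our actual schema.

```swift
import Foundation

// Hypothetical shape of a vitals message; field names are illustrative only.
struct VitalsMessage: Codable {
    let ambulanceID: String
    let timestamp: Date
    let heartRate: Int        // beats per minute
    let spo2: Int             // blood oxygen saturation, percent
    let systolicBP: Int       // mmHg
    let diastolicBP: Int      // mmHg
    let respiratoryRate: Int  // breaths per minute
}

// Synthesize a plausible reading by jittering around baseline values,
// similar in spirit to the demo data we generate.
func synthesizedVitals(for ambulanceID: String) -> VitalsMessage {
    VitalsMessage(
        ambulanceID: ambulanceID,
        timestamp: Date(),
        heartRate: 75 + Int.random(in: -8...20),
        spo2: 97 + Int.random(in: -3...2),
        systolicBP: 120 + Int.random(in: -10...15),
        diastolicBP: 80 + Int.random(in: -8...8),
        respiratoryRate: 16 + Int.random(in: -3...5)
    )
}

// The encoded JSON would then be pushed over the vitals WebSocket, e.g.:
// let payload = try JSONEncoder().encode(synthesizedVitals(for: "ambulance-123"))
```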

Challenges we ran into

  1. Getting the WebSockets to work was very tricky, especially since we needed to organize all of the possible socket streams to make sure data was going where it needed to go.
  2. We went through multiple approaches to live streaming. We first started with normal HLS streams, which worked nicely for testing, but we realized HLS has too much latency for our use case. We initially regretted not trying WebRTC, but after more research we found that WebRTC support inside iOS apps isn’t as good as we’d like and the documentation is sparse. In the end, we made our own “custom” solution that has a few benefits, mainly simplicity, along with some drawbacks: the Pi sends a Motion JPEG stream over a WebSocket to the server, and the server forwards it to the iPad.
  3. We had to use a lot of UIKit elements for the live streaming and the PencilKit integration. We usually do some of this in every project, but this time there were a lot more caveats, since making sure UIKit elements update when SwiftUI publishes new data is more challenging than just showing them (see the sketch after this list).
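
To show what that bridging looks like, here is a minimal sketch of hosting a PencilKit canvas inside SwiftUI so that it both reflects external state changes and publishes new strokes back. The binding-based design and type names are assumptions for this sketch rather than our exact code.

```swift
import SwiftUI
import PencilKit

// Minimal sketch of hosting a PKCanvasView in SwiftUI via UIViewRepresentable.
struct AnnotationCanvas: UIViewRepresentable {
    @Binding var drawing: PKDrawing

    func makeUIView(context: Context) -> PKCanvasView {
        let canvas = PKCanvasView()
        canvas.drawingPolicy = .anyInput      // allow finger input as well as Apple Pencil
        canvas.tool = PKInkingTool(.pen, color: .systemRed, width: 4)
        canvas.delegate = context.coordinator
        canvas.drawing = drawing
        return canvas
    }

    func updateUIView(_ canvas: PKCanvasView, context: Context) {
        // Push SwiftUI state into the UIKit view only when it actually changed,
        // so this update and the delegate callback don't fight each other.
        if canvas.drawing.dataRepresentation() != drawing.dataRepresentation() {
            canvas.drawing = drawing
        }
    }

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, PKCanvasViewDelegate {
        var parent: AnnotationCanvas
        init(_ parent: AnnotationCanvas) { self.parent = parent }

        // Propagate strokes back into SwiftUI state as the user draws.
        func canvasViewDrawingDidChange(_ canvasView: PKCanvasView) {
            parent.drawing = canvasView.drawing
        }
    }
}
```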

What we learned

  1. We are very proud that we created our own live streaming service from scratch, even though it has some shortcomings we would like to address.
  2. We were able to decipher Apple’s scant documentation to implement PencilKit (enabling Apple Pencil annotations), WebSockets, and various UIKit elements inside our SwiftUI app.
  3. We learned the importance of planning everything in as much detail as possible before starting to code.

What's next for TeleTrauma

5G! Right now we’re just using a WLAN for the demo, but a switch to 5G will significantly increase bandwidth and reduce latency. While developing the app we were working from our separate homes, and the connection quality suffered noticeably: the real-time vitals and the video stream would lag every few seconds.

We also need to revamp the UI, particularly with a real-time live annotation system that lets people in the field/ambulance see the annotations a doctor is making while the doctor is speaking. This would make it feel even more like the doctor is really there.
