Inspiration

We have a lot of extra tech nowadays. The routine upgrade cycle of phones and tablets has left many of us with piles of old devices. Most of them still work, but they're either too tedious to use or too old to do anything useful with. We thought it would be cool to make all the old mobile devices we have lying around work together to form one unified display.

What it does

There are three main components to this:

  • Subordinate: a device (like an old phone or tablet) that provides its screen
  • Coordinator: generates the content to send to each subordinate
  • Web Server: manages the WebRTC handshake between the coordinator and its subordinates

Here are its operations in order:

  1. With the web server started, the coordinator is initialized with a video to send. It scans its camera's field of view looking for devices.
  2. Visiting the web server on a device makes it display a QR code containing its ID. When the device is placed in the camera's view, the coordinator scans the QR code and asks the web server to initiate a WebRTC connection (a QR-scanning sketch follows this list).
  3. The coordinator and the subordinate go through the WebRTC handshake to establish a P2P connection.
  4. The coordinator projects the source video into camera space to determine which segment to display on each subordinate (see the projection sketch below).
  5. The coordinator sends that segment over the P2P connection, and the subordinate displays it.
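
As a rough illustration of step 2, here is a minimal sketch of the coordinator's QR scanning using OpenCV's built-in detector. The payload format (a bare device-ID string) and the camera index are assumptions for illustration, not necessarily what we shipped:

```python
import cv2

detector = cv2.QRCodeDetector()
camera = cv2.VideoCapture(0)  # assumed camera index

while True:
    ok, frame = camera.read()
    if not ok:
        break
    # `points` holds the QR code's four corners in camera space, which also
    # tells us roughly where the device's screen sits in the mosaic.
    device_id, points, _ = detector.detectAndDecode(frame)
    if device_id:
        print(f"found device {device_id} at {points}")
        # ...ask the web server to start a WebRTC handshake with device_id
```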

The end result is a mosaic rendered across all the devices in the camera's field of view.
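
Step 4 is essentially a perspective warp. Here is a hedged sketch of that projection with OpenCV, assuming the source video maps 1:1 onto the camera frame and assuming a top-left-first corner convention for the detected quad:

```python
import cv2
import numpy as np

def segment_for_device(source_frame, quad, out_w, out_h):
    """Cut out the part of the source video that falls under a device.

    quad: the device screen's four corners as seen by the camera, ordered
    top-left, top-right, bottom-right, bottom-left (an assumed convention),
    given in source-frame coordinates.
    """
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    # Homography mapping the camera-space quad onto the device's screen.
    H = cv2.getPerspectiveTransform(np.float32(quad), dst)
    return cv2.warpPerspective(source_frame, H, (out_w, out_h))
```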

How we built it

We had a pretty solid plan going into this project. We both understood the vision, and we have lots of experience working together, so we got off to a strong start. We hit the ground running: one of us worked on scanning QR codes while the other got WebRTC up and running. Then we spent a while in integration hell trying to get those two parts to work together. Once they did, we wanted to upgrade the system to handle video streaming, which got us stuck for a long time in the rabbit hole of screen capture on different Linux window managers. Once we escaped that, we had something that sort of worked. The last big feature was projecting the video onto camera space. After that came fixing the numerous bugs that continually popped up. We are probably still doing that.
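
For what it's worth, on X11 the capture piece can be as small as the sketch below (using the `mss` library; the tooling choice is an assumption, not necessarily what we shipped). Wayland compositors generally block this kind of direct grab, which is roughly the rabbit hole described above:

```python
import numpy as np
import mss

# Grab a frame from the primary monitor on X11.
with mss.mss() as sct:
    monitor = sct.monitors[1]          # monitors[0] is the full virtual screen
    shot = sct.grab(monitor)           # raw BGRA pixels
    frame = np.array(shot)[:, :, :3]   # drop alpha -> BGR frame for OpenCV
```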

Challenges we ran into

This project lacked distinct, independent components for us to work on, so much of the process was one person looking over the other's shoulder, and vice versa. In the worst cases, we lost a lot of time resolving merge conflicts.

Learning new technologies is always difficult, and this time was no exception. We had never worked with OpenCV or WebRTC before, but we were able to pull together at least a rudimentary understanding of each to get things working.
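
For anyone curious, the coordinator's side of the WebRTC handshake boils down to the offer/answer exchange below. This is a minimal sketch using the `aiortc` library, with a hypothetical `/offer/<id>` signaling route on the web server; our actual signaling endpoint and message shape may differ:

```python
import aiohttp
from aiortc import RTCPeerConnection, RTCSessionDescription

SIGNALING_URL = "http://localhost:8080"  # assumed web-server address

async def connect_to_subordinate(device_id: str) -> RTCPeerConnection:
    pc = RTCPeerConnection()
    pc.createDataChannel("frames")  # channel for pushing video segments

    # Create an SDP offer and relay it through the web server (signaling).
    await pc.setLocalDescription(await pc.createOffer())
    async with aiohttp.ClientSession() as http:
        resp = await http.post(
            f"{SIGNALING_URL}/offer/{device_id}",  # hypothetical route
            json={"sdp": pc.localDescription.sdp,
                  "type": pc.localDescription.type})
        answer = await resp.json()

    # Applying the subordinate's answer completes the handshake; traffic
    # then flows peer-to-peer, no longer through the server.
    await pc.setRemoteDescription(
        RTCSessionDescription(sdp=answer["sdp"], type=answer["type"]))
    return pc
```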

Accomplishments that we're proud of

We were not super confident that this would work, or that it would even be interesting. So the fact that we got it working, and that the result was way better than we expected, really blew us away.

What we learned

This project really helped us explore the importance of planning. We have done hackathons in the past and have a habit of rushing into the 'cool' or 'most important' feature, which resulted in other things falling by the wayside. This time around we made sure to discuss and write down our whole plan beforehand. And it really showed; things went relatively smoothly, and I think we had a pretty successful event.

What's next for Phosaic

I think the logical next step would be to implement some sort of object tracking, so that we can keep each display updated as a screen moves through space.
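
This is speculative, but a sketch of what that could look like with one of OpenCV's built-in trackers (CSRT here, which ships in `opencv-contrib-python`):

```python
import cv2

def track_device(camera, first_frame, bbox):
    """Speculative sketch: follow a device's screen after its QR code is
    first detected, so its segment can be re-projected every frame.

    bbox: (x, y, w, h) around the screen, e.g. from the QR corner points.
    """
    tracker = cv2.TrackerCSRT_create()  # requires opencv-contrib-python
    tracker.init(first_frame, bbox)
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)
        if found:
            yield bbox  # recompute this device's homography from the new box
```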
