In conventional meetings, one person controls what is displayed on a screen everyone can see. Nowadays everyone has a smartphone, so why can't everyone contribute content to the shared screen?! Exploring different ways people can interact with technology inspired us, and Communist Canvas was born.
What it does
The user with access to the shared screen opens the Communist Canvas website. Participants can then connect their mobile devices to the canvas by entering a unique identifier - the idea is to use computer vision in the future to retrieve this id just by pointing the smartphone camera at the shared screen. Once connected, the devices can add content and position it on the screen.
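The join flow above can be sketched in a few lines. This is a hypothetical model, not the real code: a session holds a short human-typable id that the shared screen displays, and phones join by entering it. The names `CanvasSession` and `Device` are illustrative assumptions.

```typescript
// Illustrative sketch of the join flow (names are assumptions):
// the shared screen creates a session with a short unique id,
// and participants' phones join by typing that id.

type Device = { name: string };

class CanvasSession {
  readonly id: string;
  private devices: Device[] = [];

  constructor() {
    // Short, human-typable identifier shown on the shared screen.
    this.id = Math.random().toString(36).slice(2, 8).toUpperCase();
  }

  // A phone enters the id and gets registered with the canvas.
  join(device: Device): void {
    this.devices.push(device);
  }

  connectedCount(): number {
    return this.devices.length;
  }
}
```

A computer-vision version would only change how the id reaches the phone; the registration step stays the same.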
How we built it
We built the prototype using Swift for the mobile app, NodeJS and Express for the server backend, and Angular and Material for the website frontend. Communication is handled using SocketIO and a RESTful API.
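The realtime side of this architecture can be sketched as a shared canvas state plus events that phones emit over the socket. This is a minimal model under assumed names (`CanvasEvent`, `applyEvent`, the event fields), not the actual protocol; on the real server, something like the reducer below would run inside a SocketIO handler before rebroadcasting the change to every connected client.

```typescript
// Hypothetical event protocol for the shared canvas (names assumed).
// Phones send events; the server applies them to one shared state
// and broadcasts the result to all clients.

type CanvasItem = { id: string; x: number; y: number; content: string };

type CanvasEvent =
  | { kind: "add"; item: CanvasItem }
  | { kind: "move"; id: string; x: number; y: number };

// Pure reducer: apply one incoming event to the canvas state.
function applyEvent(state: CanvasItem[], ev: CanvasEvent): CanvasItem[] {
  switch (ev.kind) {
    case "add":
      return [...state, ev.item];
    case "move":
      return state.map((item) =>
        item.id === ev.id ? { ...item, x: ev.x, y: ev.y } : item
      );
  }
}
```

Keeping the state transition pure like this makes it easy to test without a running socket server.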
Challenges we ran into
Having never made a mobile app before, learning Swift was a significant challenge. Using TypeScript with Node.js can be tricky at first, but it turned out to be really cool.
Accomplishments that we're proud of
Realtime DOM content manipulation through websockets is really neat. With a team of two we were able to create the app, the server, and the website - we are proud of having done so much in only two days.
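The DOM-manipulation part can be sketched as follows. A minimal element interface stands in for the browser DOM so the logic runs anywhere; the update shape and the function name `patchElement` are illustrative assumptions, not the real client code.

```typescript
// Client-side sketch: when the website receives a websocket message,
// it patches the corresponding on-screen element. A minimal interface
// stands in for the browser DOM (names are assumptions).

interface CanvasElement {
  innerHTML: string;
  style: { left: string; top: string };
}

type ContentUpdate = { html: string; x: number; y: number };

// Apply one incoming update to an element: swap its content and
// reposition it via absolute CSS coordinates.
function patchElement(el: CanvasElement, update: ContentUpdate): void {
  el.innerHTML = update.html;
  el.style.left = `${update.x}px`;
  el.style.top = `${update.y}px`;
}
```

In the browser this would be wired to something like `socket.on("update", ...)` with elements looked up by id.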
What we learned
Swift from scratch! Websocket communication using socket.io. We also tried TypeScript integration with Node.js for the first time (we had used Node.js and TypeScript separately before).
What's next for Communist Canvas
We want to continue working on this project and use computer vision to turn the smartphone into a virtual laser pointer. We also want to augment the space around the screen: the most important content stays on the physical display while the rest moves out of focus into the augmented periphery, visible through a magic lens view on the smartphone.