Inspiration

Making learning fun for children is harder than ever. Mobile phones have desensitized them to videos and simple app games that aim to teach a concept. We wanted to use projection mapping and computer vision to create an extremely engaging game that blends the physical world with the virtual one. This simple game aims to prepare children for natural disasters in an engaging way. We think a slightly more developed version would be effective at encouraging class participation in settings like schools, museums, and exhibitions, where projection-mapping technology is widely used.

What it does

The program scans the camera image for markers, then uses each marker's position and rotation to draw shapes on a canvas. This canvas undergoes an affine transformation and is output by the projector as an overlay on top of any object situated next to the markers. As a result, moving a marker makes its shape follow along.
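
A minimal sketch of that detect-and-draw loop, assuming OpenCV's ArUco module (the function names follow the older opencv-contrib-python interface; OpenCV 4.7+ exposes the same functionality through cv2.aruco.ArucoDetector). The dictionary, canvas size, and drawn shape are placeholders:

```python
import cv2
import numpy as np

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
canvas = np.zeros((720, 1280, 3), dtype=np.uint8)  # projector-sized canvas

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# corners is a tuple of (1, 4, 2) arrays, one entry per detected marker
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)

if ids is not None:
    for marker in corners:
        centre = marker[0].mean(axis=0)  # average the 4 corner points
        cv2.circle(canvas, tuple(int(v) for v in centre), 30, (0, 255, 0), -1)

# M is the 2x3 affine matrix from the calibration step described below;
# warping the canvas aligns the drawn shapes with the physical markers.
# warped = cv2.warpAffine(canvas, M, (1280, 720))
```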

How the game works

When the game starts, Melvin the Martian needs to prepare for an earthquake. To do so, you need to build him a path to his first aid kit with your blocks (which you can physically move around, as they are attached to markers). After he gets his first aid kit, you need to build him a table to hide under before the earthquake arrives (again, using any physical objects attached to markers). Once he hides, you win!

How I built it

I began by trying to identify the markers, for which an existing library was available, though it required extensive tuning to work correctly. I then built the calibration process, which takes three points from the initial, untransformed camera image and the actual locations of those three points on the projector screen. This automatically produces a transformation matrix that I then applied to every graphic I rendered (e.g. the physical blocks). After this, I made the game itself, using the positions of the markers to determine whether certain events were satisfied, which decided whether the game would progress or wait until it received the correct input.
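
A sketch of that three-point calibration, assuming OpenCV's cv2.getAffineTransform; the coordinates below are placeholders, not the real calibration values:

```python
import cv2
import numpy as np

# Three points clicked in the raw camera image...
camera_pts = np.float32([[102, 85], [598, 91], [110, 430]])
# ...and the known locations of those same points on the projector screen.
projector_pts = np.float32([[0, 0], [1280, 0], [0, 720]])

# Solve for the 2x3 affine matrix mapping camera -> projector coordinates.
M = cv2.getAffineTransform(camera_pts, projector_pts)

def to_projector(x, y):
    """Map a point from camera coordinates into projector coordinates."""
    return M @ np.array([x, y, 1.0])
```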

Challenges I ran into

It was very difficult to transform the camera's perspective (which had a different frame of reference from the projector's) into the projector's perspective. Every camera image had undergone some varying scale, rotation, and translation, which required me to create a calibration program that runs at launch.
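
To make the mismatch concrete, here is an illustrative model of it (the scale, rotation, and translation values are made up for the example). Because an affine map has six unknowns, three point correspondences give exactly enough equations to recover it, which is why the calibration described above only needs three points:

```python
import numpy as np

# Hypothetical camera/projector mismatch: uniform scale s, rotation theta,
# and translation (tx, ty). These numbers are not from the real setup.
s, theta, tx, ty = 0.8, np.deg2rad(4.0), 35.0, -12.0

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = np.hstack([s * R, [[tx], [ty]]])  # combined 2x3 affine matrix

# Each 2D point pair contributes 2 equations; 3 pairs give the 6 needed
# to pin down M exactly.
```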

Accomplishments that I'm proud of

Instead of relying wholly on any library, I tried my best to directly manipulate the NumPy matrices to achieve the transformation effects described above. I'm also happy that I was able to greatly speed up camera-projector frame calibration, which initially took around 5 minutes and now takes about 15-20 seconds.
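
As a sketch of what that direct manipulation can look like (the matrix and vertices are placeholder values), a whole batch of shape vertices can be pushed through the calibration matrix with a single matrix multiply instead of per-point library calls:

```python
import numpy as np

M = np.array([[0.98, -0.07, 35.0],
              [0.07,  0.98, -12.0]])  # example 2x3 affine matrix

# Vertices of a shape in camera coordinates, one (x, y) row per vertex.
vertices = np.array([[100.0, 200.0], [150.0, 200.0], [125.0, 260.0]])

# Append a column of ones (homogeneous coordinates), then transform all
# vertices at once: (N, 3) @ (3, 2) -> (N, 2).
homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
projected = homogeneous @ M.T
```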

What I learned

I learnt a great deal about affine transformations, including how to decompose a transformation matrix into its scale, rotation, and translation components. I also learnt the drawbacks of using more precise markers (e.g. AprilTags or ArUco tags) as opposed to something much simpler, like an HSV color and shape detector.
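
For example, assuming a uniform scale and no shear, a 2x3 affine matrix can be decomposed like this (the matrix is a placeholder):

```python
import numpy as np

M = np.array([[0.98, -0.07, 35.0],
              [0.07,  0.98, -12.0]])

tx, ty = M[:, 2]                      # translation: the last column
scale = np.hypot(M[0, 0], M[1, 0])    # length of the first column
theta = np.arctan2(M[1, 0], M[0, 0])  # rotation angle in radians

print(f"scale={scale:.3f}, rotation={np.degrees(theta):.1f} deg, "
      f"translation=({tx}, {ty})")
```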

What's next for Earthquake Education With Projection Mapping and CV

I want to automate the calibration process so it requires no user input (which is technically possible, but is prone to error and requires knowledge about the specific camera being used). I also want to get rid of the ArUco tags entirely, and instead use the edges of physical objects to somehow manipulate the virtual world.
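
One possible starting point for the marker-free idea, sketched with standard OpenCV edge and contour functions (this is speculative and not part of the current project):

```python
import cv2

frame = cv2.imread("camera_frame.png")  # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Find object outlines with Canny edges, then group them into contours.
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Each sufficiently large outline could stand in for a tagged block.
objects = [cv2.boundingRect(c) for c in contours
           if cv2.contourArea(c) > 500]
```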
