Reality Ripper is a mobile app that lets people capture instant cut-out collages of reality from whatever their camera sees. These semantically segmented images (Mask R-CNN, composited via OpenCV) are also rooted in place using the sparse point clouds provided by the device's ARKit (iOS) or ARCore (Android) session. The cut-outs can be retrieved later to rediscover the configuration and state of the objects that were once in a particular place!
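As a rough illustration of the anchoring idea, here is a minimal ARKit sketch in Swift: it hit-tests the sparse feature-point cloud at a segmentation mask's centroid and drops an ARAnchor there so the cut-out stays rooted in world space. The class and method names (`CutoutPlacer`, `placeCutout`) are hypothetical stand-ins, not the app's actual API, and the ARCore path on Android would be analogous.

```swift
import ARKit
import simd

/// Hypothetical helper: roots a segmented cut-out in world space by
/// hit-testing ARKit's sparse feature points at the mask's centroid.
final class CutoutPlacer {

    /// Places an anchor for a cut-out whose segmentation-mask centroid is
    /// given in ARKit's normalized image coordinates (0...1).
    /// Returns the anchor on success, nil if no feature point was hit.
    @discardableResult
    func placeCutout(at maskCentroid: CGPoint,
                     in frame: ARFrame,
                     session: ARSession) -> ARAnchor? {
        // Hit-test against sparse feature points from the current frame.
        // (ARFrame.hitTest is deprecated in favor of raycasting on iOS 14+,
        // but matches the ARKit versions available at the time.)
        guard let hit = frame.hitTest(maskCentroid,
                                      types: [.featurePoint]).first else {
            return nil
        }

        // Anchor the cut-out at the hit point; ARKit keeps refining the
        // anchor's world position as tracking improves.
        let anchor = ARAnchor(name: "cutout", transform: hit.worldTransform)
        session.add(anchor: anchor)
        return anchor
    }

    /// Counts sparse feature points near the anchor, a rough confidence
    /// signal that the cut-out is well localized.
    func nearbyFeaturePointCount(for anchor: ARAnchor,
                                 in frame: ARFrame,
                                 radius: Float = 0.25) -> Int {
        guard let cloud = frame.rawFeaturePoints else { return 0 }
        let anchorPosition = simd_make_float3(anchor.transform.columns.3)
        return cloud.points.filter {
            simd_distance($0, anchorPosition) < radius
        }.count
    }
}
```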
What's most interesting, though, is the data-collection potential: a futuristic GAN (perhaps one that can dream up a full 3D mesh from a few images) could learn from this record of which kinds of real objects are found where!
Please ping me on LinkedIn for the TestFlight or Android beta. (Note: it is still rough, fragile, bleeding-edge, and has only been tested on my iPhone XS Max, iPad Pro 12.9", and Pixel 3 XL.)
Note: for some reason YouTube stripped the audio from my original recording (https://youtu.be/ao8zBcRE6qY). There wasn't any music that I was aware of, just a lot of HVAC noise. Here is the Vimeo version of the same recording: https://vimeo.com/user291430/review/334599984/b86b3125ee