Inspiration

Precise Measurement of Dimensions

While playing around with the Google Tango, we discovered a pre-installed application that measures the length of real-life objects using the camera and the sensors unique to the Tango. This sparked our interest: if we can find the length of real-life objects, why can't we find their area, or even their volume?

What it does

Measuring Coordinates

AccuVolume tracks the coordinates of recorded points relative to the starting point and displays them on the UI. This way, you can see the differences between points and the precision that Google Tango offers with its camera and IR sensors.
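
As a minimal sketch (not our exact implementation; the class and method names here are illustrative), the device position relative to the start-of-service origin can be queried through the Tango Java API roughly like this:

```java
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;

// Hypothetical helper: query the device's current position relative to the
// start-of-service origin and return the (x, y, z) translation in meters.
public class PoseSampler {
    private final Tango tango;

    public PoseSampler(Tango tango) {
        this.tango = tango;
    }

    public double[] currentPosition() {
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        // A timestamp of 0.0 requests the most recent available pose.
        TangoPoseData pose = tango.getPoseAtTime(0.0, framePair);
        return pose.translation; // {x, y, z} relative to the starting point
    }
}
```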

Measuring Length

AccuVolume calculates the straight-line distance between recorded vertices, summing the total distance travelled along the lines that the vertices form.
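
For reference, a minimal sketch of that polyline-length computation in plain Java (vertices stored as double[3] arrays; the utility name is illustrative):

```java
import java.util.List;

public final class LengthUtil {
    private LengthUtil() {}

    // Total distance travelled along the polyline formed by consecutive vertices.
    public static double polylineLength(List<double[]> vertices) {
        double total = 0.0;
        for (int i = 1; i < vertices.size(); i++) {
            total += distance(vertices.get(i - 1), vertices.get(i));
        }
        return total;
    }

    // Straight-line (Euclidean) distance between two 3D points.
    public static double distance(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```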

Measuring Area

Given a set of vertices on a plane, the area enclosed by the edges connecting those vertices is evaluated using Heron’s formula.
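
A sketch of how Heron's formula can be applied here: each triangle's area follows from its three side lengths, and the polygon is split into triangles fanned out from the first vertex. The fan triangulation (which assumes a convex polygon) is our illustrative way of extending Heron's formula beyond a single triangle, not necessarily the exact scheme in the app.

```java
import java.util.List;

public final class AreaUtil {
    private AreaUtil() {}

    // Heron's formula: area of a triangle given its three side lengths.
    static double heron(double a, double b, double c) {
        double s = (a + b + c) / 2.0;                        // semi-perimeter
        return Math.sqrt(Math.max(0.0, s * (s - a) * (s - b) * (s - c)));
    }

    static double dist(double[] p, double[] q) {
        double dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Area of a convex planar polygon, fan-triangulated from the first vertex.
    public static double polygonArea(List<double[]> vertices) {
        double area = 0.0;
        for (int i = 1; i + 1 < vertices.size(); i++) {
            double[] p0 = vertices.get(0), p1 = vertices.get(i), p2 = vertices.get(i + 1);
            area += heron(dist(p0, p1), dist(p1, p2), dist(p2, p0));
        }
        return area;
    }
}
```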

Measuring Volume

Given a set of vertices in three-dimensional space, the volume enclosed by the edges connecting those vertices is estimated using an approximation by spheres.
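
The exact sphere-based scheme isn't spelled out above, so the sketch below is only one possible reading: take the largest pairwise distance between the recorded vertices as the diameter of an enclosing sphere and report that sphere's volume.

```java
import java.util.List;

public final class VolumeUtil {
    private VolumeUtil() {}

    // Rough volume estimate (assumed interpretation of "approximation by spheres"):
    // use the largest pairwise vertex distance as the diameter of an enclosing
    // sphere and return that sphere's volume.
    public static double boundingSphereVolume(List<double[]> vertices) {
        double maxDist = 0.0;
        for (int i = 0; i < vertices.size(); i++) {
            for (int j = i + 1; j < vertices.size(); j++) {
                maxDist = Math.max(maxDist, dist(vertices.get(i), vertices.get(j)));
            }
        }
        double r = maxDist / 2.0;
        return (4.0 / 3.0) * Math.PI * r * r * r;
    }

    static double dist(double[] p, double[] q) {
        double dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```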

How we built it

Quickstart Project

Our app was built on top of the Quickstart Project, provided by Google.

Controlling the Interface

Since it was possible for us to find coordinates of points in space in real time, we aimed to develop a UI that would let us deliberately select points of interest with Google Tango, store them, and run calculations on them as soon as we finished taking data. We worked around the clock this weekend, resolutely reading through Google Tango’s still-developing API, to find the methods and imports we’d need to accurately control the sensors and craft the UI.

Tango API

The Tango API was used heavily throughout the program. Buttons were created for different purposes and trigger the corresponding back-end operations through onClick().
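
As a minimal sketch of that wiring (the layout, view ids, and helper names are hypothetical, not our exact code), a point-capture button could be hooked up like this:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MeasureActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_measure);        // hypothetical layout

        Button recordButton = (Button) findViewById(R.id.record_point);
        recordButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Back-end operation: sample the current Tango pose and store the vertex.
                recordCurrentPoint();
            }
        });
    }

    private void recordCurrentPoint() {
        // ... query the Tango pose and append it to the stored vertex list ...
    }
}
```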

Challenges we ran into

A Work in Progress

Our greatest hurdle this weekend was that Google Tango is still an experimental corner of Google’s growing technology base. Many of its applications, imports, and examples were heavily layered, and starting a completely new project was more difficult than we imagined. For example: Unity, C, or Java? Most apps were developed in Unity, but Unity rendering demanded computing power we didn’t have. We were most comfortable coding in Java, but it had the fewest examples of the three to work with. Regardless of these obstacles, we pushed on.

Accomplishments that we’re proud of

A Different View

Most applications and examples we found for Google Tango provided users with a semi-immersive VR experience, and code that allowed interaction with the real world was still under development or improvement. We wanted to further shrink the boundary between the virtual and real worlds by providing an interface people could use to interact with their environment, and we’ve taken notable steps toward bridging that gap.

What we learned

The Applications are Endless

This weekend, we learned more about the Android Studio toolchain, including Gradle, the Google Tango API, and Unity/C# coding, but more importantly, we learned the limitations and possibilities of Google Tango. Currently, because of the limited RAM, depth renders and graphics are difficult to maintain without external memory. Also, the lack of transparency in some core built-in classes, like Renderable and Trajectory, made it difficult to wrest control from them: to choose when and where data was allocated, how it was painted on the interface, and how to perform calculations on the stored data. It was a difficult system to learn, one that made time seem very precious this weekend.

What's next for AccuVolume

Immediate Development

For immediate development, we hope to increase the accuracy of the results by finding suitable mathematical formulas for the area of an n-gon and the volume of an n-hedron. One possibility is to import the Mathematica API for calculations, which would make computations faster and more accurate.
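
One candidate for general planar n-gons is the shoelace formula, sketched below. The sketch assumes the vertices have already been projected onto the x-y plane in order; a real implementation would project onto the best-fit plane of the recorded points first.

```java
import java.util.List;

public final class ShoelaceUtil {
    private ShoelaceUtil() {}

    // Shoelace formula for the area of a simple planar polygon.
    // Assumes the vertices are projected onto the x-y plane and listed in order.
    public static double shoelaceArea(List<double[]> vertices) {
        double sum = 0.0;
        int n = vertices.size();
        for (int i = 0; i < n; i++) {
            double[] p = vertices.get(i);
            double[] q = vertices.get((i + 1) % n);
            sum += p[0] * q[1] - q[0] * p[1];
        }
        return Math.abs(sum) / 2.0;
    }
}
```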

Looking Ahead

If we develop this app further, we hope to see it used for a variety of land-area and volume studies. For example, it could measure the area of New York City, plan out a plot of land for construction, or determine the volume capacity of a lab chamber being pumped down to vacuum. We are also looking into a 3D rendering view of the points and an implementation of depth perception, which would match the visuals with the numbers and allow CAD modeling of the locations being studied.
