At a previous hackathon in the Fall, I wanted to make an AR app from scratch. That failed miserably when it became way too much work for me and my partner to build in 24 hours. Later, when I decided to revive the project, I looked for an API to make my development easier. But every API I found either required creating the app in Unity first and then porting it to Android (a lot of extra work), required writing the rendering yourself in OpenGL (a huge learning curve for many developers), cost way too much for a small developer to deploy with, or some combination of all of these. This inspired me to create an easy-to-use API for every developer, from the most experienced to the newest freshman.
What it does
This API lets the developer add a simple SpeedARView to the Android layout. From there, the developer can add ViewObjs to the view and interact with them through touch, or through programmed events based on how the phone is moved. These ViewObjs are generated from a JSON file of coordinate points that define the object. The view also automatically moves objects in the scene so that their positions remain fixed relative to the real world. This is done by a thread that pulls data from the gyroscope and accelerometer to measure how much the phone has moved, then has the Renderer update the scene it draws over the camera feed.
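The core of that sensor thread is integrating gyroscope angular-rate samples over time to estimate how far the phone has rotated since the last frame. A minimal plain-Java sketch of that idea (class and method names are mine, not part of the API, and there are no Android dependencies; a real implementation would feed in `android.hardware.SensorEvent` values):

```java
// Sketch: integrate gyroscope angular-rate samples (rad/s) over time to
// track how far the phone has rotated about each axis since startup.
public class GyroIntegrator {
    // Accumulated rotation about x, y, z in radians.
    private final double[] angles = new double[3];
    private long lastTimestampNs = -1;

    // rates: angular velocity about x, y, z in rad/s; timestampNs: sample time.
    public void onSample(double[] rates, long timestampNs) {
        if (lastTimestampNs >= 0) {
            double dt = (timestampNs - lastTimestampNs) / 1e9; // ns -> s
            for (int i = 0; i < 3; i++) {
                angles[i] += rates[i] * dt; // simple rectangular integration
            }
        }
        lastTimestampNs = timestampNs;
    }

    public double[] angles() {
        return angles.clone();
    }

    public static void main(String[] args) {
        GyroIntegrator g = new GyroIntegrator();
        // Feed one second of a constant 1 rad/s rotation about z in 10 ms steps.
        for (long t = 0; t <= 1_000_000_000L; t += 10_000_000L) {
            g.onSample(new double[] {0, 0, 1.0}, t);
        }
        System.out.printf("z rotation = %.3f rad%n", g.angles()[2]); // prints 1.000
    }
}
```

The renderer would then apply the accumulated angles (or a matrix built from them) as a view transform so objects appear to stay put in the real world.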
How I built it
This API was built in Java using Android Studio.
Challenges I ran into
The biggest challenge I hit while programming this was the learning curve of OpenGL, which I needed in order to draw the objects that developers can add to the view. I'm still working out how to pass the gyroscope data on to the program and the user: whether the more accurate but harder-to-understand rotation matrix, or the less accurate but more straightforward Euler angles, would be more useful. I want to keep the API as simple as possible, but I also don't want to force extra code on the user's part to compensate for the limitations of Euler angles.
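For context on that trade-off: a rotation matrix can always be collapsed into the three Euler angles after the fact, which is essentially what Android's `SensorManager.getOrientation` does with its row-major 9-element matrix. A plain-Java sketch of that conversion (no Android dependencies; the formulas follow the platform's convention):

```java
// Sketch: extract Euler angles (azimuth, pitch, roll, in radians) from a
// row-major 3x3 rotation matrix, using the same formulas Android's
// SensorManager.getOrientation applies to its 9-element rotation matrix.
public class EulerFromMatrix {
    public static double[] toEuler(double[] r) {
        double azimuth = Math.atan2(r[1], r[4]);  // rotation about the z axis
        double pitch   = Math.asin(-r[7]);        // rotation about the x axis
        double roll    = Math.atan2(-r[6], r[8]); // rotation about the y axis
        return new double[] {azimuth, pitch, roll};
    }

    public static void main(String[] args) {
        // Identity matrix: no rotation at all, so every angle should be 0.
        double[] identity = {1, 0, 0, 0, 1, 0, 0, 0, 1};
        double[] e = toEuler(identity);
        System.out.printf("%.3f %.3f %.3f%n", e[0], e[1], e[2]); // prints 0.000 0.000 0.000
    }
}
```

Exposing the matrix but shipping a helper like this could give users both options without extra code on their end.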
Accomplishments that I'm proud of
I am very proud of the OpenGL work that I did for this app and all that I learned from it. I am also proud that this will eventually be available for everyone to use in their own apps.
What I learned
Some Java multithreading and how to prevent concurrency problems, Android programming, and how to use OpenGL in apps.
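One concurrency lesson worth recording: when a sensor thread and a render thread share the latest orientation sample, a lock-free "latest value wins" handoff keeps either thread from blocking the other. A minimal plain-Java sketch of that pattern (names are mine; the API's actual internals may differ):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: lock-free handoff between a sensor thread (writer) and a render
// thread (reader). The sample is immutable, so publishing the reference
// publishes a complete, consistent value -- no torn reads, no locks.
public class SensorMailbox {
    public static final class Sample {
        public final float x, y, z;
        public Sample(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    private final AtomicReference<Sample> latest =
            new AtomicReference<>(new Sample(0, 0, 0));

    public void publish(Sample s) { latest.set(s); }       // called by sensor thread
    public Sample read()          { return latest.get(); } // called by render thread

    public static void main(String[] args) throws InterruptedException {
        SensorMailbox box = new SensorMailbox();
        Thread sensor = new Thread(() -> {
            for (int i = 1; i <= 100; i++) {
                box.publish(new Sample(i, 0, 0));
            }
        });
        sensor.start();
        sensor.join();
        System.out.println("last x = " + box.read().x); // prints "last x = 100.0"
    }
}
```

Because the renderer only ever needs the most recent orientation, dropping intermediate samples is fine here, which is what makes this simpler than a queue.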
What's next for SpeedAR
Up next for the API is more testing to make sure it functions as expected with minimal performance issues. From there, I will work on using the image produced by the camera to create a virtual environment for developers to add objects to. This would work by comparing each image to one captured a short time before, together with movement data from the accelerometer. These two images would act as two "eye" positions, letting the API extract a 3D scene as if the phone had two cameras. Then, much further in the future, I would like to add shading from light sources that the camera picks up. Another later feature will be a way to cycle through textures for the same object, allowing easy reskinning of objects.