Growing up, I loved playing RollerCoaster Tycoon on my family's desktop computer. And as the name would suggest, the best part of the game was designing your own rollercoasters. Fast-forward to 2020 and I'm becoming more and more involved in Augmented Reality, especially on social platforms like Instagram and Snapchat. One night, as I'm falling asleep, I have this mini-dream of riding around on a digital rollercoaster. I quickly jotted down the idea in a note on my phone and the next day I got to work on creating it.
What it does
RollercoastAR is a rollercoaster designer and simulator. Using world tracking on the back-facing camera, a custom coaster can be built from the 21 available track types, or a pre-built option can be constructed automatically from the blueprint menu. The color scheme of the cars, track, and supports can also be customized. Once the design is finished, tapping the lever begins the cars' journey around the track. The user can resize, rotate, and position the coaster in the world view, or they can switch to the front-facing camera to get a close-up view of themselves riding in the cars. They can even open their mouth to cause the digital versions of themselves to raise their hands into the air as they go around the track**.
**I realize the front-facing effect may not qualify for this hackathon - if that's the case, please disregard any mentions/depictions of the front-facing effect
How I built it
I started by creating the 21 track types (using 10 distinct 3D models) and car models in Rhinoceros, a 3D modeling program. I made sure that each track fit into a grid, so that piecing them together would be cleaner in the final effect. I tried to emulate the main components of a RollerCoaster Tycoon track, including stations/cars, small and large inclines/declines, and a loop.
Initially, I knew that I would need to specify whether a track needed to be transformed in any way before placement. For example, a left turn uses the same model as a right turn, so when placing a left turn, it has to be rotated and translated a bit so it lines up properly. I also knew that I would have to keep track of the current position and rotational direction so I could place new tracks at the correct position and orientation. So each track type specifies position, height, and rotation offsets to apply when it is selected. Other information was added as I tested, like where supports should be built for each track, the NativeUI icon associated with each track, and the types of tracks that are allowed to be placed after it. The last part of each track definition was the movePts method, which calculates an array of position and rotation keyframes for the car based on its speed as it travels around the track.
Once I had the basic definition of each track, I was able to put together an addBlock function that places the chosen track at the current position with the correct orientation/height and updates the relevant variables / UI elements, including changing the options in the NativeUI picker and appending new carPath keyframes. I also had to create helper functions for things like transforming offsets based on the current rotational direction of the track, constructing supports based on track type and current height, and adding the ability to undo the last placed track or delete the whole coaster.
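To make the bookkeeping concrete, here is a minimal sketch of how a few track definitions and an addBlock function might fit together. All names (`TRACK_TYPES`, `createCoaster`), the grid geometry, and the values are my illustrative stand-ins, not the effect's actual code, which also handles supports, icons, and car keyframes:

```javascript
// Illustrative sketch only: each track type records how placing it moves
// the "build cursor" (position, height, heading) and which pieces are
// allowed to follow it.
const TRACK_TYPES = {
  straight:     { offset: { x: 0, z: 1 }, heightOffset: 0, turn: 0,
                  allowedNext: ["straight", "leftTurn", "inclineSmall"] },
  leftTurn:     { offset: { x: -1, z: 1 }, heightOffset: 0, turn: -90,
                  allowedNext: ["straight", "leftTurn"] },
  inclineSmall: { offset: { x: 0, z: 1 }, heightOffset: 1, turn: 0,
                  allowedNext: ["straight"] },
};

function createCoaster() {
  const state = { x: 0, z: 0, height: 0, heading: 0, placed: [] };
  return {
    state,
    addBlock(type) {
      const def = TRACK_TYPES[type];
      const prev = state.placed[state.placed.length - 1];
      if (prev && !TRACK_TYPES[prev].allowedNext.includes(type)) {
        throw new Error(type + " cannot follow " + prev);
      }
      // rotate the piece's local offset into the current heading
      // before advancing the cursor
      const rad = (state.heading * Math.PI) / 180;
      state.x += def.offset.x * Math.cos(rad) + def.offset.z * Math.sin(rad);
      state.z += def.offset.z * Math.cos(rad) - def.offset.x * Math.sin(rad);
      state.height += def.heightOffset;
      state.heading = (state.heading + def.turn + 360) % 360;
      state.placed.push(type);
    },
  };
}
```

The real addBlock additionally appends carPath keyframes and updates the NativeUI picker options, which are omitted here.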
Now that the building functionality worked, it was time to connect it to the UI element - the NativeUI picker. I added icons for each track type, plus undo and delete. In the original version of the effect, I also added icons for playing/pausing the animation and opening the paint menus. I then connected these options to my functions.
The final feature for the original version of this effect was animating the cars around the track. Using the keyframes generated by the addBlock / movePts functions, I set a time interval for the cars to cycle through each keyframe. As long as I kept the frame rate high, the animation looked smooth.
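A rough sketch of that playback loop, with a generic setInterval-style timer standing in for Spark's time APIs (the car and path shapes are hypothetical):

```javascript
// Illustrative playback loop: each tick advances every car to its next
// keyframe, wrapping at the end of the path so the cars keep looping.
// The timer function is injected so this works with any setInterval-like API.
function startRide(cars, path, setIntervalFn, frameMs = 33) {
  let tick = 0;
  return setIntervalFn(() => {
    cars.forEach((car, i) => {
      // stagger each car a few keyframes behind the one in front of it
      const frame = path[(tick + i * 5) % path.length];
      car.x = frame.x;
      car.z = frame.z;
      car.yRotation = frame.yRotation;
    });
    tick = (tick + 1) % path.length;
  }, frameMs); // ~30 ticks per second keeps the motion looking smooth
}
```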
At this point, I had a functional world effect that accomplished the goals I originally set for myself. However, as I discuss a bit more below, the original version that was published had issues, namely that the menu itself was too complicated and there were no pre-built tracks to choose from. In addition to this feedback, I also had the idea to add a front-facing camera experience to the effect that used the user’s face to make it a little more personal/shareable.
Version 2 required copies of the track to be built for the front- and back-facing cameras simultaneously. So I duplicated my scene, moved the copy to the Focal Plane, and refactored my code so that every change made to the back-camera scene is mirrored in the front-camera copy. For running the animation, I wanted to keep the front car in the center of the screen throughout the journey, so I created logic that applies the car transformations from the keyframe animation to the container object(s), making it appear that the camera is moving through the scene, following the lead car.
I also created a method for building coasters based on a few pre-built designs I made. And lastly, I added in the 3D UI elements - the start lever, the paint buckets, the building tools, the pre-built option blueprint, and the height indicator (to help with lining up the ends of the track).
Challenges I ran into
There were three main platform-specific challenges that I had to work around on Spark:
- Spark does not allow for dynamic instantiation / reparenting of scene objects
- Facebook/Instagram is strict when it comes to custom UI elements
- The Reactive programming style didn’t really suit this application
No dynamic instantiation meant I had to pre-instantiate a set number of each track type and its supports, which means users are not entirely free to create the track of their dreams if it includes more than 20 loops, for example. No reparenting meant that I would have to build two copies of the track at once - one for the back camera and one for the front. The track limit is arbitrary, so I could theoretically pre-instantiate as many copies of each track as I want; practically, though, having large numbers of tracks in my scene made navigating the Spark interface slow and frustrating. Creating two copies of the entire track (effectively doubling the number of objects in my scene) created similar issues, in addition to requiring a refactor of my code to support editing both tracks simultaneously.
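In code, this workaround amounts to an object pool: everything is created up front and hidden, and the script checks pieces in and out instead of instantiating or destroying them. A hypothetical sketch:

```javascript
// Hypothetical object pool over pre-instantiated scene objects. `pieces`
// maps each track type to the fixed set of copies that already exist in
// the scene; acquire/release toggle visibility instead of creating/destroying.
function createPool(pieces) {
  const used = new Set();
  return {
    acquire(type) {
      const free = pieces[type].find((p) => !used.has(p));
      if (!free) return null; // pool exhausted, e.g. a 21st loop is refused
      used.add(free);
      free.hidden = false;
      return free;
    },
    release(obj) {
      used.delete(obj);
      obj.hidden = true; // hide it and return it to the pool
    },
  };
}
```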
During the review process, custom UI elements - including anything that resembles a button, slider, toggle, etc. - cause an effect to be rejected. I found this out the hard way the first time I submitted this effect with a fully custom 2D interface overlay with buttons and sliders. However, Spark provides two built-in UI elements in the NativeUI picker and slider to give creators some standardized UI options, so I figured I could try adapting my custom UI to live within those. The next iteration of the effect nested all of the menus for building, painting, and running the coaster into the NativeUI picker and I was able to get the effect approved. I was very excited, but as people began using the effect, I was hit with overwhelming feedback that the menu was really complicated to navigate and people really wanted pre-built options (which wasn’t something I had considered) so they could quickly have something worth sharing. To work around this, I tried reimagining the UI in a 3D format. Rather than buttons or nested Pickers, I created tappable 3D elements that represented the menus and that, when tapped, opened the corresponding Picker menu.
Reactive programming can be great, and I’ve really enjoyed using it in cases where it makes updating a property relative to another object’s property extremely easy. That said, it wasn’t ideal for this application, as I needed static values that I could reference and adjust on the fly to keep track of the state of the experience. To work around this, I created my own vec3 class, implementing standard vector math methods along with methods to convert to the relevant signal data types, such as PointSignal. This wasn’t strictly necessary - I was able to use Signals successfully in the original version of the effect - but the class cleaned up a lot of code that felt cluttered, and I think it might be useful in future projects as well.
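Roughly, such a class holds plain numbers and only converts to a signal at the boundary where Spark needs one. This is my reconstruction of the idea, not the project's exact class; `Reactive.point(x, y, z)` is Spark's constructor for a PointSignal, passed in here as a parameter so the math stays testable on its own:

```javascript
// Plain-number 3D vector: standard math methods, plus a bridge to Spark's
// signal types at the edges. (Illustrative reconstruction.)
class Vec3 {
  constructor(x = 0, y = 0, z = 0) { this.x = x; this.y = y; this.z = z; }
  add(v)   { return new Vec3(this.x + v.x, this.y + v.y, this.z + v.z); }
  sub(v)   { return new Vec3(this.x - v.x, this.y - v.y, this.z - v.z); }
  scale(s) { return new Vec3(this.x * s, this.y * s, this.z * s); }
  dot(v)   { return this.x * v.x + this.y * v.y + this.z * v.z; }
  length() { return Math.sqrt(this.dot(this)); }
  // Convert to a PointSignal only when handing a value back to Spark,
  // e.g. vec.toPoint(require("Reactive"))
  toPoint(Reactive) { return Reactive.point(this.x, this.y, this.z); }
}
```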
In addition to the platform-specific challenges, there were also technical challenges that I faced. These included:
- Rotating cars/tracks using Euler angles as opposed to quaternions sometimes resulted in unwanted effects
- Keeping the front car in the same spot on the screen in the front-facing experience
Both came mostly from my own inexperience with specific 3D properties. I had never really thought about exactly how rotations are applied in 3D space, but in most cases, Euler angles are applied in order, i.e. X then Y then Z. Essentially this just led to cars being rotated along the global z-axis when I was only applying x and y rotations (I think…). The simple fix for this was using nested nullObjects that exclusively received one-axis rotations. This ensured that rotations were being applied to the local axes as opposed to global axes, which made way more sense to me. Similarly, changing the front-facing experience to move the scene relative to a fixed camera to achieve a tracking effect was something I had never done before. The solution turned out to be pretty simple - negate/invert the positions/rotations of the lead car as it travels around the track and apply those transformations to the one-axis rotation objects that contain the entire scene - but it was definitely something that left me scratching my head for a few days.
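As a sketch, the inversion looks something like this (names and data shapes are hypothetical; in the real effect the result is applied through the nested one-axis rotation objects described above):

```javascript
// Given the lead car's keyframe, compute the transform to apply to the
// scene container so the car appears fixed at the center of the screen:
// counter-rotate, and express the negated position in the camera's frame.
function sceneTransformFor(carFrame) {
  const rad = (-carFrame.yRotation * Math.PI) / 180;
  const x = -carFrame.x;
  const z = -carFrame.z;
  return {
    position: {
      x: x * Math.cos(rad) - z * Math.sin(rad),
      y: -carFrame.y,
      z: x * Math.sin(rad) + z * Math.cos(rad),
    },
    yRotation: -carFrame.yRotation, // undo the car's heading
  };
}
```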
Accomplishments that I'm proud of
I’m pretty proud of the workarounds I came up with for the challenges I faced. I think the 3D UI adds a fun unique-to-AR element to things that I probably wouldn’t have ever considered if I didn’t have to navigate around the UI restrictions and I think the front-facing camera experience really adds an extra dimension to the whole effect. Overall, I’m just proud that I accomplished a personally ambitious goal and that I incorporated a lot of the direct feedback I got from people who used the effect.
What I learned
Of course, I learned more about building 3D experiences, implementing feedback from users, and navigating Spark AR. But I think the most important thing I learned through building this effect is that you’re much more likely to put the work into building something full and complete if you’re personally excited by it - conceptually, technically, etc. - because it will never feel complete to you. There’s always some new feature worth experimenting with, some detail missing, or some bug or technical challenge to overcome. I think that drive is the only thing that can make something great.
Not that greatness is guaranteed - just because I spent a lot of time obsessing over this effect doesn’t mean that it isn’t needlessly complicated or that it’s interesting to people without a nostalgic attachment to rollercoaster simulators - but I think my personal excitement over the concept and technical challenge gave me the opportunity to do something special, regardless of whether the end result is actually special or not.
What's next for RollercoastAR
There are a few things I would look to improve/add in subsequent versions of this effect:
- It would be fun to add more pre-built options based on user-submitted designs. The logistics around that aren’t too clear, but it would be fun to feature other people’s imaginations
- Currently, the cars race around the track separately. Ideally, they would remain attached as a single train that would ride around the track together, with more realistic acceleration/deceleration. This would probably require a rework of the movePts method so that it kept cars together by distance as opposed to time
- Some of the animation keyframes don’t quite match up perfectly with the track. I would like to refine those a bit
- Adding new track types, like corkscrews, or entirely new track formats, like an inverted or wooden coaster
- I think adding shadows would make the back-facing effect feel a bit more real - doing so without built-in shadow casting would be tough, but there’s probably a way to do it with SDFs if I’m feeling really ambitious
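For the train idea in particular, one way the movePts rework could go (my sketch under assumed data shapes, not a committed design) is to resample the time-based keyframes by arc length, so that cars offset by a fixed distance stay coupled like a real train:

```javascript
// Resample a keyframe path at fixed-distance intervals. Cars placed at
// consecutive output indices are then separated by `spacing` units of
// track, regardless of how the original keyframes were spaced in time.
function resampleByDistance(path, spacing) {
  // cumulative distance along the original keyframes
  const dist = [0];
  for (let i = 1; i < path.length; i++) {
    const dx = path[i].x - path[i - 1].x;
    const dz = path[i].z - path[i - 1].z;
    dist.push(dist[i - 1] + Math.hypot(dx, dz));
  }
  const total = dist[dist.length - 1];
  const out = [];
  for (let d = 0; d <= total; d += spacing) {
    // find the segment containing distance d and interpolate within it
    let i = 1;
    while (dist[i] < d) i++;
    const t = (d - dist[i - 1]) / (dist[i] - dist[i - 1] || 1);
    out.push({
      x: path[i - 1].x + t * (path[i].x - path[i - 1].x),
      z: path[i - 1].z + t * (path[i].z - path[i - 1].z),
    });
  }
  return out;
}
```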
Outside of Instagram, I’m currently putting together an iOS version of the game using ARKit in my free time. With the extra freedom in file size and features, I think turning it into an entire AR amusement park simulator would be pretty fun.