Introduction and Inspiration
What inspired us, and what led us to this hackathon. The only thing thinner in AR than the line between "A" and "R" is the margins on recyclable materials.
New and emerging technologies like artificial intelligence and mixed reality offer fresh solutions to long-standing problems that burden both the planet and the economy. So when we heard we could participate in ImmerseGT - the world's largest XR hackathon - against the backdrop of revolutionary new AI tools changing everything, we knew we had to sign up. And when we saw the tracks, we knew we wanted to focus on sustainability and creating value for businesses.
Right now, supply chains - especially for sustainable materials - are inefficient. Because it's often cheaper to make new materials (plastic is the classic example) than to recycle old ones, responsible firms have little incentive to recycle or reuse. So we overspend on materials we don't need, all the while contributing to pollution and climate change, because there's no practical alternative.
However, under current economic conditions - supply shortages, new environmentally based tax programs - sustainably produced materials are within striking distance of parity with newly produced materials, or even of undercutting them. But operations need to get cheaper across the board before we reach that point. What has to happen to get there?
Vision: What It Does
Why this project, and why this platform. A major pain point for firms across the recycling and supply chain spaces is classification. Consumers face bins in ten colors and seven shapes, don't know which one their single plastic bag belongs in, and usually toss it into the "real" trash. Factories lose man-hours to mistakes made moving materials of certain types from belt A to belt B. Scrapyards dig, frustrated, through a batch of inventory they already sorted because a single misplaced item turned out to be the wrong type of plastic. Guessing wrong means losing money - lots of it. Costs would plummet if we could get this one thing right.
Imagine a world where firms pump out vast quantities of high-quality recycled materials cheaply thanks to automated classification. The overhead of correcting mistakes, the payroll spent on manual sorting, and the simple prevention of honest errors would save firms incredible amounts of money. If anything can make this cheaper, it's automated software.
Enter Ciclo. This proof-of-concept application uses augmented reality and machine learning to classify items in 3D space in real time. Bottom line: you point an Apple device - your phone, an iPad, whatever you may have - at some items, and it tells you what can and can't be recycled. Classifying images is nothing new, and datasets exist for these applications, but this project was born out of a vision for a solution that works by and for people in all aspects of sustainable operations. In addition to the classification engine, there is a sample statistics page demonstrating a key use case: recording and managing large amounts of recycled material to streamline operations and save firms money.
User stories and use cases include:
- A device like an iPad placed above or near waste bins to provide real-time "portal" information, lowering the rate of misplaced items in different waste categories
- Providing workers at firms with augmented reality-capable phones for quicker, more accurate filtering of materials on belts and floors
- Integrating into fully automated robotic platforms for sorting and allocating recyclable materials
- Using the integrated statistics to calculate relevant figures on sustainable materials use for tax, tax credit, accounting, and marketing purposes (e.g., how many pounds of plastic did people recycle? how much did we make from using this metal?) - a rough sketch of this aggregation follows this list
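As a hypothetical sketch of that last use case - the `RecycledEntry` type and the weights below are our own illustration, not the app's actual data model - per-material totals could be aggregated like this:

```swift
import Foundation

// Illustrative only: one logged item of recycled material.
struct RecycledEntry {
    let material: String   // e.g. "plastic", "aluminum"
    let weightLbs: Double  // weight of the item in pounds
}

// Roll individual entries up into per-material totals.
func totals(for entries: [RecycledEntry]) -> [String: Double] {
    entries.reduce(into: [:]) { result, entry in
        result[entry.material, default: 0] += entry.weightLbs
    }
}

let log = [
    RecycledEntry(material: "plastic", weightLbs: 1.2),
    RecycledEntry(material: "plastic", weightLbs: 0.8),
    RecycledEntry(material: "aluminum", weightLbs: 0.1),
]
print(totals(for: log)["plastic"] ?? 0)  // 2.0 lbs of plastic recycled
```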
Implementation
How we built the project, and the challenges we faced. We started by brainstorming a vision based on the provided tracks and generating ideas for what we could build. Then we wrote user stories and mockups of the use cases we were targeting as we narrowed down the range of ideas. Doing so, we identified the key problem we wanted to solve and developed the conceptual proof of concept for what we wanted Ciclo to accomplish.
After we had the key features and the proof of concept sketched out, we researched options for datasets, models, and platforms. This was challenging: we had limited time, but there were dozens of potential choices for implementing the idea, and several equally valid approaches. We had prior experience in SwiftUI and knew the CoreML library was easier to work with there, so we decided to build the project in SwiftUI. However, this came with a few roadblocks.
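For the curious, here's a minimal sketch of the kind of glue this setup requires - wrapping RealityKit's ARView in a SwiftUI view. This is our own illustration of the standard pattern, not the project's exact code:

```swift
import SwiftUI
import RealityKit
import ARKit

// Bridge an ARKit-backed ARView into the SwiftUI view hierarchy.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Track the device's position so virtual labels stay anchored in 3D space.
        arView.session.run(ARWorldTrackingConfiguration())
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
```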
Several of these classes and frameworks, while intuitive, take time to get working bug-free within a 36-hour window. Our initial idea of training an object detection model on a large dataset (~1,500 images, 50+ annotation types) fell through simply because training on our laptops would have taken longer than the hackathon itself. Regardless, we followed tutorials and iterated thoroughly throughout this process, stopping to absorb what we learned and to try new approaches.
After a certain point, we realized the best way to showcase our idea was to use the built-in model behind Apple's VisionKit and ARKit, and to display what the model found in an ARView. We wanted to integrate other APIs and libraries, such as CoreML, and to do manual drawing of object boundaries and manual AV control, but this proved infeasible with the time and resources we had. In the end, regardless, we're proud of what we built.
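As a rough sketch of that approach - assuming the Vision framework's built-in VNClassifyImageRequest as the implicit model, since this is our illustration rather than the app's exact code - classifying each camera frame looks something like this:

```swift
import ARKit
import Vision

// Run Apple's built-in image classifier on incoming ARKit camera frames.
final class FrameClassifier: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let request = VNClassifyImageRequest { request, _ in
            guard let observations = request.results as? [VNClassificationObservation] else { return }
            // Keep only confident labels; downstream logic would map them
            // to recyclable vs. non-recyclable categories.
            let labels = observations.filter { $0.confidence > 0.5 }.map(\.identifier)
            print(labels)
        }
        // A production app would throttle this rather than classify every frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }
}
```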
Results
What we've learned, and what's next. We learned that it's important to thoroughly document your steps when developing a proof of concept. When you're dealing with a set of interconnected problems, it's easy to get lost in circular thinking and end up re-trying past ideas with slight modifications that don't reflect any new understanding. It's also crucial to be patient: in a field like AR, everything is new, and there isn't as much established knowledge as in other areas of software engineering.
We hope to continue this project and build it up. Technically speaking, there's a lot left on the table that we wanted to try! A few key proposals for new features and refactoring, in no particular order:
- Collecting more data and annotations, standardizing and aggregating the formats, and adding semantic information to annotations ("plastic bag is a subset of plastic", etc.; see the sketch after this list)
- Training our own model based on the above information, and fine tuning it as needed
- Expanding the ARKit implementation with either a built-in CoreML object detector or another, more expansive neural network library
- Adding live-updating bounding boxes to the AR feed for more industrial use cases, and experimenting with Apple's built-in libraries in this area
- Fleshing out the statistics aggregation and refactoring the code for it, among several other improvements
- Creating an API to allow this to act as a platform to interact with industrial equipment
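To illustrate the semantic-annotation idea from the first bullet, here's a hypothetical sketch - the labels and categories are ours, purely for illustration - of how fine-grained labels could roll up into broader material categories:

```swift
// Broad material categories that fine-grained labels roll up into.
enum Material: String {
    case plastic, metal, glass, paper, trash
}

// "plastic bag is a subset of plastic", and so on.
let semanticParents: [String: Material] = [
    "plastic bag": .plastic,
    "water bottle": .plastic,
    "soda can": .metal,
    "cardboard box": .paper,
]

func category(for label: String) -> Material {
    // Unknown labels conservatively fall back to trash.
    semanticParents[label] ?? .trash
}

print(category(for: "plastic bag"))  // plastic
```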
In terms of the business case, we hope to make the user interface even more accessible to the relevant users. The immersive design was great for the hackathon and the applications we envisioned, but may not be ideal for every client; that depends on user feedback, which we still need to collect. We'd like a "snap a pic" variation for more consumer-facing uses, and an expanded fully-immersive mode for more industrial applications. Letting users integrate Ciclo with their existing tools is something else to look forward to.
If you've made it this far, thank you for reading and we hope this was interesting. Best of luck to everyone!
Built With
- arkit
- swiftui
- xcode
