Inspiration
I got into the industry excited by NFTs in 2021, but I never really understood the way they were being utilized. From my perspective, what we currently know as the NFT user experience is not how artists will ultimately leverage this technology for real use cases. As someone who works heavily in grassroots arts communities, I built this project to represent a user journey and integration that makes a lot more sense.
I've thought about building a project like this for two to three years, so I was excited to finally have the chance to create it. I was partly inspired by other applications that let you scan art to trigger augmented reality. That made me more aware of the image recognition capabilities on our phones, and I wondered why they hadn't been heavily integrated into an NFT collecting UX yet.
What it does
Users can log into the app with their email; behind the scenes, the app creates a wallet address associated with that account, without the user needing to understand or interact with a traditional crypto wallet. For this use case, we imagine the user at a gallery show. The app instructs them to go through the event and look for work by a specific artist.
When they find a work by the artist, they can go to the Collect page, open their camera, and scan the piece of art. The app uses image recognition to identify which artwork it is looking at, then mints an NFT representing that piece and sends it to the user's abstracted blockchain account.
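The scan-to-mint flow described above can be sketched as a small dependency-injected function. Here `recognize` stands in for the on-device image classifier and `mint` for the call that mints the edition to the user's wallet; both names are hypothetical placeholders, not the app's actual functions.

```typescript
// Hypothetical stand-ins for the app's classifier and minting calls.
type Recognizer = (frame: Uint8Array) => Promise<string | null>;
type Minter = (artworkId: string, wallet: string) => Promise<string>;

// Identify the artwork in a camera frame and mint the matching NFT
// edition to the user's abstracted wallet. Returns the transaction
// hash, or null when nothing is recognized.
async function scanAndCollect(
  frame: Uint8Array,
  wallet: string,
  recognize: Recognizer,
  mint: Minter
): Promise<string | null> {
  const artworkId = await recognize(frame);
  if (artworkId === null) return null; // nothing recognized; keep scanning
  return mint(artworkId, wallet);
}
```

Injecting the classifier and minter keeps the flow testable without a camera or a live chain.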
The user can go to their Collection to view the pieces and find additional information on the work. The app also features bonus content. Once the user collects several pieces, they unlock a new piece of art that they can collect from within the app. Collecting this bonus content also mints an NFT to their abstracted wallet. When they navigate back to their Collections page, they can find the final piece of art they collected, along with companion music that is now unlocked for listening.
How we built it
I vibe-coded the majority of this project using Windsurf, but there were still a lot of pieces that needed to be connected, and plenty of action was needed on my end throughout the process. First I built a simple proof of concept using Web3Auth sign-in and QR scanning to mint NFTs to the user's account. This was built for our local Toronto Polkadot Builders Party.
The proof of concept was built on the ERC-20 standard on Moonbase. I did this for fast execution, as the AI tooling was most familiar with this architecture. ERC-20 came with tradeoffs, including needing a centralized database to seed the NFTs in order to support the logic where multiple users can mint the same NFT.
This led me to migrate the architecture to ERC-1155 for the global hackathon submission. It allowed me to create an NFT "edition" for each piece of art that could then be minted an unlimited number of times to user accounts. The result was a cleaner architecture under the hood and a reduced need for a centralized database, moving everything on-chain or to IPFS.
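The ERC-1155 "edition" model can be illustrated in plain TypeScript rather than the actual Solidity contract: each piece of art is one token id, and any number of accounts can mint copies of that same id. This is a sketch of the semantics only, not the on-chain implementation.

```typescript
// Minimal in-memory sketch of ERC-1155 edition semantics: one token id
// per artwork, unlimited copies mintable to any number of accounts.
class EditionLedger {
  // balances[tokenId][account] = number of copies held
  private balances = new Map<number, Map<string, number>>();

  // Mint one copy of an edition to an account (open, unlimited supply).
  mint(tokenId: number, account: string): void {
    const holders = this.balances.get(tokenId) ?? new Map<string, number>();
    holders.set(account, (holders.get(account) ?? 0) + 1);
    this.balances.set(tokenId, holders);
  }

  // Mirrors ERC-1155's balanceOf(account, id) view.
  balanceOf(account: string, tokenId: number): number {
    return this.balances.get(tokenId)?.get(account) ?? 0;
  }
}
```

Because each artwork is a single shared token id, no off-chain database is needed to let multiple users collect the same piece.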
Before upgrading to the ERC-1155 version, I added the image recognition feature. I didn't want to use a cloud-based AI system, so instead I used a TensorFlow model. I took roughly 15 photos of each piece of art in varying lighting and at varying distances, then trained a simple model on them using Teachable Machine.
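A classifier like this returns a probability per artwork class, so the app has to decide when a prediction is confident enough to trigger a mint. A minimal sketch of that decision, assuming a 0.9 confidence threshold (illustrative, not the value used in the app):

```typescript
// Pick the artwork label with the highest model score, but only when
// that score clears a confidence threshold; otherwise report no match.
function pickArtwork(
  scores: Record<string, number>,
  threshold = 0.9
): string | null {
  let best: string | null = null;
  let bestScore = 0;
  for (const [label, score] of Object.entries(scores)) {
    if (score > bestScore) {
      best = label;
      bestScore = score;
    }
  }
  return bestScore >= threshold ? best : null;
}
```

Gating on a threshold prevents the app from minting the wrong edition when the camera is pointed at something ambiguous.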
After a lot of UI tweaking, I added the bonus content functionality. This checked which NFTs the user already had in their account, and once they had collected all three, it allowed them to unlock a fourth piece of content. Extra tweaking was needed because this NFT also had an MP3 file associated with it that needed to be played in the app.
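The unlock check itself reduces to a small predicate: the bonus piece becomes collectible only when the account holds at least one copy of each base edition. This is a sketch; the token ids 1–3 are illustrative, and `balanceOf` stands in for whatever on-chain balance lookup the app actually uses.

```typescript
// True when the account holds at least one copy of every required
// edition — the gate for unlocking the 4th (bonus) piece of content.
function bonusUnlocked(
  balanceOf: (tokenId: number) => number,
  requiredIds: number[] = [1, 2, 3]
): boolean {
  return requiredIds.every((id) => balanceOf(id) >= 1);
}
```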
The final piece was figuring out how to get everything on GitHub, deploy an API on Railway, and deploy the web portion on Vercel. This actually took a lot longer than expected because things were finicky, and the AI was less helpful once things left the local device.
Challenges we ran into
Launching the public API was extremely hard, and I still don't fully know why. The AI kept thinking it was something to do with a file that needed to be deleted, but that file had already been deleted. In the end, the biggest contributor was missing environment variables.
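A fail-fast startup check would have surfaced that bug immediately. A minimal sketch, assuming example variable names (`RPC_URL`, `MINTER_PRIVATE_KEY`, `CONTRACT_ADDRESS` are placeholders, not the app's actual configuration):

```typescript
// Throws at startup if any required environment variable is unset,
// instead of failing later with a confusing runtime error.
function assertEnv(
  env: Record<string, string | undefined>,
  required: string[]
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Typically called once when the server boots:
// assertEnv(process.env, ["RPC_URL", "MINTER_PRIVATE_KEY", "CONTRACT_ADDRESS"]);
```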
The other big challenge I still haven't solved is load times. Things are slow because the content lives on IPFS. If I were to push this into a production environment, I would likely add a centralized database the app can reference to help with caching pages, while still keeping the IPFS trail intact.
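The caching idea can be sketched as a memoizing wrapper: IPFS stays the source of truth, but repeat page loads skip the gateway latency. The fetcher is injected, so the cache layer is agnostic about whether the backing store is an in-memory map (as here) or a real database.

```typescript
// Wrap an IPFS metadata fetcher with a simple cache: the first request
// for a CID hits the gateway (slow path); later requests are served
// from memory while the IPFS record remains the source of truth.
function cachedFetcher<T>(
  fetchFromIpfs: (cid: string) => Promise<T>
): (cid: string) => Promise<T> {
  const cache = new Map<string, T>();
  return async (cid: string) => {
    const hit = cache.get(cid);
    if (hit !== undefined) return hit;
    const value = await fetchFromIpfs(cid); // slow path: hit the gateway
    cache.set(cid, value);
    return value;
  };
}
```

Because IPFS content is addressed by its hash, a CID's content never changes, which makes this kind of cache safe to keep indefinitely.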
Accomplishments that we're proud of
I feel that, with where things are at, the final product is a level above a proof of concept. The branding and user journey seem straightforward and simple to follow. Creating the abstracted wallet experience turned out to be far easier than expected, which was a happy surprise. Although things are slow, I'm also proud that everything except the front end is hosted on a decentralized protocol. I'm also extremely pleased with the image recognition functionality. It was so much easier than I thought it would be to create that type of user flow, and it's already got me thinking of a lot of other use cases.
What we learned
I've never built any sort of application before, so every piece of this was a learning experience, and I really enjoyed it. Working with AI to build this was fun, and frustrating at times. Although it was great at helping build things out and solve problems, I still had to pay attention to what it was doing and understand the logic of things. Otherwise, some issues would never have been solved and the design would have been a lot clunkier.
Figuring out how to use TensorFlow.js to do image recognition within the app, rather than relying on an external cloud AI, was a great level-up for my skill set. I also learned how to deploy apps using my laptop as a host/API for the first time. Surprisingly, the biggest learning curve was probably launching the public API for the demo.
This was also an excellent opportunity to dive deeper into the Polkadot ecosystem. Although I built on a fairly basic layer of the network, I explored other areas as well while evaluating the optimal architecture.
What's next for Corduroy Collector?
I'm going after local community arts funding in Canada to continue building out this project. That includes working on the technical side, but also creating or finding new art to integrate. My goal is to get the application stable enough for public use by the time we host our next arts festival at the end of March, so that festival attendees can use it to collect pieces at the event.
Additional Note on the Demo
If you're testing the app with the Vercel link below, it functions best on mobile (though it is desktop-compatible). The three images you need to scan to collect are in the Google Drive link.
Built With
- cors
- dotenv
- ethers
- express.js
- hardhat
- ipfs
- moonbase
- next.js
- node.js
- pnpm
- react
- tensorflow.js
- typescript
- web3auth