Inspiration

At events like hackathons, you have the opportunity to network with hundreds or even thousands of people. Unless you do some sleuthing in advance or already know the prominent attendees, you mostly come into events like these without much of a clue about who specifically to network with; you are left to the luck of the draw of whoever you happen to walk up to. So we thought: what if you had something like LinkedIn, but scoped to a single event (say, HackGT), that could match you with attendees who align with your interests? We also looked at the NFC-equipped HackGT participant cards and thought: what if we could bring that networking/matchmaking into real life? Thus came to fruition the idea of NetTag.

What it does

NetTag combines a mobile app that provides event-scoped networking and matchmaking with a wearable, Bluetooth Low Energy (BLE) enabled card, designed to attach to a lanyard, that notifies you when you are near somebody who aligns with your interests and would be worth connecting with. You provide a profile about yourself; for our initial MVP, centered on a hackathon such as HackGT, this could include your interests, past projects, and project prospects. Your profile is then compared against all other profiles using Cedar OS spells and prompt engineering, taking into account the data from your LinkedIn upload as well as any interest criteria you provide ("I want someone who works in cybersecurity"), to produce scored matches. You specify a minimum match percentage, and everyone who falls within that match window is sent to your tag to keep an eye out for. When your tag senses that a matched person's tag is nearby, it gives you feedback by blinking its LED and sends a message back to your phone to push a notification; opening the app then shows the precise distance to the person, at-a-glance info about them, and conversation starters.
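The minimum-match-percentage step described above can be sketched roughly as follows. This is a simplified illustration, not our actual backend code; the function name, profile shape, and scores are hypothetical, and the real scores come from the AI matching pipeline.

```python
# Hypothetical sketch of the match-threshold filtering step.
# Each candidate profile arrives with an AI-assigned compatibility
# score in [0, 100]; profiles at or above the user's minimum match
# percentage are forwarded to the tag as a watchlist.

def filter_matches(candidates, min_match_pct):
    """Return candidates whose score meets the user's minimum,
    sorted best-first, so the tag only watches for strong matches."""
    matched = [c for c in candidates if c["score"] >= min_match_pct]
    return sorted(matched, key=lambda c: c["score"], reverse=True)

candidates = [
    {"id": "a1", "name": "Priya", "score": 91},
    {"id": "b2", "name": "Sam",   "score": 64},
    {"id": "c3", "name": "Lee",   "score": 78},
]
watchlist = filter_matches(candidates, min_match_pct=75)
# watchlist contains Priya (91) and Lee (78), in that order
```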

How we built it

The mobile app is built with React Native and Expo, using Gluestack UI as its component library, with agentic interactivity implemented via CedarOS. The backend (after many iterations) is written in Python with Flask, using Mastra for AI and Snowflake for data; Gemini models power the AI features within the application. The hardware consists of a Raspberry Pi Pico W programmed with MicroPython, a blue LED, and a 3D-printed case. As is usual for a hackathon project with a time constraint this tight, it would be remiss not to mention our judicious use of Cursor for a good majority of the code.
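To give a flavor of the tag's "matched person nearby" decision, here is a rough sketch of proximity logic using the common log-distance path-loss model for BLE RSSI. This is not our actual firmware; the function names and the calibration constants (`TX_POWER`, `PATH_LOSS_N`, the 5 m range) are hypothetical, and real readings are noisy enough that you'd want smoothing on top.

```python
# Hypothetical sketch of the tag's proximity decision. RSSI is converted
# to an approximate distance via the log-distance path-loss model:
#   distance = 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

TX_POWER = -59      # assumed RSSI (dBm) measured at 1 m; device-specific
PATH_LOSS_N = 2.0   # assumed path-loss exponent (~2 in free space)

def estimate_distance_m(rssi):
    """Rough distance estimate in meters from a single RSSI reading."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def should_blink(rssi, max_range_m=5.0):
    """Blink the LED when a matched tag appears within range."""
    return estimate_distance_m(rssi) <= max_range_m
```

On the actual Pico W, a decision like this would drive the LED via MicroPython's GPIO, and a companion message would go back to the phone over BLE to trigger the push notification.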

Challenges we ran into

It goes without saying that 36 hours is not a lot of time to implement even an MVP of an idea this ambitious, with multiple very involved parts and layers. AI is imperfect, so even with our heavy use of it to save time writing code, we spent a lot of time either fighting it to do exactly what we wanted or debugging issues kicked up by Cursor, especially when implementing frameworks such as CedarOS, Mastra, and Snowflake, which are more obscure and thus less visible to our agents.

We changed the platform's architecture at several points during the hackathon, either to start over when issues were too deeply rooted in a given implementation, or to pivot toward sponsor challenges we hadn't been aware of from the start. Those changes took up time and added to the jank in the codebase, to the point where we did a total refactor of the frontend and backend past the halfway mark to restore everything to a somewhat sane state, with single-digit hours remaining once all of that was said and done. Hardware added further layers to the onion, especially interfacing with it over BLE from a mobile device using React Native APIs (much less inside an iOS simulator).

All of that left us with a product that is very strong in theory (we've discussed it with multiple people and gotten good feedback) but, purely due to time constraints, is currently missing desired features, with a messy demo and codebase. On top of that, at least one of our team members (Garret) had to get used to the vibe-coding pipeline from scratch, having never used AI agents to program on his behalf before, which didn't bode well for the ultra-high-paced hackathon environment; there were learning curves for everyone. To top it all off, very little sleep compounded many of these issues by reducing our cognitive capacity.

Accomplishments that we're proud of

We are proud that we were able to flesh out such a good idea on paper and get somewhat of a demo up and running despite the time limitations and issues we ran into. It may not seem like a lot, but for us it's like scaling a good chunk of the mountain.

What we learned

We think the biggest lesson of all is that next time we should be well informed about what's offered to us, make a plan early on and stick to it, be realistic about a minimum/core implementation given the time constraints, and not let changes and disorganization throw us off and waste time like they did this time around. We also learned how to use frameworks such as CedarOS and Mastra from scratch, and solidified our skills in the other frameworks we used.

What's next for NetTag

All of us agree that this project could potentially be taken beyond the hackathon. As it stands, doing so would mean redoing the entire codebase and doing a lot more planning to turn it into a real product rather than an MVP that merely had to be good enough to fit in 36 hours, but the idea is there and it is ripe to be executed if any of us find the time to work on it.
