Inspiration

One of the most popular forms of social media involves snapshotting one's life, whether the occasion is a major milestone or a small moment to look back on and cherish. However, between growing technology addiction and the after-effects of the coronavirus pandemic, many people today lead a rather sedentary lifestyle, in contrast to the active lives our relatives and ancestors led for generations. To address this, our team aimed to create software that would rekindle our connection to nature and to our communities. We realized that it's easier to connect with our surroundings when given concrete goals that feel tailored to our environment, but it's difficult to automate that kind of personal touch at scale. With the advent of Large Language Models, we saw an opportunity to do exactly that. With NatureQuest, we wanted to combine the phone's camera with the reach of social media into a personalized way to encourage interaction between friends and the environment.

What it does

When you log into NatureQuest, you receive a challenge to go outside and take a picture, customized to your location with local landmarks and species. Friends and neighbors in the same city receive the same challenge, so you can complete challenges together and compete for the best shot! Once you've taken your picture, you can browse your friends' posts from around the world, along with the challenges they completed and where they completed them.

How we built it

We started our design in Figma, figuring out the features we needed. After settling on what we wanted to implement, we built the frontend with Expo and React Native to develop a simple mobile app for Android/iOS; using Expo also let us test the app directly on our smartphones. The backend is a Node.js server that coordinates challenge generation, with MongoDB storing users' usernames, passwords, and current locations. The server is also responsible for sending API requests to Google's Gemini 1.5 Pro model to generate picture-taking prompts based on the day and the user's location. We leveraged Gemini 1.5 Pro's large token limit to feed it examples of ideal challenges, as well as all previously generated challenges, to encourage variety.
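To illustrate, the server's prompt assembly might look something like the sketch below. This is a simplified, hypothetical version: the function name, fields, and wording are illustrative assumptions, not the actual codebase, but it shows the idea of packing example challenges and the full challenge history into one large Gemini prompt.

```javascript
// Hypothetical sketch of how the server could assemble a Gemini prompt.
// All names and wording here are illustrative, not from the real codebase.
function buildChallengePrompt({ city, date, examples, previousChallenges }) {
  // Past challenges are included so the model avoids repeating itself;
  // Gemini 1.5 Pro's large context window makes sending the full history feasible.
  const history = previousChallenges.length
    ? `Avoid repeating these past challenges:\n- ${previousChallenges.join("\n- ")}`
    : "This is the first challenge for this location.";

  return [
    "You are generating a daily outdoor photo challenge for a social app.",
    `Location: ${city}. Date: ${date}.`,
    "Reference a local landmark or species when possible.",
    `Examples of ideal challenges:\n- ${examples.join("\n- ")}`,
    history,
    "Respond with a single one-sentence challenge.",
  ].join("\n\n");
}

// Example usage with made-up data:
const prompt = buildChallengePrompt({
  city: "Austin, TX",
  date: "2024-03-16",
  examples: ["Snap a photo of bluebonnets along Lady Bird Lake"],
  previousChallenges: ["Photograph a grackle near the Capitol"],
});
```

The resulting string would then be sent as the text content of a `generateContent` request to the Gemini API.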

Challenges we ran into

One unique challenge of working with a tool as cutting-edge as Gemini was that its SDKs were not fully documented yet, so some exploration was required to achieve our vision. We also had a lot of undefined roles and ideas, leading to team members juggling multiple tasks at once without much order. Without a well-defined pipeline or a fully fleshed-out idea, several things we wanted to implement or explore seemed feasible at first but ended up being out of scope. For example, we wanted to use RAG to help Gemini source and use search information about the user's location, the day, and the area's history when deciding what prompt to generate, but effectively pipelining that information into a reasonable prompt was out of scope for the hackathon.

Accomplishments that we're proud of

Even though our team was not able to implement every idea we envisioned for the project, the amount of work we accomplished during the hackathon was nothing short of impressive. We completed most of the frontend to match our design files; considering this was the first time anyone on the team had used React Native, translating most of our designs so smoothly was a real achievement. On the backend, it was the first time our backend programmers had set up and worked with a server. Learning to use prompt engineering with Gemini 1.5 to generate a challenge based on the day and the user's location was also something that was pretty cool to see come together.

What we learned

Most of us were in charge of tasks involving technologies we were not familiar with, so much of what we learned during the hackathon was how to efficiently use documentation to get what we needed, even without deep familiarity with the tools. As a team, we learned a lot about React Native, the mobile-app development pipeline with Expo, and designing a mobile app in Figma. We also learned a great deal about prompt engineering and practical applications of LLMs that we will certainly apply in the near future!

What's next for NatureQuest: Capture Your Moments in the Wild!

There are still many features from our original vision that we want to implement. One of the first things we would add after the hackathon is the ability for users to view their past snapshots. This would be a core piece of the app, since it supports our original vision of capturing moments vital to a user's life, or small moments they would cherish. On top of that, strengthening our existing functionality is also a goal we would like to reach in the near future, including implementing stronger security measures and fine-tuning our prompts to generate more unique and doable challenges. An exciting possibility is letting users vote on which challenges are the most fun, then feeding those rankings back into Gemini to home in on the best concepts!
