Inspiration

We've all been there: you start a new diet with great enthusiasm, but quickly quit. We were inspired by this common experience, believing the problem isn't a lack of willpower, but flawed tools. The core issues with traditional diet apps are clear:

  • Manual food logging is slow and tedious.
  • Tedious logging means entries get skipped, so the data becomes unreliable and users never get meaningful visual feedback on their progress.
  • This daily friction ultimately erodes motivation.

Our inspiration was to solve this fundamental problem by using AI to create a truly effortless experience.

What it does

Snapit transforms diet tracking from a chore into a seamless, three-second task. Our solution follows a simple *"Snap, Tap, Log"* principle:

  • Snap: A user simply takes a photo of their meal. Our AI, powered by Amazon Bedrock, instantly begins its analysis.
  • Tap: The app presents a list of the most likely food candidates with detailed nutritional information. The user taps to confirm the correct one.
  • Log: With that single tap, the entire meal is automatically recorded in their personal diet calendar, and their daily progress dashboard is updated instantly.
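The Tap step boils down to presenting ranked candidates and logging whichever one the user confirms. The payload below is a hypothetical shape for illustration, not our actual API:

```python
# Hypothetical candidate list returned after the "Snap" analysis.
candidates = [
    {"name": "Margherita pizza", "calories": 285, "protein_g": 12, "confidence": 0.91},
    {"name": "Flatbread with tomato", "calories": 240, "protein_g": 7, "confidence": 0.06},
]


def pick_candidate(candidates, index):
    """A single tap selects one candidate, which becomes the logged meal."""
    return candidates[index]
```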

Ultimately, Snapit provides users with a frictionless way to stay on top of their health goals, supported by visual feedback and personalized targets.

How we built it

Snapit is built on a fully serverless, event-driven architecture on AWS to ensure it is scalable, secure, and cost-effective from day one.

  • User Management: Amazon Cognito handles all user sign-up and sign-in operations.
  • API Layer: We used Amazon API Gateway (HTTP API) as the single, secure entry point for all requests from our mobile app.
  • Compute: All backend logic runs in AWS Lambda functions, which frees us from provisioning or managing servers.
  • AI Engine: The core of our analysis is Amazon Bedrock, where we use a powerful multimodal model to understand the full context of a meal from an image.
  • Storage: Images are uploaded securely and directly to Amazon S3 via Pre-signed URLs. All user data and food logs are stored in Amazon DynamoDB tables.
  • Asynchronous Workflow: To provide a non-blocking user experience, we implemented a decoupled architecture using Amazon SQS. When a user requests an analysis, the job is placed in a queue, allowing the app to respond instantly while a separate Lambda function processes the AI analysis in the background.
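The enqueue-and-respond pattern described above can be sketched as a pair of Lambda handlers. This is a minimal sketch, not our actual code: the handler names, environment variable, and message fields are illustrative.

```python
import json
import os
import uuid


def make_job_message(user_id: str, image_key: str) -> dict:
    """Build the SQS message body for one analysis job (illustrative shape)."""
    return {
        "jobId": str(uuid.uuid4()),
        "userId": user_id,
        "imageKey": image_key,  # S3 object key of the uploaded meal photo
    }


def request_analysis_handler(event, context):
    """API-facing Lambda: enqueue the job and return immediately."""
    import boto3  # imported lazily so the pure helper above is testable without the AWS SDK

    body = json.loads(event["body"])
    message = make_job_message(body["userId"], body["imageKey"])

    boto3.client("sqs").send_message(
        QueueUrl=os.environ["ANALYSIS_QUEUE_URL"],
        MessageBody=json.dumps(message),
    )
    # The app gets the jobId right away and polls for the result later.
    return {"statusCode": 202, "body": json.dumps({"jobId": message["jobId"]})}


def analysis_worker_handler(event, context):
    """SQS-triggered Lambda: runs the AI analysis in the background."""
    for record in event["Records"]:
        job = json.loads(record["body"])
        # ... here the worker would call Amazon Bedrock with the photo at
        # job["imageKey"], then store the result in DynamoDB under job["jobId"] ...
```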

Challenges we ran into

Our biggest challenge was evolving from a simple, synchronous design to the robust, asynchronous architecture we have today. Initially, our analysis API would often time out while waiting on the AI model, which forced us to re-architect the core workflow around SQS and client-side polling. This was a significant but rewarding challenge. We also faced and overcame several complex CORS (Cross-Origin Resource Sharing) issues, especially after adding authentication headers; working through them deepened our understanding of how to build secure web applications that interact with cloud services.
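With the polling-based flow, the client repeatedly asks a status endpoint whether the background worker has finished. A minimal sketch of the status mapping (the item shape and attribute names are assumptions, not our actual schema):

```python
def job_status_response(item):
    """Map a stored job record to the polling endpoint's response.

    `item` is the job record fetched from DynamoDB by jobId, or None if the
    jobId is unknown; the "status" / "result" attribute names are illustrative.
    """
    if item is None:
        return {"statusCode": 404, "body": {"error": "unknown jobId"}}
    if item.get("status") != "DONE":
        # Still queued or running: the app keeps polling.
        return {"statusCode": 200, "body": {"status": "PENDING"}}
    return {"statusCode": 200, "body": {"status": "DONE", "result": item["result"]}}
```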

Accomplishments that we're proud of

We are incredibly proud of building a working end-to-end solution that solves a real-world problem. Our greatest accomplishment is the user experience itself—we successfully reduced the time and effort required to log a meal from minutes to just a few seconds.

Technically, we are proud of designing and implementing a sophisticated, fully serverless, and event-driven architecture. The asynchronous SQS workflow for AI analysis makes our system resilient and highly scalable, ready for thousands of users from day one. Finally, we successfully leveraged a powerful multimodal AI to provide contextual understanding of meals, which is a significant step beyond simple object recognition.

What we learned

This hackathon was a huge learning experience. We gained a deep, practical understanding of serverless design patterns, moving from simple synchronous functions to a complex, decoupled asynchronous architecture. We learned the critical importance of API security, implementing JWT authorizers and secure upload patterns with S3 Pre-signed URLs. Most importantly, we learned how to apply an advanced multimodal AI model to a practical use case, focusing on prompt engineering to get reliable, structured data from an image.
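In our experience, getting reliable structured data out of a multimodal model comes down to a strict prompt plus defensive parsing of the reply. A sketch of that idea (the prompt text and field names are illustrative assumptions):

```python
import json
import re

# Illustrative prompt asking the model for machine-readable output only.
NUTRITION_PROMPT = (
    "Identify the food in this photo. Respond with JSON only, in the form "
    '{"candidates": [{"name": ..., "calories": ..., "protein_g": ...}]}.'
)


def parse_model_json(text: str) -> dict:
    """Extract the JSON object from a model reply, tolerating code fences or chatter."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object in model output")
    return json.loads(match.group(0))
```

Parsing from the first `{` to the last `}` lets the pipeline survive replies wrapped in markdown fences or prefixed with conversational filler, which models occasionally produce even when told not to.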

What's next for Snapit

This hackathon project is just the beginning. Our vision for Snapit is ambitious.

  • Q3 2025: We will launch this MVP and gather crucial user feedback.
  • Q4 2025: We plan to expand the feature set with barcode scanning and detailed reports, and we will begin building a user community.
  • H1 2026: Our vision is to introduce AI-personalized meal plans, integrate with popular smartwatches, and launch our B2C subscription model.

Built With

  • Amazon Bedrock
  • Amazon Cognito
  • Amazon API Gateway
  • AWS Lambda
  • Amazon S3
  • Amazon DynamoDB
  • Amazon SQS
