Inspiration

Museums are incredible hubs of culture and history—but for many, especially younger audiences or those with limited access, they can feel static or out of reach. We wanted to reimagine the museum experience by using technology not to replace the art, but to enhance it. Our goal was to make museum visits more interactive, inclusive, and educational—while also offering the same rich experience to users exploring from home, regardless of location or access needs. That idea became ARtifact.

What it does

ARtifact is a mobile app that transforms art exploration into an interactive, AR-powered journey. It starts at home, where users place 3D artwork models into their space, sparking curiosity and inspiring museum visits. Once on-site, they embark on ArtQuests by scanning selected pieces. Each scan triggers an AWS Lambda function that initiates an image recognition pipeline via Amazon Rekognition, unlocking in-app rewards. By combining remote engagement, real-time image recognition, and gamified discovery, ARtifact makes cultural experiences more immersive and engaging.
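The ArtQuest reward step can be sketched as plain quest-tracking logic. This is a minimal illustration, not actual app code: the `ArtQuest` class and `record_scan` helper are hypothetical names, and we assume Rekognition hands back a recognized artwork's ID after a scan.

```python
from dataclasses import dataclass, field

@dataclass
class ArtQuest:
    """Hypothetical quest record: scan every listed artwork to earn the reward."""
    name: str
    artwork_ids: frozenset
    scanned: set = field(default_factory=set)

    def record_scan(self, artwork_id: str) -> bool:
        """Register a recognized artwork; return True once the quest is complete."""
        if artwork_id in self.artwork_ids:
            self.scanned.add(artwork_id)
        return self.scanned == set(self.artwork_ids)

quest = ArtQuest("Impressionist Trail", frozenset({"water-lilies", "starry-night"}))
assert not quest.record_scan("water-lilies")  # one down, reward still locked
assert quest.record_scan("starry-night")      # quest complete, reward unlocks
```

In the app this check runs after the Lambda-driven recognition pipeline returns, so the unlock feels instant on-site.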

How we built it

ARtifact was built with React Native and Expo on the frontend. The backend runs on a suite of AWS services covering authentication, storage, database management, and real-time compute. At the heart of our image recognition pipeline is an AWS Lambda function that acts as the orchestrator: triggered automatically when a user uploads a scan, it connects the surrounding services, processes the request, and invokes Rekognition for instant artwork analysis. We use S3 to store artwork images and user scans, and GraphQL via AppSync to query data in DynamoDB. Everything is deployed and managed through AWS Amplify, which simplifies our end-to-end cloud infrastructure.
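A condensed sketch of what such a Lambda can look like, assuming it is written in Python with boto3 and fired by S3 `ObjectCreated` events. The project ARN, threshold, and helper names are placeholders for illustration, not our production code:

```python
# Hypothetical ARN of a trained Rekognition Custom Labels model version.
PROJECT_VERSION_ARN = (
    "arn:aws:rekognition:us-east-1:123456789012:project/artifact/version/1"
)
MIN_CONFIDENCE = 80.0  # illustrative tuning value

def parse_s3_event(event: dict) -> list:
    """Extract (bucket, key) pairs from an S3-triggered Lambda event payload."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

def handler(event, context=None, rekognition=None):
    """Run Rekognition Custom Labels on each scan the user just uploaded."""
    if rekognition is None:
        import boto3  # available in the Lambda runtime
        rekognition = boto3.client("rekognition")
    labels = []
    for bucket, key in parse_s3_event(event):
        resp = rekognition.detect_custom_labels(
            ProjectVersionArn=PROJECT_VERSION_ARN,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=MIN_CONFIDENCE,
        )
        labels.extend(resp.get("CustomLabels", []))
    return labels
```

Keeping the event parsing in its own function makes the handler easy to unit-test without touching AWS, and the injectable `rekognition` client serves the same purpose.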

Challenges we ran into

Fine-tuning Rekognition for accurate, fast image identification was the biggest technical challenge we faced; getting reliable results took careful training data and repeated iteration. Our Lambda function helped here, acting as the real-time coordinator that triggered Rekognition and managed the response pipeline with minimal latency. Balancing the complexity of AWS services against the simplicity of a mobile user experience took further refinement, from configuring IAM permissions to managing large S3 uploads and rendering AR content smoothly on mobile devices.
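Much of that tuning reduces to choosing a confidence cutoff on Rekognition's results. As an illustration only: the list below mirrors the shape of Rekognition's `CustomLabels` response, while `best_match` and the 80.0 threshold are hypothetical.

```python
def best_match(custom_labels: list, threshold: float = 80.0):
    """Return the highest-confidence label at or above the threshold, else None.

    `custom_labels` follows Rekognition's response shape:
    [{"Name": ..., "Confidence": ...}, ...]
    """
    candidates = [l for l in custom_labels if l["Confidence"] >= threshold]
    return max(candidates, key=lambda l: l["Confidence"], default=None)

labels = [
    {"Name": "the-scream", "Confidence": 91.2},
    {"Name": "starry-night", "Confidence": 64.5},
]
assert best_match(labels)["Name"] == "the-scream"
assert best_match(labels, threshold=95.0) is None  # too strict: scan rejected
```

Raising the cutoff reduces false unlocks at the cost of more rejected scans, which is the trade-off the iteration above was about.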

Accomplishments that we're proud of

We successfully integrated real-time image recognition into a mobile experience without sacrificing performance or design. The AR placements alongside physical artworks created moments that genuinely surprised and delighted users. Most importantly, we created something that is both educational and entertaining—and accessible to users from a variety of backgrounds and locations.

What we learned

This project gave us hands-on experience working across nearly every layer of the AWS ecosystem, from serverless functions to GraphQL APIs and cloud monitoring. We also learned the importance of consistent data flow between frontend and backend systems, and how to build with scalability and user accessibility in mind from day one.

What's next for ARtifact

We’re exploring partnerships with museums to pilot ARtifact in live settings. Future updates include multilingual support, expanded global collections, and personalized art discovery based on user behavior. We’re also excited to explore how AWS generative AI services like Amazon Bedrock could help us generate content summaries, quiz questions, or audio guides based on scanned artworks.
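As a taste of that direction, here is a hypothetical prompt builder for generated content summaries. Only the prompt assembly is sketched (no Bedrock call is made), and every name below is illustrative rather than existing app code:

```python
def build_summary_prompt(artwork: dict) -> str:
    """Assemble a prompt asking a Bedrock-hosted model for a visitor-friendly summary."""
    return (
        f"You are a museum guide. In three sentences, introduce "
        f"'{artwork['title']}' by {artwork['artist']} ({artwork['year']}) "
        f"to a first-time visitor, then suggest one question they might ponder."
    )

prompt = build_summary_prompt(
    {"title": "Water Lilies", "artist": "Claude Monet", "year": 1906}
)
```

A real integration would send a prompt like this through the Bedrock runtime's `InvokeModel` API and cache the result alongside the artwork's record in DynamoDB.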
