Inspiration
Most vacations and trips result in large collections of photographs. However, scrolling through these photographs can be tedious and does not do the original experience justice. MemoryLane solves this by letting users relive their travels through AI-directed travel vlogs.
What it does
First, the user provides a collection of photographs to the application. These photos are then synthesized into textual descriptions through the OpenAI API, run via Modal. Next, the descriptions are used to generate a vlog-style narration of the journey with ElevenLabs, and background music is selected through the Spotify API. Apple MapKit is also used to locate where the photographs were taken, providing visual context and keeping the vlog accurate.
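The stitching step in the pipeline above can be sketched as a small pure function. This is a minimal illustration, not the actual MemoryLane code: the `PhotoDescription` type and `build_narration_script` helper are hypothetical names, and in the real app the captions would come from the OpenAI API (run on Modal) before the script is handed to ElevenLabs for voice synthesis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhotoDescription:
    filename: str
    caption: str                  # produced by the vision model
    location: Optional[str] = None  # e.g. reverse-geocoded via MapKit

def build_narration_script(descriptions: list) -> str:
    """Stitch per-photo captions into one vlog-style narration script.

    Inputs are plain data, so the stitching logic is testable offline,
    independent of the OpenAI and ElevenLabs calls.
    """
    lines = []
    for i, d in enumerate(descriptions, start=1):
        where = f" in {d.location}" if d.location else ""
        lines.append(f"Stop {i}{where}: {d.caption}")
    return " ".join(lines)
```

Keeping the script builder separate from the API clients makes each stage easy to swap out or test in isolation.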
How we built it
The frontend of this iOS application was created using Xcode, SwiftUI, and several native Apple UI frameworks. For the backend, we used FastAPI, Python, Modal, ElevenLabs, and Apple MapKit.
Challenges we ran into
The most prominent challenges we faced were handling Git merges to maintain responsible version control, connecting the frontend to the backend, and debugging a large application.
Accomplishments that we’re proud of
The accomplishment that we are the most proud of is building our first fully functioning iOS application in a collaborative environment.
What we learned
We learned how to divide tasks amongst ourselves for efficient teamwork and how to resolve Git merge conflicts. Additionally, we gained a foundational understanding of iOS applications and Swift.
What’s next for Memory Lane
We are planning to implement features such as personalized background music selection and narration. A more ambitious goal of ours is to turn MemoryLane into a social media platform where users can share their travel journeys.