Inspiration

When brainstorming project ideas, we realized it would be interesting to work with the Gemini API and image generation to recommend outfits and alterations based on a piece of clothing. The idea came from a discussion about apps that suggest recipes based on what's in your fridge; in that same vein, we started thinking about clothing and closets. We initially wanted to build a similar app that generated outfits for users, but we pivoted toward alterations once we realized that neither of us was very good at sewing, though we both wanted to learn and take on projects.

What it does

The app's main functionality, generating alteration suggestions and outfit images, starts from a single user input: an image of a piece of clothing. From that image, we output an altered version of the garment along with feedback on how to make the alteration (sew, embroider, etc.), tailored to the skills and sewing level indicated in the user's profile. This allows the feedback to adapt to each user's preferences, style, and level.
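
To make the skill-level tailoring concrete, here is a minimal sketch of how feedback could be adapted to a profile. All names here (`SkillLevel`, `buildFeedbackPrompt`, the profile fields) are illustrative assumptions, not the app's actual code:

```typescript
// Hypothetical sketch: tailoring alteration feedback to the user's profile.

type SkillLevel = "beginner" | "intermediate" | "advanced";

interface UserProfile {
  skillLevel: SkillLevel;
  skills: string[]; // e.g. ["embroidery", "hand sewing"]
}

// Build the instruction sent to the language model so the suggested
// alteration steps match what the user can realistically do.
function buildFeedbackPrompt(garmentDescription: string, profile: UserProfile): string {
  const techniques =
    profile.skills.length > 0 ? profile.skills.join(", ") : "basic hand sewing";
  return (
    `Suggest one alteration for this garment: ${garmentDescription}. ` +
    `Explain the steps for a ${profile.skillLevel} sewist, ` +
    `preferring these techniques: ${techniques}.`
  );
}

const prompt = buildFeedbackPrompt("an oversized denim jacket", {
  skillLevel: "beginner",
  skills: ["embroidery"],
});
console.log(prompt);
```

The same garment photo then yields different instructions for a beginner who embroiders than for an advanced sewist, which is the "different feedback per user" behavior described above.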

How we built it

Several steps and parts went into building this app. We wanted each user to have an individual experience, so we used Convex as the backend to keep track of users and user data, serving as both a database and a form of user authentication. We used shadcn/ui [pre-built design components] as a styling tool to keep a clean, modern look, and to get built-in UI components (header, separator, etc.) working in our program without having to define them ourselves. For the alteration photo and suggestion, we used Gemini for the image-to-text step (describing the suggested alteration) and one of Black Forest Labs' text-to-image models to generate an altered version of the input clothing. We then pieced each part together to get the final product.
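
The two-model flow above can be sketched as a small pipeline. The client interfaces here are assumptions standing in for the real Gemini and Black Forest Labs API calls, which need API keys and network access:

```typescript
// Sketch of the two-step pipeline: image -> text (Gemini) -> image (BFL model).
// The interfaces are hypothetical stand-ins for the real API clients.

interface ImageDescriber {
  describe(imageBytes: Uint8Array, prompt: string): Promise<string>;
}
interface ImageGenerator {
  generate(prompt: string): Promise<Uint8Array>;
}

async function alterClothing(
  photo: Uint8Array,
  describer: ImageDescriber,
  generator: ImageGenerator,
): Promise<{ suggestion: string; alteredImage: Uint8Array }> {
  // Step 1: turn the clothing photo into a text alteration suggestion.
  const suggestion = await describer.describe(
    photo,
    "Describe this garment and suggest one sewing alteration.",
  );
  // Step 2: render the altered garment from that text suggestion.
  const alteredImage = await generator.generate(suggestion);
  return { suggestion, alteredImage };
}

// Demo with stub clients in place of the real APIs:
const stubDescriber: ImageDescriber = {
  describe: async () => "denim jacket; suggest cropping the hem",
};
const stubGenerator: ImageGenerator = {
  generate: async (p) => new TextEncoder().encode(p),
};

alterClothing(new Uint8Array(), stubDescriber, stubGenerator).then((r) =>
  console.log(r.suggestion),
);
```

Separating "describe" from "generate" like this is also what lets the text suggestion double as the written how-to-alter feedback shown to the user.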

Challenges we ran into

There were many challenges, as parts of this project were new to both of us.

For image generation, I (Jazmine) mainly struggled with image-to-image generation. My first attempt sent the input image to Gemini to generate an output image, which ran into the limitations of the free tiers of Nano Banana and Imagen, Google's image generation models. I had to prompt a text-to-image model to use the input image to generate the desired output. Figuring out how to output the altered piece of clothing took me a long while, as I tried out and researched multiple models while looking for cheap image-generation pricing. I ended up paying a dollar, but it worked out.

For user authentication, the issue I (Nailat) faced was making sure Google OAuth worked correctly, so that when a user logs in we keep a record of the user's name, profile image, skills (if set in the profile), and email address, with the password saved securely. This was tough because a schema.ts file has to be written to define what is stored in the database, and reconciling what I wanted to store with what the database already stored automatically sometimes resulted in data not being saved properly; this was fixed by extending what the database saves. I also had issues with the password-reset flow, because initially each reset request was treated as a different user even when the email was the same. I set up a function in the users.ts file so that when a reset code is requested, the email is checked against existing users saved in the database and the password reset is sent through the matching account.
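
The core of that users.ts fix, matching an incoming reset request to an existing account by email, can be sketched as below. The types and names are assumptions for illustration, not the actual Convex code:

```typescript
// Illustrative sketch: resolve a password-reset request to one stored user,
// so repeated requests for the same email hit the same account.

interface StoredUser {
  _id: string;
  email: string;
}

// Normalize so "Jaz@Example.com" and "jaz@example.com" count as one user.
function findUserByEmail(users: StoredUser[], email: string): StoredUser | null {
  const needle = email.trim().toLowerCase();
  return users.find((u) => u.email.trim().toLowerCase() === needle) ?? null;
}

const users: StoredUser[] = [
  { _id: "u1", email: "jaz@example.com" },
  { _id: "u2", email: "nailat@example.com" },
];

console.log(findUserByEmail(users, "Jaz@Example.com")?._id); // → "u1"
console.log(findUserByEmail(users, "nobody@example.com")); // → null
```

In the real app this lookup would run as a Convex query against the users table before any reset code is emailed out.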

Accomplishments that we're proud of

I'm proud of finishing the user authentication so that each new user who logs in is stored in Convex; it took trial and error to get all the sign-in possibilities working. We are also proud that the app does most of what we wanted it to. It was one of our teammates' first hackathon, so it was nice to see what we were able to accomplish. We also managed to get the image generation working, made refinements to the app (which we thought we wouldn't have time for, but did), and got the app working on both laptop and mobile.

What we learned

We learned a lot from this project, such as how to handle OAuth and how to use a tool like Convex to keep track of data we want to store, as well as data we use to drive other aspects of the app. We also learned how costly image generation can be, which we did not expect coming into the project. Planning out the design was easier than we thought: we were just talking and the idea kind of came to us, and we noticed that having a plan in place for brainstorming let us quickly write ideas down and refine them by talking them through. It was quite tough to get everything we wanted into the app within 12 hours, but we mostly completed our goals for this hackathon and enjoyed the process as well.

What's next for Alter’d

Some upcoming changes we hope to add to the app's functionality: an image-generation model that doesn't have to be paid for, but is instead built for the sole purpose of this app! We'd also like the app to generate full outfits based on what would go well with the inputted piece of clothing, and to keep each user's session history in their profile, stored in the database. Another goal is to make the app more colorful, rather than the black and white it is as we write this, since we focused more on implementation than on having the most sophisticated UI.

Built With
