Inspiration

The AI art industry.

What it does

The front-end receives a prompt from the user and sends a GET request to our custom server. The server forwards the prompt in a POST request to the OpenAI API, saves the resulting image and prompt information to a database, and then returns the DALL-E-generated images as URLs in the response to the original GET call. The front-end displays one image at a time; if the user is content with any of them, that image is added to the chat field. The image is also added to a front-end list of most recently used images; when this list grows past a maximum size, the last (oldest) image is popped.
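A minimal sketch of what the server endpoint for this flow might look like in Nest.js (the `images` route, the `n`/`size` parameters, and the commented-out database call are illustrative assumptions, not our exact code):

```typescript
// images.controller.ts — an illustrative sketch of the GET endpoint described above.
import { Controller, Get, Query } from '@nestjs/common';

@Controller('images')
export class ImagesController {
  // The front-end calls: GET /images?prompt=<user prompt>
  @Get()
  async generate(@Query('prompt') prompt: string): Promise<{ urls: string[] }> {
    // Forward the prompt to the OpenAI image-generation endpoint.
    const res = await fetch('https://api.openai.com/v1/images/generations', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({ prompt, n: 4, size: '512x512' }),
    });
    const body = await res.json();
    const urls: string[] = body.data.map((img: { url: string }) => img.url);

    // Save the prompt and image URLs to the database (persistence layer omitted).
    // await this.imagesRepository.save({ prompt, urls });

    // Return the generated image URLs in the response to the original GET call.
    return { urls };
  }
}
```

The capped recently-used list on the front-end boils down to a few lines. It is sketched here in TypeScript for consistency, though the real implementation lives in the Swift front-end, and `MAX_RECENT` is an assumed constant:

```typescript
// Keep the most recently used images, newest first, capped at MAX_RECENT.
const MAX_RECENT = 10; // assumed maximum; the real limit may differ
const recentImages: string[] = [];

function addRecentImage(url: string): void {
  recentImages.unshift(url); // newest image goes to the front
  if (recentImages.length > MAX_RECENT) {
    recentImages.pop(); // the oldest image gets popped off the end
  }
}
```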

How we built it

The front end was built in Swift using Xcode. The back end was built in TypeScript using Nest.js.
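For completeness, wiring a controller like the one sketched above into a runnable Nest.js server is a few lines of standard boilerplate (a generic sketch, not our actual file; the port choice is an assumption):

```typescript
// main.ts — standard Nest.js bootstrap.
import { Module } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { ImagesController } from './images.controller'; // the controller sketched above

// Register the controller on the root module.
@Module({ controllers: [ImagesController] })
class AppModule {}

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000); // port is an assumption
}
bootstrap();
```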

Challenges we ran into

Xcode seems to have a bug when testing on newer iPhone models, but we spent a lot of time debugging our own code before reaching that conclusion. The API responses also had a convoluted return type that was difficult to cast our variables to in Swift.
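For reference, the response body we had to decode has roughly this nested shape, sketched here as TypeScript interfaces for brevity (on the Swift side this corresponds to nested Codable structs; field names follow OpenAI's public image API rather than our exact wrapper types):

```typescript
// Approximate shape of the OpenAI image-generation response.
interface GeneratedImage {
  url: string; // temporary URL of one generated image
}

interface ImageGenerationResponse {
  created: number;        // Unix timestamp of the generation
  data: GeneratedImage[]; // one entry per generated image
}
```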

Accomplishments that we're proud of

Learning TypeScript.

What we learned

Programming in Swift, using Xcode, and programming in TypeScript with Nest.js.

What's next for iMeme

Bringing this integration to other platforms. This requires the platform to support third-party integrations; we found that WhatsApp, for example, didn't support this.
