Inspiration

There is a fantastic GitHub repo called GANimation, whose algorithm enables anyone to edit people's expressions in photos. We saw the possibility of taking this complicated but powerful technology and sharing its magic with everyone. That is why we created SayCheese: to make the world a happier, cheesier place.

What it does

It edits expressions in photos and paintings, making people smile!

How we built it

The task is divided into three parts:

  1. set up the model and its interface (Python; the GAN model is pre-trained on AWS),
  2. set up the server (Flask, AWS),
  3. build an iOS app (Swift).
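As a rough illustration of part 2, the server can be sketched as a single Flask endpoint that accepts an uploaded image and returns the edited result. This is a minimal sketch, not the project's actual code: the route name `/smile` and the helper `apply_smile` are hypothetical placeholders for the GANimation inference step.

```python
# Minimal Flask server sketch for the SayCheese backend.
# `apply_smile` is a hypothetical stand-in for the GAN inference call.
import io

from flask import Flask, request, send_file

app = Flask(__name__)


def apply_smile(image_bytes: bytes) -> bytes:
    # Placeholder: run the pretrained GANimation model here and
    # return the edited image bytes. Echoes the input in this sketch.
    return image_bytes


@app.route("/smile", methods=["POST"])
def smile():
    # Expect a multipart upload under the field name "image".
    if "image" not in request.files:
        return {"error": "no image uploaded"}, 400
    edited = apply_smile(request.files["image"].read())
    return send_file(io.BytesIO(edited), mimetype="image/jpeg")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The iOS app would then POST the selected photo to this endpoint and display the returned image.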

We worked closely to make sure every part works well both on its own and with the other parts.

What's next for SayCheese

  • adjust the DL model to improve its performance,
  • optimize the back and front ends to make them run faster,
  • refine the app's UI to make it more elegant.

Introduction to SayCheese

SayCheese:

In this project, we use Albert Pumarola's novel GAN conditioning scheme and build an easy-to-use iOS app that can edit people's facial expressions in both photos and paintings, bringing them a smile. See our home page at Devpost.

Presently, our model can generate natural-looking smiles in different styles and intensities for a wide variety of faces, and of course we are working on improving the model's performance.

Check out the repo for our iOS app. For a quick tour of the app's usage:

  1. take a picture within the app, or open one from your album,
  2. choose the face you want to SayCheese,
  3. wait a moment to see the magic, and it's done!

Now you can choose the smiling face you like most from two main categories: big smile and small smile.

Demo: SayCheese for photos

Demo: SayCheese for paintings

Prerequisites

  • Install PyTorch (we use version 1.0.0), torchvision, and dependencies from http://pytorch.org
  • Install requirements.txt (pip install -r requirements.txt)

Run

First, put the pretrained model(s) anywhere you like. They are files named net_epoch_#epoch_id_G.pth and net_epoch_#epoch_id_D.pth (#epoch_id refers to the index of the epoch).

To run the demo:

python feedforward.py \
--model_path path/to/pretrained_model \
--load_epoch index_of_epoch_for_the_model \
--img_path path/to/img
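Given the checkpoint naming convention above, a script like feedforward.py would build the generator and discriminator file paths from --model_path and --load_epoch before loading them. The helper below is a small sketch of that path-building step, assuming --model_path points to a directory; the function name is ours, not part of the original code.

```python
import os


def checkpoint_paths(model_path: str, load_epoch: int) -> tuple:
    """Build the checkpoint file names used by the demo, following the
    net_epoch_#epoch_id_G.pth / net_epoch_#epoch_id_D.pth convention."""
    g = os.path.join(model_path, f"net_epoch_{load_epoch}_G.pth")
    d = os.path.join(model_path, f"net_epoch_{load_epoch}_D.pth")
    return g, d
```

For example, `checkpoint_paths("path/to/pretrained_model", 30)` yields the generator and discriminator files for epoch 30; only the generator is needed at inference time.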

Citation

Our idea and work are based on Albert Pumarola's GANimation. For more information about the model please refer to the [Project] and [paper].

@inproceedings{pumarola2018ganimation,
    title={GANimation: Anatomically-aware Facial Animation from a Single Image},
    author={A. Pumarola and A. Agudo and A.M. Martinez and A. Sanfeliu and F. Moreno-Noguer},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    year={2018}
}
