Poster Link: https://docs.google.com/presentation/d/1n41vvlQYyJTonuBP2I-DDyIi4MK8NlEfrTjCqjNY15g/edit?usp=sharing
Reflection Link: https://docs.google.com/document/d/1gMAbyqdljt1xsmVv8e0ahEvhv6Pofx8ut0vI7KMAntA/edit?usp=sharing
Code Link: https://github.com/jnaik2/CSCI-1470-Final-Project-Jaideep-Matthias-Alex-

The paper’s objective is to restore old photos suffering from damage such as wrinkles, scratches, and holes. Although existing models can restore photos with milder degradation such as wrinkles, they often perform poorly on harsher degradation such as scratches and holes. The cause is the domain gap between synthetic old images (new photos that are artificially degraded) and real old photos, which prevents these networks from generalizing. To address this, the paper proposes a “triplet domain” translation network: two VAEs are trained to map old photos and clean photos into two separate latent spaces, and the translation between these latent spaces is learned from the generated synthetic paired images. Performing the translation in latent space closes the domain gap, which lets the model generalize better and thus outperform other models in many scenarios.
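The triplet-domain pipeline described above can be sketched with toy linear stand-ins. This is a minimal, untrained sketch under assumed shapes: the real model uses convolutional VAEs and a deep mapping network, and all names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, Z = 8, 4  # toy image and latent dimensionalities (hypothetical)

# VAE 1: maps old/degraded photos (real and synthetic) into latent space Z_old.
W_enc_old = rng.normal(size=(D, Z))
# VAE 2: maps clean photos into latent space Z_clean.
W_enc_clean = rng.normal(size=(D, Z))
W_dec_clean = rng.normal(size=(Z, D))
# Translation network T: Z_old -> Z_clean, learned from synthetic paired images.
T = rng.normal(size=(Z, Z))

def restore(old_photo):
    # Encode the degraded photo into the old-photo latent space.
    z_old = np.tanh(old_photo @ W_enc_old)
    # Translate between latent spaces (this is where the domain gap is closed).
    z_clean = np.tanh(z_old @ T)
    # Decode in the clean-photo space to produce the restored image.
    return z_clean @ W_dec_clean

x = rng.normal(size=D)      # stand-in for a flattened old photo
restored = restore(x)       # stand-in for the restored photo, shape (D,)
```

The key design point the sketch captures is that restoration happens via latent-to-latent translation rather than direct pixel-to-pixel mapping, so both real and synthetic degraded photos share the same input latent space.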
