Inspiration

We were inspired by multiple sources, including YouTube videos describing autoencoders and their applications and papers found on Google Scholar. We also realized that if we wanted to help fight climate change, we needed a way to compress data more efficiently.

What it does

It compresses images, specifically images from the MNIST dataset. Each 28×28-pixel image (784 values) is encoded into a latent-space representation of just 4 floats, a nearly 200-fold compression. Despite that level of compression, the reconstructed images remain of good quality.
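As a minimal sketch of this kind of autoencoder (not our exact code; the hidden-layer sizes, activations, and training settings below are illustrative assumptions), a 784 → 4 → 784 model in Keras could look like this:

```python
# Hedged sketch of a 784 -> 4 -> 784 autoencoder for MNIST.
# Layer sizes, activations, and training settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 4  # 4 floats per image vs. 784 pixels -> ~196x compression

# Encoder: flatten the 28x28 image and compress it down to 4 latent values.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(LATENT_DIM),
])

# Decoder: reconstruct the 28x28 image from the 4-value latent code.
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT_DIM,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28)),
])

autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# MNIST pixels normalized to [0, 1]; the input doubles as the target.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))
```

After training, `encoder.predict` turns an image into its 4-float code and `decoder.predict` reconstructs an image from that code.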

How we built it

We built it using Google Colab and VS Code.

Challenges we ran into

We had a lot of difficulty creating an autoencoder model for a human-faces dataset; in the end, we discarded the idea because it was taking too much time.

Accomplishments that we're proud of

We're proud of having completed a model that compresses MNIST images nearly 200-fold. We're also proud of having built a web server that lets users upload images for analysis.
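The write-up doesn't name the web framework, so purely as a sketch (assuming Flask, and reusing the trained `encoder` and `decoder` from the autoencoder sketch above), an upload-and-analyze endpoint could look like this:

```python
# Minimal sketch of an image-upload endpoint, assuming Flask.
# The route name and the `encoder`/`decoder` models are illustrative assumptions.
import io
import numpy as np
from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    # Expect a grayscale image file in the "image" form field.
    file = request.files["image"]
    img = Image.open(io.BytesIO(file.read())).convert("L").resize((28, 28))
    x = np.asarray(img, dtype="float32") / 255.0  # shape (28, 28), values in [0, 1]

    # Encode to the 4-float latent code, then decode back to an image.
    latent = encoder.predict(x[None, ...])    # shape (1, 4)
    reconstruction = decoder.predict(latent)  # shape (1, 28, 28)

    return jsonify({
        "latent": latent[0].tolist(),
        "reconstruction": reconstruction[0].tolist(),
    })

if __name__ == "__main__":
    app.run(debug=True)
```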

What we learned

We learned the value of a good plan, and how much fun brainstorming as a team can be! We also learned a lot about deep learning, especially about the different types and applications of autoencoders.

What's next for Encode&Vironment

We plan to train the same architecture on larger, harder-to-learn datasets, such as a human-faces dataset. On the web-development side, we plan to let users modify their images by editing their latent-space representations.
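As a rough sketch of what that latent-editing feature could look like (reusing the hypothetical `encoder` and `decoder` from the sketch above; the choice of dimension and nudge size is purely illustrative):

```python
# Hedged sketch of latent-space editing: tweak one latent coordinate,
# then decode the modified code back into an image. Purely illustrative.
def edit_image(image, dim=0, delta=0.5):
    """`image` is a 28x28 NumPy array with values in [0, 1].
    Nudge latent coordinate `dim` by `delta` and return the decoded image."""
    latent = encoder.predict(image[None, ...])  # shape (1, 4)
    latent[0, dim] += delta                     # move along one latent axis
    return decoder.predict(latent)[0]           # shape (28, 28)
```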
