Inspiration
We wanted to work on a computer-vision-related project, and Gaussian blur is a classic image-processing algorithm that maps well onto an FPGA.
What it does
It takes an input image and outputs a blurred, smoothed version of it by convolving the image with a Gaussian kernel.
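The per-pixel operation can be sketched in software. This is a minimal Python illustration of a 3x3 Gaussian blur (our assumption of a typical kernel, not the project's actual HDL), mirroring the sliding-window convolution an FPGA pipeline performs for each output pixel:

```python
# 3x3 binomial approximation of a Gaussian kernel; the weights sum to 16,
# so dividing by 16 keeps overall brightness unchanged (a cheap right-shift
# by 4 in hardware).
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def gaussian_blur(img):
    """Blur a 2D list of grayscale pixels; borders replicate edge pixels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for ky in range(-1, 2):
                for kx in range(-1, 2):
                    # Clamp coordinates at the image borders.
                    py = min(max(y + ky, 0), h - 1)
                    px = min(max(x + kx, 0), w - 1)
                    acc += KERNEL[ky + 1][kx + 1] * img[py][px]
            out[y][x] = acc // 16  # normalize by the kernel sum
    return out

# A single bright pixel spreads into its neighbours after blurring.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 160
blurred = gaussian_blur(img)
```

In hardware the same window is typically fed by line buffers that hold the previous two rows, so each pixel is read from memory only once.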
How we built it
We started with a block diagram of our inputs, outputs, and processing blocks, then reduced it to a black-box diagram showing only the inputs and the desired outputs. We implemented the black box with help from LLMs such as ChatGPT, Gemini, and Claude.
Challenges we ran into
One challenge we ran into was getting our code to run on the actual DE1-SoC board; we suspect a problem with the VGA port driver. Our workaround was to roll back to an older version of the code that runs successfully in digital simulation.
Accomplishments that we're proud of
We are proud that our design can actually blur images in our digital simulation tests.
What we learned
We learned the workflow of an FPGA project, from coming up with ideas, through implementation, to simulation and debugging.
What's next for FPGA_Gaussian_Blur
We will dig into the specific logic blocks and RAM usage of this project, then study how to optimize the logic so that we could eventually build an ASIC that performs the same task in less area, with lower latency and more dedicated logic for faster computation.
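One optimization worth studying (our suggestion, not something already implemented here) is that a Gaussian kernel is separable: the 2D kernel factors into the outer product of a 1D kernel with itself, so a k x k convolution can be replaced by a horizontal pass plus a vertical pass, cutting multiplies per pixel from k^2 to 2k and reducing multiplier count in silicon. A Python sketch of the idea:

```python
ROW = [1, 2, 1]  # 1D binomial kernel; its outer product gives the 3x3 kernel

# Verify the factorisation: kernel_2d[y][x] == ROW[y] * ROW[x]
kernel_2d = [[ry * rx for rx in ROW] for ry in ROW]
assert kernel_2d == [[1, 2, 1], [2, 4, 2], [1, 2, 1]]

def blur_1d(line, row=ROW):
    """One 1D pass with edge clamping; normalize by sum(row) == 4."""
    n = len(line)
    return [sum(row[i + 1] * line[min(max(x + i, 0), n - 1)]
                for i in range(-1, 2)) // 4
            for x in range(n)]

def separable_blur(img):
    """Horizontal pass on each row, then vertical pass on each column."""
    rows = [blur_1d(r) for r in img]                    # horizontal pass
    cols = [blur_1d(list(c)) for c in zip(*rows)]       # vertical pass
    return [list(r) for r in zip(*cols)]                # transpose back
```

In hardware the two passes pipeline naturally: the horizontal filter feeds a short line buffer, and the vertical filter consumes its output, trading a little latency for far fewer multipliers.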