Inspiration

We were inspired by our undergraduate Signals and Systems class, where we learned how the Fourier Transform decomposes a signal into its individual frequencies to reveal its frequency-domain representation. We also chose an FPGA project because FPGA development is what we want to do as a career.

What it does

Our FPGA-based Audio Spectrum Visualizer captures live audio from a microphone, performs a Fast Fourier Transform (FFT) on the signal, and displays the frequency spectrum on an HDMI monitor. The visualizer converts the amplitude of each frequency bin into a visual bar that responds dynamically to music or sound in real time. All of this signal processing happens entirely in FPGA fabric, with no software involved.
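The core idea can be sketched in a few lines of software (illustrative only; the real design does this in FPGA fabric, and the frame length, bar count, and pixel height here are assumptions, not the project's actual parameters):

```python
# Sketch of the visualizer's core idea: FFT an audio frame, take per-bin
# magnitudes, and scale them to on-screen bar heights.
import numpy as np

FRAME_LEN = 1024        # samples per FFT frame (assumed)
NUM_BARS = 64           # on-screen bars (assumed)
MAX_BAR_HEIGHT = 480    # bar height in pixels (assumed)

def frame_to_bars(frame: np.ndarray) -> np.ndarray:
    """Map one audio frame to bar heights in pixels."""
    spectrum = np.abs(np.fft.rfft(frame))           # magnitude per frequency bin
    # Group bins into NUM_BARS bands and take the peak of each band.
    bands = np.array_split(spectrum[1:], NUM_BARS)  # drop the DC bin
    peaks = np.array([b.max() for b in bands])
    # Normalize the peaks to pixel heights.
    return (peaks / peaks.max() * MAX_BAR_HEIGHT).astype(int)

# A 1 kHz tone sampled at 48 kHz lights up a single low-frequency bar.
t = np.arange(FRAME_LEN) / 48000
bars = frame_to_bars(np.sin(2 * np.pi * 1000 * t))
```

In hardware the same steps become fixed-function pipeline stages rather than function calls, but the data flow is identical.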

How we built it

We implemented the project in SystemVerilog, a hardware description language used to build and scale digital circuits in code. We targeted the Zybo Z7 board, which pairs a Zynq FPGA with an audio codec (including an analog-to-digital converter and 3.5 mm jacks) and an HDMI output.

For the hardware design, we built the data path from the following pieces:

  • an I2C master to configure the audio codec
  • a Hann window to smooth the incoming sample stream before the FFT
  • a FIFO to carry samples from the I2S receiver into the sample buffer
  • an FFT IP core
  • two custom output BRAMs
  • a magnitude calculator and a decibel-conversion module
  • an HDMI IP core

The modules communicate over the AXI-Stream protocol, and the data path is pipelined and optimized for timing to sustain continuous real-time display.
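The reason a Hann window sits at the front of an FFT pipeline is spectral leakage: a tone that falls between FFT bins smears energy across the whole spectrum unless each frame is tapered first. A minimal software demonstration (frame length and tone frequency are arbitrary choices for illustration):

```python
# Demonstrate spectral leakage and how a Hann window suppresses it.
# A tone at 10.5 cycles per frame falls exactly between bins 10 and 11,
# the worst case for leakage with a plain (rectangular) frame.
import numpy as np

N = 1024
t = np.arange(N)
tone = np.sin(2 * np.pi * 10.5 * t / N)

rect = np.abs(np.fft.rfft(tone))                  # no window
hann = np.abs(np.fft.rfft(tone * np.hanning(N)))  # Hann-windowed

# Energy far from the tone (bins 100 and up) drops sharply with the window.
leak_rect = rect[100:].sum()
leak_hann = hann[100:].sum()
```

In hardware this becomes one multiply per sample against precomputed window coefficients, which is cheap compared with the visual artifacts it prevents.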

Challenges we ran into

  • We had to read the SSM2603 (audio codec) datasheet and understand the configuration required for ADC-only operation.
  • We had to pre-calculate the complex math required for pre- and post-processing of the FFT data, including magnitude calculation, square root, and logarithmic functions; these simplified to a LUT-based approach.
  • We had to navigate multiple clock domains across different IPs and communication protocols.
  • We had to create a multi-stage pipeline and optimize our code to shorten the critical timing path and run the board at a higher clock frequency.
  • We had to learn the standardized protocol for communicating between IPs (AXI-Stream) and integrate our modules with it.
  • We had to work out the calculations that map FFT bins to screen coordinates and intensities within the HDMI pixel pipeline to smooth out the visuals on the monitor.
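The LUT approach mentioned above can be illustrated in software: instead of evaluating a logarithm in the datapath, precompute the answers once into a table and reduce the runtime work to a single lookup. Table depth, magnitude width, and scaling below are assumptions for illustration, not the project's actual parameters:

```python
# Software analogy of a hardware LUT: precompute 20*log10(magnitude) into a
# table indexed by the top bits of the magnitude, so the datapath never
# computes a logarithm. Depth and bit widths are assumed values.
import math

LUT_BITS = 10                      # table depth: 1024 entries (assumed)
MAG_BITS = 16                      # magnitude word width in bits (assumed)
STEP = 1 << (MAG_BITS - LUT_BITS)  # magnitudes covered per table entry

# Entry i holds the dB value at the midpoint of its magnitude range.
db_lut = [
    20 * math.log10(max(i * STEP + STEP // 2, 1))
    for i in range(1 << LUT_BITS)
]

def mag_to_db(mag: int) -> float:
    """Approximate 20*log10(mag) with one table lookup (no log in the datapath)."""
    index = min(mag >> (MAG_BITS - LUT_BITS), (1 << LUT_BITS) - 1)
    return db_lut[index]

# Compare the lookup against the exact value for a mid-range magnitude.
approx = mag_to_db(20000)
exact = 20 * math.log10(20000)
```

On the FPGA the table would live in BRAM; the trade-off is a small quantization error (largest at low magnitudes, where the log curve is steepest) in exchange for a trivial critical path.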

Accomplishments that we're proud of

  • We created fully hardware-based, real-time spectrum analysis with no CPU involvement or software processing.
  • We built a clean, modular SystemVerilog pipeline that integrates I2C, audio, DSP, and video systems across clock domains.
  • We learned how to bridge digital signal processing (DSP) and FPGA graphics techniques into a cohesive real-time system.

What we learned

We learned:

  • smoothing techniques for continuous signals
  • fundamental FFT concepts (as a review)
  • approximations for magnitude calculations
  • how to pre-process complex math functions into LUTs on the FPGA
  • the importance of pipelining and resource balancing for improved throughput
  • IP integration
  • that FPGA development is hard, slow, and not well suited to a hackathon environment

What's next for FPGA-based Audio Spectrum Visualizer

We will get it done soon.

Built With

  • fpga
  • systemverilog
  • vivado