Inspiration:

I wanted to see if AI models could be smarter, not just bigger. Big models are accurate but slow and hard to run on phones or small devices. So I thought, why not make them leaner while keeping performance strong?

What it does:

PeroforatedAI uses Dendritic Optimization to trim unnecessary parameters in AI models. It trains both a baseline model and a dendritic model on CIFAR-10, and shows that we can reach nearly the same accuracy with far fewer active parameters.
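To give a feel for what "trimming unnecessary parameters" means, here is a minimal NumPy sketch that uses simple magnitude-based pruning as a stand-in; the actual Dendritic Optimization is more involved, and `prune_by_magnitude` is a hypothetical helper, not part of the project's code.

```python
import numpy as np

def prune_by_magnitude(w, keep_ratio=0.35):
    """Zero out the smallest-magnitude weights, keeping roughly
    `keep_ratio` of them active. Returns the pruned weights and the
    boolean mask of surviving (active) parameters."""
    flat = np.abs(w).ravel()
    k = max(1, int(flat.size * keep_ratio))          # number of weights to keep
    threshold = np.partition(flat, -k)[-k]           # k-th largest magnitude
    mask = np.abs(w) >= threshold
    return w * mask, mask

# Example: prune an 8x8 weight matrix down to ~35% active parameters.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned, mask = prune_by_magnitude(w, keep_ratio=0.35)
active = mask.mean()   # fraction of parameters still active
```

After pruning, only the surviving weights contribute to the forward pass, which is where the parameter savings come from.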

How we built it:

We built it with PyTorch and Torchvision for the models and datasets, and used NumPy for handling data.

main.py runs training and benchmarking

The model file defines both the baseline and dendritic models

The training module handles the actual learning

The benchmark module compares accuracy, sparsity, and speed
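The benchmark comparison described above could look roughly like the NumPy sketch below; the `sparsity` and `benchmark` names and the dummy model are hypothetical illustrations, not the project's actual API.

```python
import time
import numpy as np

def sparsity(weights):
    """Fraction of weights that are exactly zero across all layers."""
    total = sum(w.size for w in weights)
    zeros = sum(int(np.count_nonzero(w == 0)) for w in weights)
    return zeros / total

def benchmark(predict_fn, x, y, weights, n_runs=10):
    """Score a model on the three metrics: accuracy, sparsity, latency."""
    start = time.perf_counter()
    for _ in range(n_runs):
        preds = predict_fn(x)
    latency = (time.perf_counter() - start) / n_runs  # mean seconds per batch
    accuracy = float((preds == y).mean())
    return {"accuracy": accuracy,
            "sparsity": sparsity(weights),
            "latency_s": latency}

# Dummy stand-in model: always predicts class 0.
x = np.zeros((100, 3, 32, 32))              # CIFAR-10-shaped dummy inputs
y = np.zeros(100, dtype=int)                # dummy labels
w = [np.array([[0.0, 1.5], [0.0, -2.0]])]   # one layer, 50% zeros
report = benchmark(lambda batch: np.zeros(len(batch), dtype=int), x, y, w)
```

Running the same report for the baseline and dendritic models side by side is what makes the accuracy-vs-sparsity trade-off visible.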

Everything was done in VS Code, tracked with Git/GitHub, and I recorded the demo using OBS Studio.

Challenges we ran into:

Making the dendritic model efficient without slowing training was tricky. Keeping a good balance between sparsity and accuracy also took some experimentation, and figuring out how to show live training clearly for the demo was another challenge.

Accomplishments we’re proud of:

We managed to reduce active parameters by 65% while still getting almost the same accuracy as the baseline. The model is also fast to run, which makes it well suited for deployment on low-resource devices.

What we learned:

I learned a lot about smart pruning in neural networks, efficient model design, and how to benchmark AI models across multiple metrics like accuracy, speed, and sparsity.

What’s next for PeroforatedAI:

I want to try running it on phones or IoT devices and test it on bigger datasets. Eventually, integrating it with cloud platforms could make it scalable for real-world applications.
