Inspiration
Whilst many high-level libraries exist to facilitate the creation of neural network models, our focus was to build a low-level, high-performance solution using C++, CUDA, multithreading, and custom optimisation techniques.
What it does
Our neural network model accurately recognises handwritten alphabetical characters from bitmap images sourced from the MNIST database (as curated by the authors of Neural Network Template), using predefined weights and biases.
How we built it
Our model implements a simple perceptron: it multiplies the input by a weight matrix, adds biases, and passes the result through an activation function. We wrote custom functions and data structures and applied targeted optimisation techniques to maximise performance. To get the most out of our implementation, we used profilers to identify the functions and methods that were most costly in time, and went through many variations to reach what we currently have. After each optimisation, we also benchmarked our low-level implementation against a high-level implementation built with the Eigen library.
Challenges we ran into
Some of the most significant challenges that we faced include:
- Race conditions introduced from attempts to implement a function using multithreading
- Learning the CUDA language from scratch
- Resolving unfamiliarities with the C++ language
- Setting up the appropriate environment for testing
Accomplishments that we're proud of
This project has been quite the journey, with every member taking on the task of learning many new concepts, such as multithreading, libraries (e.g. Eigen), and new languages (C++ and CUDA), with limited resources at hand. However, we are most proud of our effective team coordination and the work we produced through hard work and great communication.
What we learned
- Multithreading
- CUDA
- C++
- Makefile
- Specific Libraries
- Optimisation concepts (e.g. AVX-512, FMA ...)
- Computing hardware (e.g. caches, registers ...)
- Virtual Machine, Containerisation
- New Neural Network concepts