Inspiration
1) The Brexting project could be used for communication by the differently-abled. 2) The VoxNet project could help people jam even when they don't have instruments.
What it does
1) Brexting - We train a Convolutional Neural Network with an adversarial discriminator on the BNCI Horizon 2020 dataset. The dataset contains brain-wave recordings for motor tasks such as moving the arms, legs, and tongue. Once the network is trained, we implement the inference stage from scratch in C on an embedded system (an FPGA), which makes the system portable.
2) VoxNet - We train an adversarial autoencoder on the Magenta NSynth dataset. The dataset contains instrument labels (violin, keyboard, etc.) and pitch information. To jam without instruments, you hum a sound and VoxNet classifies it as an instrument; the corresponding instrument-and-pitch sound is then produced.
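A minimal sketch of the hum-to-instrument idea. The names and the classifier here are illustrative stand-ins: the real system extracts features with librosa and uses the trained adversarial autoencoder's latent space, whereas this toy version uses coarse FFT band energies and a nearest-centroid lookup.

```python
import numpy as np

def spectral_features(audio, n_bands=8):
    """Toy stand-in for librosa features: mean magnitude spectrum in coarse bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([b.mean() for b in bands])
    return feats / (feats.sum() + 1e-9)  # normalize so loudness doesn't matter

def classify_hum(audio, centroids):
    """Map a hummed sound to the instrument whose centroid is closest.
    `centroids` maps instrument name -> feature vector (in the real system these
    would come from the trained encoder; hard-coded here for illustration)."""
    f = spectral_features(audio)
    return min(centroids, key=lambda name: np.linalg.norm(f - centroids[name]))

# Synthetic demo: a pure low tone vs. a bright, overtone-rich tone.
sr = 16000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 220 * t)                              # fundamental only
bright = sum(np.sin(2 * np.pi * 220 * k * t) for k in range(1, 8))  # many harmonics
centroids = {"keyboard": spectral_features(low), "violin": spectral_features(bright)}
print(classify_hum(low, centroids))  # -> keyboard
```

Once the instrument class is known, the matching instrument/pitch sample can be played back, which is the jamming step described above.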
How we built it
1) Brexting - We trained the network in Torch and implemented the inference on an FPGA. 2) VoxNet - We trained the network using PyTorch and preprocessed the audio signals with the librosa library in Python.
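Since the Brexting inference runs from scratch in C on the FPGA, a loop-level reference for the core operation helps convey what gets ported. This is a generic 2-D "valid" convolution plus ReLU written in Python for readability, not the project's actual network or its C code; the nested loops translate directly into C.

```python
import numpy as np

def conv2d_valid(x, kernel, bias=0.0):
    """Reference 2-D 'valid' convolution (really cross-correlation, as in most
    deep-learning frameworks), written loop-by-loop so it maps directly to C."""
    xh, xw = x.shape
    kh, kw = kernel.shape
    out = np.zeros((xh - kh + 1, xw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            acc = bias
            for a in range(kh):
                for b in range(kw):
                    acc += x[i + a, j + b] * kernel[a, b]
            out[i, j] = acc
    return out

def relu(x):
    return np.maximum(x, 0.0)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])  # simple diagonal-difference kernel
y = relu(conv2d_valid(x, k))
print(y.shape)  # (3, 3)
```

On the FPGA side, the same structure applies, but the trained weights are baked in as fixed arrays and the floating-point accumulation may be replaced with fixed-point arithmetic.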
Challenges we ran into
1) Not enough memory on the CPU/GPU 2) Slow WiFi while downloading the datasets 3) Hyperparameter selection
Accomplishments that we're proud of
1) Data handling - Since the datasets in both projects are non-standard, we had a lot of difficulty loading them into Torch and PyTorch data pipelines with limited CPU memory. 2) The Brexting project could be used for communication by the differently-abled. 3) The VoxNet project could help people jam even when they don't have instruments.
What we learned
1) Autoencoders 2) FPGA programming
What's next for 1) Brexting 2) VoxNet
1) Brexting - Make it even more real-time with hardware acceleration 2) VoxNet - Incorporate pitch information as well, using discriminators
