My project is called: "How Various Activation Functions Affect the Output and Loss of a Neural Network"

The goal of my experiment was to find the optimal activation function for basic pattern recognition.

I believe my experiment can be scaled up to solve real-world environment and community problems, but it's important to optimize a model at the lowest level (like this) first, so that the most efficient method is already known when it's time to scale up.

By increasing pixel density, my model could, for example, recognize poisonous plants and wildlife within neighborhoods to keep children safe. There are frankly infinite use cases for pattern recognition.

My program was built using only Python, NumPy, and Pygame (for the GUI).
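For reference, the standard activation functions compared below can each be written in a few lines of NumPy. This is a minimal sketch of the textbook definitions; the custom JohnStep function isn't defined in this write-up, so it is omitted here:

```python
import numpy as np

def sigmoid(x):
    # Smooth squashing function: maps any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def binary_step(x):
    # Hard threshold at zero: outputs exactly 0 or 1
    return np.where(x >= 0, 1.0, 0.0)

def softplus(x):
    # Smooth approximation of ReLU: log(1 + e^x)
    return np.log1p(np.exp(x))

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))
print(binary_step(x))
print(softplus(x))
```

Binary Step is the only one of the three that is non-differentiable and saturates to exact 0/1 outputs, which is consistent with it reaching a loss of exactly 0.0 on a small binary pixel pattern.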

These were my findings (two runs per pattern):

| Activation Function | Slash Loss (Run 1) | Slash Loss (Run 2) | O Loss (Run 1) | O Loss (Run 2) | Average Loss |
| - | - | - | - | - | - |
| Sigmoid (2nd place) | 0.04308203 | 0.00474277 | 0.00051447 | 0.00021294 | 0.01213811 |
| Binary Step (Winner) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Softplus | 0.32320652 | 1.99036231 | 0.00069685 | 0.00076116 | 0.57875671 |
| JohnStep | 0.0 | 0.0 | 2.73948305e-40 | 3.4227851e-70 | TBC |
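To show how a single pixel-pattern loss like the ones above can be scored, here is a minimal NumPy sketch. The 3×3 "O" pattern and the mean-squared-error loss are assumptions for illustration, not necessarily the exact grid size or loss function used in my experiment:

```python
import numpy as np

# Hypothetical 3x3 target for the "O" pattern (1 = lit pixel), flattened
target = np.array([1, 1, 1,
                   1, 0, 1,
                   1, 1, 1], dtype=float)

# Hypothetical network output for the same pixels after training
output = np.array([0.97, 0.99, 0.96,
                   0.98, 0.02, 0.95,
                   0.99, 0.97, 0.96])

# Mean squared error: average of squared per-pixel differences
loss = np.mean((output - target) ** 2)
print(loss)
```

The closer the output gets to the target pattern, the closer this number gets to 0.0, which is why a loss of exactly 0.0 (Binary Step) wins the comparison.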
