Recently one of our friends discovered he was color blind while playing Factorio and realizing he couldn't tell the difference between color-coded components. It was a real shock to him to learn that so many of the games he had been playing were trying to communicate so much information through color. A couple of the games he plays have dedicated color blind modes he could turn on, and those let him enjoy them far more. But many of his favorite games don't have this option, so we took this opportunity to build a program to assist him. In our research for this project, we discovered a great number of people with grievances about how many video games treat accessibility, which is often sorely needed. Because of this, we shifted our goals; instead of assisting only color blind gamers, we seek to help all gamers with disabilities.
What it does
Our program does several things. First, it can shift all colors on the display to be color blind friendly (you can select which type of color blindness is being corrected for). Second, it allows control remapping at the system level, so controls can be rebound regardless of whether a particular game supports it. This provides creative solutions and remedies for those who cannot or do not enjoy using a standard keyboard, and it enables alternative input methods such as one-handed joysticks and other specialty hardware. Our program also supports lowering the difficulty of single-player games by slowing down a game's internal clock, allowing gamers with cognitive disabilities to enjoy games at their own pace without frustration.
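To give a sense of what a color blind correction pass involves, here is a minimal sketch of the classic "daltonization" idea in Python: simulate what a protanopic viewer sees, take the information that was lost, and shift it into channels they can distinguish. The matrices below are common published approximations, not our actual filter, which hooks the display pipeline rather than processing pixel arrays.

```python
import numpy as np

# Simple RGB-space protanopia simulation matrix (a widely used
# approximation; values are illustrative, not from our implementation).
SIM_PROTANOPIA = np.array([
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
])

# Error-redistribution matrix from the classic daltonize technique:
# detail lost in the red channel is shifted into green and blue.
SHIFT = np.array([
    [0.0, 0.0, 0.0],
    [0.7, 1.0, 0.0],
    [0.7, 0.0, 1.0],
])

def daltonize(pixels: np.ndarray) -> np.ndarray:
    """Correct an (N, 3) float RGB array (0-255) for protanopia."""
    simulated = pixels @ SIM_PROTANOPIA.T   # what the viewer perceives
    error = pixels - simulated              # detail invisible to them
    corrected = pixels + error @ SHIFT.T    # move that detail elsewhere
    return np.clip(corrected, 0.0, 255.0)
```

For example, a pure red pixel `[255, 0, 0]` comes out with its red preserved but extra blue added, making it distinguishable from a green of similar brightness.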
How we built it
We built this program using Python as our main language, but due to the number of features we implemented, several other languages were needed. By utilizing numerous frameworks and libraries in creative ways, we were able to condense all of these accessibility features into one small application.
Challenges we ran into
While building the program, we ran into several of Windows' inherent limitations, and learning to work around them was a major challenge. We also initially had trouble remapping controller inputs, as an enormous number of events would be sent through at the slightest touch, but by implementing deadzones and rate limiters we were able to fix it. The color blind filters we are using were extremely difficult to hook into, and we spent the majority of our time figuring out how to apply them without requiring a full system reboot every time. Our speed controller also demanded a large amount of time and troubleshooting, as it relied on Lua and software we had never used before.
Accomplishments that we're proud of
We are extremely proud of our GUI. It was the first time we ever made something with Python that wasn't a command-line application, and we think it turned out very well. We are also proud of our control remapping scheme. While our difficulty changer wasn't very technically complex, we are proud of our creative use of external programs to solve the problem.
What we learned
We learned how to stitch together multiple languages into one unified application that addresses one large problem with several smaller functions. We learned about GUIs and how to communicate information to the end user more effectively. We also learned a lot about Windows' internal process system, and how it manages to support so many features without slowing to a crawl.
What's next for Helping Hand
In the future we would like to implement a more robust difficulty scaler by extracting the important parts of the open source software we used, allowing for more seamless integration. With more time, we would have gotten audio dampening and amplification working to help hearing-impaired players. Additionally, we would like to actually confirm support for alternative input devices; currently it is only theoretical, as we did not have any to test with. Even more exciting is the hope that we may one day be able to warn users of potentially seizure-inducing flashes before they happen by analyzing the GPU's frame buffer with an AI; since none of us knows anything about GPU scheduling, however, this is a more ambitious goal.