Helping Hand

We created an AI-powered screen-analysis tool engineered specifically to help disabled and less experienced players get the most out of their time playing games.

Inspiration

The inspiration for this project came from recognizing the barriers that many players face when playing games. Whether due to physical disabilities, cognitive challenges, or a lack of gaming experience, some players struggle to enjoy games to their fullest potential. We wanted to create a tool that improves accessibility by helping players understand what is happening in their game.

What We Learned

Throughout the development process, we researched the challenges disabled players face. We learned about accessibility features already present in some games and designed our software to complement them rather than duplicate them. We also learned how to use a range of AI-specific APIs and web crawlers/scrapers, and how to integrate all of these together in Python.

How We Built It

The majority of our code was written in Python for consistency across all of our files. The first component we built was the web scraper: we used beautifulsoup4 in conjunction with a web crawler to crawl specific games' wikis. The scraped wiki data was then output to a JSON file.
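A minimal sketch of that scraping step, assuming a MediaWiki-style page layout (the `firstHeading` id, the `infobox` table class, and the sample HTML below are illustrative, not the project's actual selectors):

```python
import json
from bs4 import BeautifulSoup  # beautifulsoup4

# Stand-in for a page the crawler would fetch from a game wiki.
SAMPLE_HTML = """
<html><body>
  <h1 id="firstHeading">Diamond Ore</h1>
  <table class="infobox">
    <tr><th>Renewable</th><td>No</td></tr>
    <tr><th>Stackable</th><td>Yes (64)</td></tr>
  </table>
</body></html>
"""

def scrape_entry(html: str) -> dict:
    """Extract the page title and its infobox rows into a plain dict."""
    soup = BeautifulSoup(html, "html.parser")
    title = soup.find(id="firstHeading").get_text(strip=True)
    facts = {}
    for row in soup.select("table.infobox tr"):
        th, td = row.find("th"), row.find("td")
        if th and td:
            facts[th.get_text(strip=True)] = td.get_text(strip=True)
    return {"title": title, "facts": facts}

def save_database(entries: list, path: str) -> None:
    """Write all scraped entries out as the JSON database."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2)
```

In the real pipeline the crawler would feed many fetched pages through `scrape_entry` before `save_database` writes the combined JSON file.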

We created a general UI to control every setting from one simplified interface, including keybinds, screenshot settings and more. The backend for this was a screenshot function that would either capture the centre of the screen or allow the user to capture specific areas. We then used OpenAI's API to request an analysis of this image, which was compared against our database of blocks and items to find the closest match. We also used ElevenLabs AI for voice generation and Google's speech-recognition library to convert user speech into text, allowing the user to have conversations with the AI.
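Two small pieces of that pipeline can be sketched with the standard library: computing a centre-of-screen capture box, and fuzzy-matching the model's free-text label against the scraped database. The item names and helper names here are hypothetical:

```python
import difflib

def centre_bbox(screen_w: int, screen_h: int, frac: float = 0.5) -> tuple:
    """Bounding box (left, top, right, bottom) covering the centre `frac`
    of the screen, suitable for e.g. PIL.ImageGrab.grab(bbox=...)."""
    w, h = int(screen_w * frac), int(screen_h * frac)
    left, top = (screen_w - w) // 2, (screen_h - h) // 2
    return (left, top, left + w, top + h)

# Hypothetical item database, as loaded from the scraped wiki JSON.
KNOWN_ITEMS = ["diamond ore", "iron ore", "oak planks", "crafting table"]

def closest_match(label: str, candidates=KNOWN_ITEMS, cutoff: float = 0.6):
    """Map the model's free-text label onto the nearest known entry,
    or None when nothing is similar enough."""
    hits = difflib.get_close_matches(label.lower().strip(), candidates,
                                     n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

This tolerates small wording differences in the model's answer (plurals, capitalisation) without a hand-written alias table.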

After a match was found, we sent a request with an engineered prompt to get a concise explanation, then used the ElevenLabs API to generate a human-like text-to-speech readout of the output. We also used tkinter to create subtitles, ensuring each displayed word was matched to the spoken audio.
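The word-level subtitle timing can be sketched as a pure scheduling function; this version simply spreads the audio duration evenly across the words, which is an assumption rather than the project's actual sync method:

```python
def subtitle_schedule(text: str, audio_seconds: float) -> list:
    """Spread the audio duration evenly across words, returning
    (word, delay_ms) pairs that a tkinter Label can consume via
    widget.after(delay_ms, show_word)."""
    words = text.split()
    per_word_ms = audio_seconds * 1000 / len(words)
    return [(word, round(i * per_word_ms)) for i, word in enumerate(words)]
```

If the TTS service exposes per-word or per-character timing data, those timestamps could replace the even spacing for tighter sync.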

Challenges We Faced

  • Real-Time Processing: Ensuring that AI analysis and feedback were quick enough to avoid disrupting gameplay.
  • Wiki rate limits: Some wiki sites had menus that were difficult to scrape, or rate-limited us, meaning information was missed.
  • Understanding what information to pull: The wikis contained a lot of information, so we had to refine our search.
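One common mitigation for the rate-limit problem is a polite fetcher that pauses between requests and backs off exponentially on HTTP 429 responses. This is a generic sketch, not the crawler the team shipped:

```python
import time
import urllib.error
import urllib.request

def backoff_delays(retries: int, base: float = 1.0) -> list:
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * 2 ** i for i in range(retries)]

def polite_get(url: str, retries: int = 3, pause: float = 0.5) -> bytes:
    """Fetch a page, pausing between requests and backing off on HTTP 429.

    Hypothetical helper; a real crawler should also honour robots.txt.
    """
    delays = backoff_delays(retries)
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url) as resp:
                body = resp.read()
            time.sleep(pause)  # fixed pause to stay under the wiki's limit
            return body
        except urllib.error.HTTPError as exc:
            if exc.code == 429 and attempt < retries - 1:
                time.sleep(delays[attempt])  # wait longer after each 429
            else:
                raise
```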
