Inspiration

When I was 13, my stepdad had a severe stroke that affected the language center of his brain. He developed aphasia--a language and communication disorder that can follow a stroke or other brain trauma--and was left with the language ability of a toddler. He struggles to work his way back toward his former linguistic proficiency, which he may never fully regain. My mom is his full-time caretaker, but we are very low-income, so she cannot take any time away from work while also caring for him. She doesn't have the time to teach him language, and because of the aphasia he doesn't have the ability to communicate his needs: people with aphasia lose the neural connection to the wealth of nouns used in everyday communication. The good news is that pictures are extremely helpful for caregivers and people with aphasia to communicate back and forth, but my mom reports that she has not found an app or any other solution that offers good customizability, all the features she needs, and a user interface simple enough for someone like my stepdad to pick up quickly. So I decided to build an application that helps people with aphasia communicate with their caregivers.

What it does

My mom and I have been in this situation countless times: when my stepdad needs something, say, a glass of water, he knows he needs water, but he doesn't know that what he needs is called water, so he can't tell us that's what he needs. After the stroke, there is no longer a connection between nouns and _things_. Images work, but too much clutter on one screen is visually overwhelming, and it is impractical to sift through hundreds of photos to find the right one. My application, _Show To Tell_, organizes everything by category and sub-category to prevent on-screen clutter. Images appear one at a time, and the user moves to the next one by clicking or tapping a thumbs-down to indicate that the current image is not what they were looking for. This way, if my stepdad needs water, he can find a drinks category represented by a small collage of drink images, then step through each item until he lands on the one he wants--water.
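The one-image-at-a-time browsing described above can be sketched as a simple cycling list: thumbs-down advances to the next candidate in the current category and wraps around so the user can keep looking. The class and method names here are hypothetical, not the app's actual code.

```java
import java.util.List;

// Sketch of the one-at-a-time image browsing: a thumbs-down advances to
// the next candidate image in the current category, wrapping around so
// the user can cycle through until they find the right one.
public class ImageCycler {
    private final List<String> images; // image identifiers in one category
    private int index = 0;

    public ImageCycler(List<String> images) {
        this.images = images;
    }

    // The image currently shown on screen.
    public String current() {
        return images.get(index);
    }

    // Called when the user taps the thumbs-down: show the next image.
    public String thumbsDown() {
        index = (index + 1) % images.size();
        return current();
    }

    public static void main(String[] args) {
        ImageCycler drinks = new ImageCycler(List.of("juice", "coffee", "water"));
        System.out.println(drinks.current());    // juice
        System.out.println(drinks.thumbsDown()); // coffee
        System.out.println(drinks.thumbsDown()); // water
    }
}
```

Wrapping around (rather than stopping at the end of the list) matters for this audience: the user never hits a dead end and can simply keep tapping until the right image reappears.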

How we built it

The application uses JavaFX to implement a UI built almost entirely from photos. It uses a node-based data structure to sort images into categories, and changes are sent through observers back to the GUI for the user to interact with. The categories are built by a simple algorithm that parses a CSV file of categories and sub-categories, which should make it easy to add customization features later using file writers.
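A minimal sketch of that idea: parse category/sub-category rows from a CSV file into a node tree instead of hard-coding the hierarchy. The one-pair-per-line CSV layout and all names here are assumptions for illustration; the project's real file format and classes may differ.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: build a category tree from "parent,child" CSV rows, so the
// hierarchy lives in a data file rather than in code. Parents that have
// not been seen before are attached directly under the root.
public class CategoryTree {
    public static class Node {
        public final String name;
        public final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    public final Node root = new Node("root");
    private final Map<String, Node> byName = new HashMap<>();

    public CategoryTree() {
        byName.put("root", root);
    }

    // Each CSV line holds one "parent,child" pair (assumed format).
    public void addRow(String csvLine) {
        String[] parts = csvLine.split(",", 2);
        String parentName = parts[0].trim();
        String childName = parts[1].trim();
        Node parent = byName.computeIfAbsent(parentName, n -> {
            Node p = new Node(n);
            root.children.add(p); // new top-level category
            return p;
        });
        Node child = byName.computeIfAbsent(childName, Node::new);
        parent.children.add(child);
    }

    public Node find(String name) {
        return byName.get(name);
    }

    public static void main(String[] args) {
        CategoryTree tree = new CategoryTree();
        tree.addRow("drinks,water");
        tree.addRow("drinks,juice");
        System.out.println(tree.find("drinks").children.size()); // 2
    }
}
```

Because the tree is built from data, a caregiver-facing editor could later append rows to the same file with a file writer, which is exactly the customization path described above.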

Challenges we ran into

I am not very familiar with front-end development, so that was a challenge in itself. Getting the whole thing into working order in the short time available was another. Finding a way to build the first list of nodes (the broadest categories) and all of the sub-categories without hard-coding them, while leaving room for future customization features, was an interesting problem.

Accomplishments that we're proud of

This is a project I've wanted to build for my mom for a while, and overall I'm really proud that I was able to get a huge start on it. Because front-end development is outside my comfort zone, I am proud that I implemented a working UI that covers nearly all of the features I was aiming for.

What we learned

I learned a lot about JavaFX and, for a feature I have yet to finish, Python-to-Java communication. I also became more familiar with Git and GitHub--this was my first time making a repository--and I learned how to set one up for a JavaFX project in VS Code. I am also more comfortable now with the idea of starting projects.

What's next for Show To Tell

Small Features: adding subtle animations to make the app more visually interesting and engaging; adding audio to familiarize the user with each word and rekindle the noun-to-thing connection; cleaning up the UI layout.

Big Features: letting the caregiver add their own photos and categories; adding an auto-scroll feature; adding a setting that controls how many images appear on screen at once, to suit more or less proficient users.

Huge Features: adding a section to help with re-learning language through audio/word/image connections; letting the caregiver communicate with the user long-distance via speech recognition that searches the data for the right image (this feature is top priority).
