Inspiration

Illiteracy is a global issue. Locally, Baltimore consistently ranks as one of the nation's least literate cities. In 2016, five Baltimore schools did not have a single student who met the minimum requirements for reading proficiency. Further, 30% of adults in the city are considered "functionally illiterate," meaning their "reading and writing skills are inadequate to manage daily living and employment tasks that require reading skills beyond a basic level." An article published just this morning reported that 90% of low-income and minority students in the city cannot read proficiently.

Globally, 1 in 10 people are illiterate. From an economic standpoint, functional illiteracy holds economies back, costing the global economy an estimated $1.19 trillion each year. A lack of literacy in our communities is not an individualized issue. Poor literacy also limits a person's ability to engage in activities that require critical thinking and solid literacy and numeracy skills: comprehending governmental policies and voting in elections, making educated financial decisions such as buying a home and managing a mortgage, or completing a degree in higher education.

Today, more than 750,000,000 people cannot read this sentence.

At its root, this disparity in literacy stems from a lack of access to supplemental educational tools. Our team seeks to remedy this widespread issue by leveraging the power of Augmented Reality (AR) on commonly accessible devices.

What it does

The Stell.AR Speller application enables users to practice literacy in an intuitive, engaging, and effective environment. Using the camera on an iOS or Android phone, tablet, laptop, or desktop computer, users can scan a printed letter, and interactive 3D visualizations of words that begin with that letter pop up on screen. This engages users and familiarizes them with concepts related to that letter of the alphabet, encouraging word familiarity and literacy.

The app is best optimized for the iPhone X.

How we built it

The development of this project began with exploring the possibilities of Vuforia, an AR platform that is natively supported in Unity.

After much effort, we were able to overlay 3D visualizations atop an image. We accomplished this by first uploading the "target"/base images (in our case, the letters A, B, and C) to the Vuforia database.

Vuforia performs image processing and rates the identifiability of each uploaded image. Our target images received a score of 5/5, meaning they were ideal candidates that the software could likely identify with a consistently high success rate.

We then imported those target images into Unity 3D, along with publicly available 3D models from the Unity Asset Store. Each set of models corresponds to a letter: for the letter A, for example, the target image is paired with 3D models of an apple, acorn, ape, and airplane. These visuals were then placed atop the target images.
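In Unity, showing and hiding those models is driven by Vuforia's trackable-state callbacks. The script below is a minimal sketch of that wiring, based on the ITrackableEventHandler pattern in the Vuforia Unity SDK of that era; the class name LetterTargetHandler and the show/hide logic are illustrative, not our exact source.

```csharp
using UnityEngine;
using Vuforia;

// Shows the letter's 3D models while the printed letter (the Vuforia
// image target) is in view, and hides them when tracking is lost.
// Follows the ITrackableEventHandler pattern from the Vuforia Unity SDK.
public class LetterTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Toggle every model parented under this image target.
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}
```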

We wrote scripts in C# implementing gestures that let the user cycle through the various 3D models on-screen.
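A simplified sketch of that gesture handling, assuming the models for a letter are assigned in the Unity Inspector and shown one at a time (the class name ModelSwiper and the threshold value are illustrative):

```csharp
using UnityEngine;

// Cycles through the 3D models for a letter (e.g., apple, acorn, ape,
// airplane for "A") on a horizontal swipe, keeping one model active at
// a time.
public class ModelSwiper : MonoBehaviour
{
    public GameObject[] models;          // one model per word, set in the Inspector
    public float swipeThreshold = 50f;   // minimum swipe distance, in pixels

    private int current;
    private Vector2 touchStart;

    void Start()
    {
        if (models.Length > 0) Show(0);  // start on the first model
    }

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Began)
        {
            touchStart = touch.position;
        }
        else if (touch.phase == TouchPhase.Ended)
        {
            float deltaX = touch.position.x - touchStart.x;
            if (Mathf.Abs(deltaX) > swipeThreshold)
                Show(deltaX > 0 ? current + 1 : current - 1);
        }
    }

    // Wraps the index around so swiping past the last model loops back.
    void Show(int index)
    {
        current = (index + models.Length) % models.Length;
        for (int i = 0; i < models.Length; i++)
            models[i].SetActive(i == current);
    }
}
```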

This functionality was then built into an Xcode project and deployed to iOS. Because we used Unity, which can build for iOS, Android, and desktop Mac and PC, our application is usable across nearly every device.

Challenges we ran into

Our team sought to better support users with disabilities. We put immense effort into integrating sound effects into the 3D visualizations, as well as a swiping mechanism for viewing each AR element up close, but these features were not fully functional within the 36-hour timeframe. However, we are determined to continue to refine and innovate on the application that we have developed here at HopHacks!
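For reference, the direction we were exploring looked roughly like the sketch below: a sound effect when a model appears, plus a two-finger pinch to scale the active model. This is a hypothetical reconstruction, not our partial hackathon code.

```csharp
using UnityEngine;

// Hypothetical sketch of the accessibility features we attempted: an
// audio cue when a model appears, and a pinch gesture to scale the
// active model for a closer look.
[RequireComponent(typeof(AudioSource))]
public class ModelInspector : MonoBehaviour
{
    public float zoomSpeed = 0.01f;  // scale change per pixel of pinch

    void OnEnable()
    {
        // Play the word's sound effect (e.g., "apple") when shown.
        GetComponent<AudioSource>().Play();
    }

    void Update()
    {
        if (Input.touchCount != 2) return;

        // Standard pinch: compare current finger spacing to last frame's.
        Touch t0 = Input.GetTouch(0);
        Touch t1 = Input.GetTouch(1);
        float prevDist = ((t0.position - t0.deltaPosition) -
                          (t1.position - t1.deltaPosition)).magnitude;
        float currDist = (t0.position - t1.position).magnitude;

        // Clamp the per-frame change so the model cannot flip or vanish.
        float scale = Mathf.Clamp(1f + (currDist - prevDist) * zoomSpeed,
                                  0.5f, 1.5f);
        transform.localScale *= scale;
    }
}
```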

Accomplishments that we're proud of

Our team is composed of a diverse set of members who had not met before Friday. The collaboration, mutual support, and efficiency of our team have been beyond admirable.

The integration of open-source tools and public APIs into our project, most notably Vuforia, has been a great success, especially considering that none of our team members had prior experience with them.

Our user interface is another feature that we put a lot of consideration and diligence into. We wanted to make this application appealing, intuitive, and efficient for users of all ages and levels of technical skill.

Further, we are incredibly proud of the fact that our project runs! Interoperability (as well as blatant operability) is important to our group, and the fact that Stell.AR Speller operates efficiently across different devices and operating systems is a great accomplishment for our team.

What we learned

Throughout this hackathon, we learned countless lessons in implementing the tools that we utilized, especially with regard to Vuforia and its integration with Unity 3D.

We also learned that AR and humanitarian work operate in tandem far better than we could have fathomed. While using technology to alleviate a global issue may seem far-fetched to some, our group found that it made learning tools accessible to a larger audience and enabled us to tailor those tools to the needs of our users.

What's next for Stell.AR Speller

We have a multitude of ideas for ways to advance Stell.AR Speller.

As previously mentioned, we would like to fully implement sound effects and direct interaction with the 3D visualizations so that users can engage with the assets in a more dynamic way.

Exploring the AR capabilities of the iPhone X hardware and integrating Apple's ARKit, per the suggestion of the wonderful sponsors Mission:Data and the JHU Applied Physics Laboratory, are exciting directions to consider.

In the interest of making the best use of our time, we limited the app to the letters A, B, and C. A fundamental future improvement is expanding the app to support the entire English alphabet. We would also like to provide functionality for more robust linguistics, such as words, phrases, and sentence structure ("The cat chases the mouse" vs. "The mouse chases the cat"), along with animated visualizations to represent these. A sketch of how the letter-to-model content might scale appears below.
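One possible design for scaling past three letters, assuming content stays in Unity: a ScriptableObject table mapping each letter to its word models, so adding a new letter becomes data entry in the editor rather than new code. The names here (LetterLibrary, LetterEntry) are hypothetical.

```csharp
using UnityEngine;

// Hypothetical content table for the full alphabet: each letter maps to
// its word models, so supporting "D" means adding an entry, not a script.
[CreateAssetMenu(menuName = "StellAR/Letter Library")]
public class LetterLibrary : ScriptableObject
{
    [System.Serializable]
    public struct LetterEntry
    {
        public string letter;            // "A" through "Z"
        public GameObject[] wordModels;  // e.g., apple, acorn, ape, airplane
    }

    public LetterEntry[] entries;

    // Returns the models for a letter, or an empty array if unsupported.
    public GameObject[] ModelsFor(string letter)
    {
        foreach (LetterEntry e in entries)
            if (e.letter == letter) return e.wordModels;
        return new GameObject[0];
    }
}
```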

Built With

Unity, Vuforia, C#, Xcode
