(I did not make the logo myself; a fellow participant, who left the hackathon soon after, helped create it.)

Inspiration

Technology addiction is a worldwide epidemic. Take a look around and you'll likely see most people absorbed in their devices. Toddlers are growing up with their eyes glued to iPads, and their parents are oblivious to how damaging this is to child development. Meanwhile, my peers are sending out Snapchats practically every few minutes. While research shows associations between device usage and a slew of mental health issues (depression, attention deficits, lack of mindfulness), there is little awareness of how harmful over-usage is to the mind. Through Mind Screen, I hope not only to make users aware of how their device usage can hurt their well-being, but also to promote healthy lifestyle changes grounded in large-scale statistical analysis.

What it does

Some users will be willing to contribute insights into their own mental health (currently via the RST-PQ psychological battery), along with their specific "screen times" (the amount of time spent in each individual application). Using multiple regression analysis, it's possible to identify and predict patterns in the data (i.e., which apps influence which qualities, specifically traits like aggression, impulse control, and fight/flight reaction tendencies). With these insights, users can study and adjust their usage to drive meaningful self-improvement.
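To make that concrete, here is a minimal sketch of the kind of multiple regression involved, in plain JavaScript. The per-app hours and trait scores below are entirely made up, and the real analysis would run on far more data, but the mechanics are the same: solve the normal equations for the coefficients relating each app's usage to a trait score.

```javascript
// Minimal ordinary-least-squares sketch: predict a trait score (e.g., an
// RST-PQ impulsivity subscale) from per-app screen times. Hypothetical data.

// Each row of X: weekly hours on [Snapchat, Instagram, YouTube]; y: trait score.
const X = [
  [2.0, 1.5, 0.5],
  [0.5, 0.2, 3.0],
  [3.5, 2.0, 1.0],
  [1.0, 0.8, 0.3],
  [0.2, 0.1, 2.5],
];
const y = [14, 6, 18, 8, 5];

// Fit coefficients by solving the normal equations (X'X) b = X'y,
// with a leading column of 1s for the intercept.
function fitOLS(X, y) {
  const rows = X.map(r => [1, ...r]); // prepend intercept term
  const n = rows[0].length;
  // Build the augmented matrix [X'X | X'y].
  const A = Array.from({ length: n }, (_, i) =>
    Array.from({ length: n + 1 }, (_, j) =>
      rows.reduce((sum, r, k) => sum + r[i] * (j < n ? r[j] : y[k]), 0)
    )
  );
  // Gauss-Jordan elimination with partial pivoting.
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
    }
    [A[col], A[pivot]] = [A[pivot], A[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const factor = A[r][col] / A[col][col];
      for (let c = col; c <= n; c++) A[r][c] -= factor * A[col][c];
    }
  }
  return A.map((row, i) => row[n] / row[i][i]); // [intercept, b1, b2, b3]
}

// Larger |b_i| suggests app i tracks the trait more strongly in this sample.
console.log(fitOLS(X, y));
```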

How I built it

I used JavaScript/JSX and the React Native framework (so the app would be cross-platform), along with Expo for the development environment. The app also uses Firebase as its backend (including its authentication features) and MobX for state management.
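As a rough illustration of how these pieces fit together, here's a sketch of a MobX store persisting entries to Firebase. The store shape, entry format, and config values are placeholders rather than the actual app code, and it assumes the user has already signed in via Firebase auth:

```javascript
import firebase from 'firebase';   // Firebase BaaS: auth + realtime database
import { observable } from 'mobx'; // observable store drives observer components

// Placeholder config: the real values come from the Firebase console.
firebase.initializeApp({ apiKey: '...', authDomain: '...', databaseURL: '...' });

// Observable store holding the user's weekly screen-time entries. React Native
// components wrapped in mobx-react's observer() re-render when entries change.
const usageStore = observable({
  entries: [], // e.g. { app: 'Snapchat', minutes: 95 }
  addEntry(entry) {
    this.entries.push(entry);
    // Persist under the signed-in user's node (assumes prior authentication).
    const uid = firebase.auth().currentUser.uid;
    firebase.database().ref(`usage/${uid}`).push(entry);
  },
});

export default usageStore;
```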

Challenges I ran into

There is no programmatic access to the time spent on individual applications (on either iOS or Android), which means users must input their screen time somewhat manually. I devised a method whereby users screenshot their usage log (Settings > Battery > Last 7 Days) and import it into the application, where I hoped to use a text-from-image recognition library. I attempted to implement this with ocrad.js, tesseract.js, and several others. Sadly, the Expo setup I'm using doesn't support "linking", which is critical for these image recognition libraries to work in React Native.
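For reference, a minimal sketch of the flow I was aiming for. The recognize call follows tesseract.js's documented promise API (recent versions resolve to { data: { text } }), while the screenshot URI, parsing regex, and entry format are all illustrative assumptions:

```javascript
import Tesseract from 'tesseract.js';

// Intended flow: OCR the battery-usage screenshot, then parse lines like
// "Snapchat  1.5 hrs" into { app, minutes } entries. This is the step that
// failed under Expo, since the setup can't "link" native dependencies.
async function parseUsageScreenshot(imageUri) {
  const { data: { text } } = await Tesseract.recognize(imageUri, 'eng');
  const entries = [];
  for (const line of text.split('\n')) {
    const match = line.match(/^(.+?)\s+([\d.]+)\s*(hrs?|min)/i); // illustrative pattern
    if (!match) continue;
    const value = parseFloat(match[2]);
    entries.push({
      app: match[1].trim(),
      minutes: match[3].toLowerCase().startsWith('h') ? value * 60 : value,
    });
  }
  return entries;
}
```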

Accomplishments that I'm proud of

The UX is clean and clearly conveys the vision and the route to adding value to users' lives. The code works well. The idea itself is what I'm most proud of: for a long time, I've been concerned about people I love who spend an enormous amount of time on their devices. I believe that an app like this could help them regain elements of their mental health and truly enhance their lives. This is only the first version. Not only did I learn a lot while building the app, I also started a project that I think could fundamentally change smartphone user behavior for the better and safeguard against the harmful side of technological advancement.

What I learned

It's more important to express the idea than to do everything perfectly in the first go-around. For the first couple of hours, I would spend twenty minutes coding, then ten minutes refactoring and styling the code for clarity. Eventually I realized that this over-optimization was working against my goal of getting the first iteration of Mind Screen out. I'll definitely carry this mentality with me in the future: rapid prototyping will be a necessity, and it will be incredibly messy... but at least the ideas will get expressed.

What's next for Mind Screen

I'll return to school and conduct a research study using the RST-PQ psychological battery and a usage survey. Using this data, I'll develop the algorithm for providing insights into people's usage. After collecting the data and building analysis tools from it, I can re-implement this application without the Expo setup (meaning I'll be able to add image recognition AI and a whole list of other features that require native APIs not yet ported to JavaScript). Eventually, I hope that Apple and Google will open up their respective mobile operating systems to allow programmatic access to app usage data, which would let me create a more seamless Mind Screen user experience. Right now, users are reminded weekly (via a local notification, sketched below) to go into the app and import the screenshot. This is a burden on the user, and will almost certainly cost me a large portion of my potential user base. Overcoming this challenge would make the app accessible to those without the patience it currently requires... which means it could reach and benefit even more people.
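The weekly reminder itself is a local notification scheduled through Expo. Here's a minimal sketch using the notification API from Expo's SDK of that era; the copy text and initial delay are illustrative:

```javascript
import { Notifications, Permissions } from 'expo';

// Ask for notification permission, then schedule a repeating weekly reminder
// nudging the user to import their latest usage screenshot.
async function scheduleWeeklyReminder() {
  const { status } = await Permissions.askAsync(Permissions.NOTIFICATIONS);
  if (status !== 'granted') return;

  await Notifications.scheduleLocalNotificationAsync(
    {
      title: 'Mind Screen check-in',
      body: "Import this week's screen-time screenshot to keep your insights fresh.",
    },
    {
      time: Date.now() + 60 * 1000, // first fire one minute from now (illustrative)
      repeat: 'week',               // then repeat every week
    }
  );
}
```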
