(I did not make the logo – a teammate who left the hackathon soon afterward helped create it)
Technology addiction is a worldwide epidemic. Take a look around and you'll likely see most people absorbed in their devices. Toddlers are growing up with their eyes glued to iPads, and their parents are oblivious to how damaging this is to child development. Meanwhile, my peers are sending out Snapchats practically every few minutes. While research shows associations between device usage and a slew of mental health issues (depression, attention deficits, lack of mindfulness), there is little awareness of how harmful overuse is to the mind. Through Mind Screen, I hope not only to make users aware of how their device usage can hurt their well-being, but also to promote healthy lifestyle changes based on large-scale statistical analysis.
What it does
Some users will be willing to contribute insights into their own mental health (currently in the form of the RST-PQ psychological battery), along with their specific "screen times" (the amount of time spent on each individual application). Using multiple regression, it's possible to identify and predict patterns in the data (i.e., which apps influence different qualities, specifically those relating to aggression, impulse control, fight-or-flight tendencies, etc.). Using these insights, users can study and alter their usage to drive critical self-improvement.
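The regression step above can be sketched in plain JavaScript. This is a minimal illustration, not the app's actual code: the function name, the synthetic screen-time data, and the "impulsivity score" outcome are all assumptions. It fits coefficients by solving the normal equations directly, which is fine for the handful of predictors involved here.

```javascript
// Minimal multiple-regression sketch: fit coefficients b for y ≈ X·b
// by solving the normal equations (XᵀX) b = XᵀY with Gauss-Jordan elimination.
// fitRegression and the sample data below are illustrative assumptions.
function fitRegression(X, y) {
  const n = X.length, p = X[0].length;
  // Build the augmented matrix [XᵀX | Xᵀy]
  const A = Array.from({ length: p }, (_, i) =>
    Array.from({ length: p + 1 }, (_, j) => {
      let s = 0;
      for (let k = 0; k < n; k++) s += X[k][i] * (j < p ? X[k][j] : y[k]);
      return s;
    })
  );
  // Gauss-Jordan elimination with partial pivoting
  for (let col = 0; col < p; col++) {
    let pivot = col;
    for (let r = col + 1; r < p; r++)
      if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
    [A[col], A[pivot]] = [A[pivot], A[col]];
    for (let r = 0; r < p; r++) {
      if (r === col) continue;
      const f = A[r][col] / A[col][col];
      for (let c = col; c <= p; c++) A[r][c] -= f * A[col][c];
    }
  }
  return A.map((row, i) => row[p] / A[i][i]); // coefficient vector
}

// Columns: [intercept, hours on app A, hours on app B]; y: a subscale score.
// Synthetic data generated from y = 1 + 2·hoursA + 3·hoursB.
const X = [
  [1, 2, 0.5],
  [1, 4, 1.0],
  [1, 1, 0.2],
  [1, 5, 2.0],
];
const y = [6.5, 12, 3.6, 17];
const beta = fitRegression(X, y);
// beta ≈ [1, 2, 3] for this synthetic data
```

In the real app, each row would come from one user's reported screen times, and each RST-PQ subscale would get its own regression; the sign and size of each coefficient would then suggest which apps are associated with that trait.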
How I built it
Challenges I ran into
There is no programmatic access to the time spent on individual applications (on either iOS or Android), which means that users must input their screen times somewhat manually. I devised a method whereby users would screenshot their usage log (Settings > Battery > Last 7 Days) and import it into the application, where I hoped to use a text-from-image recognition library. I attempted to implement this with ocrad.js, tesseract.js, and several others. Sadly, the Expo setup I'm using doesn't support "linking", which is critical for these image recognition libraries to work in React Native.
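Whether the text comes from OCR or from the manual-entry fallback, it still has to be parsed into per-app minutes. A hypothetical parser might look like the sketch below; the "AppName 2h 14m" line format is an assumption about how the usage log reads, and `parseUsageLine` is not part of the actual codebase.

```javascript
// Hypothetical parser for usage-log lines like "Snapchat 2h 14m".
// The line format is an assumed approximation of the iOS battery screen.
function parseUsageLine(line) {
  const m = line.match(/^(.+?)\s+(?:(\d+)h)?\s*(?:(\d+)m)?$/);
  if (!m) return null; // line didn't match the expected format
  const hours = parseInt(m[2] || '0', 10);
  const mins = parseInt(m[3] || '0', 10);
  return { app: m[1].trim(), minutes: hours * 60 + mins };
}

// Example input, as it might arrive from OCR or manual entry:
const entries = ['Snapchat 2h 14m', 'Instagram 45m', 'Safari 1h']
  .map(parseUsageLine);
// → [{ app: 'Snapchat', minutes: 134 }, { app: 'Instagram', minutes: 45 },
//    { app: 'Safari', minutes: 60 }]
```

Normalizing everything to minutes keeps the regression inputs in one consistent unit regardless of how the raw times were written.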
Accomplishments that I'm proud of
The UX is clean and clearly portrays the vision and the route to adding value to users' lives. The code works well. The idea itself is what I'm most proud of: for a long time, I've been concerned about people I love who spend an insane amount of time on their devices. I believe that an app like this could help them regain elements of their mental health and truly enhance their lives. This is only the first version. Not only did I learn a lot during the process of creating the app, I also got started on a project that I think could fundamentally change smartphone user behavior for the better, and guard against the harmful side of technological advancement.
What I learned
It's more important to express the idea than it is to do everything perfectly in the first go-round. For the first couple of hours, I would spend twenty minutes coding and then ten minutes refactoring and styling the code for clarity. Eventually I realized that over-optimization worked against my goal of shipping this first iteration of Mind Screen. I'll definitely carry this mentality with me in the future, as I'll need to do a lot of rapid prototyping and it will be incredibly messy... but at least the ideas will get expressed.
What's next for Mind Screen