
Working on this again. Also using a Narrative Clip lifelogging camera (http://getnarrative.com/narrative-clip-1) to take a photo every 30 seconds, so I can search speech, vision, and location (using the Moves app API or maybe the Google location API).

It takes photos of what you see using a Narrative Clip lifelogging camera and records everything you hear using a wired mic or an Apple AirPod (ideally better hardware, but that's in the future), then analyzes it all using off-the-shelf machine learning APIs for speech recognition and image recognition. It could also pull in location tracking (via Moves) and other lifelogging data sources (like a Fitbit for heart rate), and maybe I could press a button on my smartwatch to mark a moment as important so I remember to review it later.
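To make that concrete, here's a rough sketch of the glue code I have in mind (Python, but nothing here is final). `transcribe_audio` and `label_image` are just stand-ins for whatever off-the-shelf speech and vision APIs end up doing the real work, and the `Moment` record is one guess at how to line all the data sources up by timestamp:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Moment:
    """One ~30-second slice of the lifelog, lining up every data source."""
    timestamp: datetime
    photo_path: str                      # frame from the clip camera
    audio_path: str                      # matching audio segment
    transcript: str = ""                 # filled in by speech recognition
    image_labels: List[str] = field(default_factory=list)  # vision API tags
    location: Optional[str] = None       # from Moves or Google location data
    heart_rate: Optional[int] = None     # e.g. from a Fitbit
    flagged: bool = False                # smartwatch "save this moment" button

def transcribe_audio(audio_path: str) -> str:
    """Stub: swap in a real speech-to-text API call here."""
    return ""

def label_image(photo_path: str) -> List[str]:
    """Stub: swap in a real image-recognition API call here."""
    return []

def ingest(timestamp: datetime, photo_path: str, audio_path: str) -> Moment:
    """Turn one raw camera/mic capture into a searchable record."""
    return Moment(
        timestamp=timestamp,
        photo_path=photo_path,
        audio_path=audio_path,
        transcript=transcribe_audio(audio_path),
        image_labels=label_image(photo_path),
    )
```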

Most of these data sources already exist, and most of the hard ML work has already been done. The hardware mostly exists too, although it's currently pretty rough. Basically, someone needs to tie everything together and make it simple to search, easy to use, and clearly useful.
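Continuing the sketch above, the "simple to search" part could start as plain keyword matching over the transcript, image labels, and location fields, with no new ML at all:

```python
def search(moments: List[Moment], query: str) -> List[Moment]:
    """Naive keyword search over everything we know about each moment."""
    q = query.lower()
    hits = [
        m for m in moments
        if q in " ".join([m.transcript, *m.image_labels, m.location or ""]).lower()
    ]
    hits.sort(key=lambda m: m.timestamp, reverse=True)  # newest first
    hits.sort(key=lambda m: not m.flagged)              # flagged moments first (stable sort)
    return hits
```

So `search(moments, "coffee")` would pull up every moment where someone said "coffee" or the camera saw one, with anything I'd flagged on the watch sorted to the top.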

The initial prototype I'm working on uses an Apple AirPod plus a dedicated Android phone running the MonoLoggr audio recording app for audio, and a Narrative Clip lifelogging camera for photos, but these are clearly just for prototyping, not long term. I just got back from a trip to Shenzhen where I met a couple of hardware companies that make knockoff Narrative Clip cameras and are willing to customize them to add the necessary functionality, although there would be a minimum order, so that's not feasible at this point.
