Monologgr logo

"What company did that guy say he worked at?"

"What movie did I tell her I'd buy tickets to?"

"What was that brilliant pun I came up with last night?"

We can search our texts and emails to remind ourselves of previous conversations, but the things we say out loud are lost forever. Not anymore! Monologgr keeps track of everything you say and makes it easily searchable. This small Android app uses any standard wired or wireless headset to continuously monitor audio. It uses the power of IBM Watson to efficiently transcribe your speech and analyze its tone, then lets you search your history by keyword and mood.

Let's say I want to find the times I talked about the TechCrunch Disrupt hackathon: I search for "tech crunch disrupt" and see relevant snippets of my conversations. I can play back the surrounding context to remind myself exactly what we were discussing at the time.
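As a rough sketch of how that kind of context-aware snippet search could work (the transcript text, data structure, and context window here are made up for illustration, not the app's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    start_sec: float   # offset of this chunk within the recording
    text: str          # transcribed speech for this chunk

# Hypothetical transcript log; in the real app these would come from
# the speech-to-text service.
log = [
    Snippet(0.0, "heading out to the hackathon this weekend"),
    Snippet(12.5, "we signed up for tech crunch disrupt"),
    Snippet(20.0, "the demo is on sunday afternoon"),
]

def search(log, query, context=1):
    """Return each matching snippet plus `context` neighbors on each side,
    so the user can replay the surrounding conversation."""
    q = query.lower()
    hits = []
    for i, snip in enumerate(log):
        if q in snip.text.lower():
            lo, hi = max(0, i - context), min(len(log), i + context + 1)
            hits.append(log[lo:hi])
    return hits

results = search(log, "tech crunch disrupt")
# results[0] holds the matching snippet with one neighbor on each side
```

Keeping start offsets alongside the text is what makes "play the surrounding context" possible: a hit maps straight back to a position in the audio.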

Monologgr is forgiving of transcription errors because it also indexes alternative interpretations of each utterance, and it respects others' privacy: headsets are tuned to pick up mainly your own voice. Try the Monologgr Android app at monologgr.com; it is not yet on the Google Play Store, as the hackathon build was a rough draft and not store-ready.
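A minimal sketch of indexing alternative interpretations, assuming the transcription service returns several candidate texts per utterance (the field names and sample data below are hypothetical, not the real Watson response shape):

```python
from collections import defaultdict

# Each utterance keeps the service's alternative transcriptions.
utterances = [
    {"id": 1, "alternatives": ["tech crunch disrupt", "tech brunch disrupt"]},
    {"id": 2, "alternatives": ["buy movie tickets", "by movie tickets"]},
]

# Inverted index: every word from every alternative points back at
# the utterance it came from.
index = defaultdict(set)
for utt in utterances:
    for alt in utt["alternatives"]:
        for word in alt.lower().split():
            index[word].add(utt["id"])

def lookup(word):
    """Utterance ids whose transcription (any alternative) contains `word`."""
    return index.get(word.lower(), set())
```

Because every alternative is indexed, a search for a word that appears only in the second-best interpretation (e.g. "brunch") still finds the utterance the user meant.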

Created in less than 24 hours for the TechCrunch Disrupt SF Hackathon 2015 by Roger Pincombe and Nolan Amy.


Updates

Roger Pincombe posted an update

Working on this again. I'm also using a Narrative Clip lifelogging camera (http://getnarrative.com/narrative-clip-1) to take a photo every 30 seconds, so that I can search speech, vision, and location (using the Moves app API, or maybe the Google location API).

It takes photos of what you see using a Memento Clip lifelogging camera and records everything you hear using a wired mic or an Apple AirPod (ideally better hardware, but that's in the future), then analyzes it all using off-the-shelf machine learning APIs for speech recognition and image recognition. It could also use location tracking (via Moves) and other lifelogging data sources (like a Fitbit for heart rate), or I could press a button on my smartwatch to mark a moment in time as important so I remember to review it later.

Most of these data sources already exist, and most of the hard ML work has already been done. The hardware mostly exists, although it's currently pretty rough. Basically, someone needs to tie everything together and make it simple to search, easy to use, and clearly useful.
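One way to tie those sources together is to merge each source's timestamped, already-sorted event log into a single timeline that can be queried by time. A minimal Python sketch with made-up sample data (the source names and events are illustrative only):

```python
import heapq

# Illustrative per-source logs, each already sorted by timestamp (epoch seconds).
speech = [(100, "speech", "talked about the demo"), (300, "speech", "lunch plans")]
photos = [(130, "photo", "IMG_0042.jpg"), (250, "photo", "IMG_0043.jpg")]
location = [(90, "location", "office"), (260, "location", "cafe")]

# heapq.merge lazily merges the sorted streams into one chronological timeline.
timeline = list(heapq.merge(speech, photos, location))

def around(timeline, t, window=60):
    """Everything logged within `window` seconds of time t, from any source."""
    return [e for e in timeline if abs(e[0] - t) <= window]
```

With a unified timeline, a hit in one modality (a spoken phrase, say) can pull up what the camera and location tracker recorded at the same moment.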

The initial prototype I'm working on uses an Apple AirPod and a dedicated Android phone running the MonoLoggr audio recording app for audio, plus a Narrative Clip lifelogging camera, but these are clearly just for prototyping, not the long term. I just got back from a trip to Shenzhen, where I met a couple of hardware companies that make knockoff Memento Clip cameras and are willing to customize them with the necessary functionality, although there would be a minimum order, so it's not feasible at this point.
