Voice and Natural Language Processing let us build applications with very low-friction human interaction. I built this application to offer car operators voice command controls instead of many clicks and touches. It can also help the operator with reminders. For example, the user can say: "Remind me to {visit my uncle} when I am in {Brooklyn}". This creates a reminder and notifies the driver with the reminder note when he drives near Brooklyn.

What it does

After logging in, the user can say voice commands like: "Lock my car", "Stop the engine", "Remind me to check the new office when I am in Queens", or "How is my car doing?". The app uses the Ford Connect API to send these commands, or to fetch the requested data and display it to the user.
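Mapping a spoken phrase to one of these commands can be sketched as a small pattern table. This is only an illustration of the idea; the intent names, patterns, and `parseCommand` helper below are hypothetical, not the actual Ford Assist code.

```javascript
// Hypothetical command table: each entry maps a phrasing to an intent,
// with optional named entities captured from the utterance.
const COMMAND_PATTERNS = [
  { intent: "LOCK_CAR", pattern: /^lock my car$/i },
  { intent: "STOP_ENGINE", pattern: /^stop the engine$/i },
  { intent: "CAR_STATUS", pattern: /^how is my car doing\??$/i },
  {
    intent: "CREATE_REMINDER",
    pattern: /^remind me to (.+) when i am in (.+)$/i,
    entities: ["note", "place"],
  },
];

// Return { intent, entities } for an utterance, or UNKNOWN if nothing matches.
function parseCommand(utterance) {
  const text = utterance.trim();
  for (const { intent, pattern, entities = [] } of COMMAND_PATTERNS) {
    const match = text.match(pattern);
    if (match) {
      const slots = {};
      entities.forEach((name, i) => { slots[name] = match[i + 1]; });
      return { intent, entities: slots };
    }
  }
  return { intent: "UNKNOWN", entities: {} };
}
```

A real NLP service is more flexible than regexes (it tolerates rephrasings), but the output shape is the same: an intent plus any extracted entities.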

How we built it

I built this app using Node.js, MySQL, and the Web Speech API. The current test version works only in Chrome on desktop, but it can easily be updated to work on any mobile device.
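Capturing a voice command in the browser looks roughly like the sketch below. The `extractTranscript` helper and the logging are assumptions for illustration; the `webkitSpeechRecognition` constructor is the Chrome-prefixed Web Speech API entry point, which is why the current version is Chrome-only.

```javascript
// Pull the recognized text out of a SpeechRecognition result event:
// the first alternative of the first result is the best transcript.
function extractTranscript(event) {
  return event.results[0][0].transcript;
}

// Feature-detect so this sketch is a no-op outside Chrome.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.lang = "en-US";
  recognition.onresult = (event) => {
    const command = extractTranscript(event);
    console.log("Heard:", command); // hand the text off to the NLP step here
  };
  recognition.start(); // Chrome prompts for microphone permission
}
```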

How it works

After the user logs in and authorizes the application to control his car, the application requests an access token and refreshes it every 5 minutes. The app then updates the car's location and status every 2 minutes. All the data is saved to the database and can be queried using voice commands. When the user sends a voice command, the application uses NLP to identify the intent and any entities. Finally, the application runs the identified command and responds to the user.
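The location-based reminders described earlier can piggyback on the 2-minute location updates: on each update, check whether any saved reminder's place is within some radius of the car. The sketch below is an assumption about how that check could look (the function names and the 2 km radius are not from the actual implementation); it uses the standard haversine great-circle distance.

```javascript
const EARTH_RADIUS_KM = 6371;

// Great-circle distance between two lat/lon points (haversine formula).
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Called on each location update: return the reminders that should fire,
// i.e. those saved for a place within radiusKm of the car's position.
function dueReminders(reminders, carLat, carLon, radiusKm = 2) {
  return reminders.filter(
    (r) => haversineKm(carLat, carLon, r.lat, r.lon) <= radiusKm
  );
}
```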

What's next for Ford Assist

Build a mobile app version and add more commands to the app. Add a voice transcript to the results (I skipped this because of time). Also, I will add hot-word listening so the user does not have to click a button.
