To make Twitter more accessible, helping users explore content by voice and find precisely what they need.

What it does

The application searches for Twitter Spaces on desktop using voice commands.

How we built it

I started by mapping out the input and output data flow for the idea, then drew up the block diagram and gathered requirements. After finalizing the API integration and cloud infrastructure, I started building.

Technology Stack

Frontend - React JS
Backend - Express JS
Runtime Environment - Node JS
APIs - Twitter API v2 (Spaces search) and Microsoft Azure Speech Services API
Web Server - Nginx
Cloud Infrastructure - Ubuntu virtual machine on Microsoft Azure

Challenges we ran into

Setting up a reverse proxy on the Nginx web server for the Express JS backend and React JS frontend, to host both on a virtual machine in the Microsoft Azure cloud.
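A minimal sketch of that reverse-proxy setup, assuming the React build is served as static files from `/var/www/app/build` and the Express API listens on port 3000 under an `/api/` prefix (the paths, port, prefix, and domain are all assumptions, not the project's actual config):

```nginx
server {
    listen 80;
    server_name example.com;          # placeholder domain

    # Serve the compiled React frontend.
    root /var/www/app/build;
    index index.html;

    location / {
        try_files $uri /index.html;   # fall back to index.html for client-side routing
    }

    # Proxy API calls to the Express backend.
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Putting both apps behind one Nginx server block like this also avoids browser CORS issues, since the frontend and API share an origin.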

Integrating live microphone transcription into React JS with the Microsoft Azure Speech Service SDK.
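The wiring for that can be sketched roughly as below, using the official `microsoft-cognitiveservices-speech-sdk` package. The command grammar in `utteranceToQuery`, the callback shape, and the credential placeholders are all assumptions for illustration, not the project's actual code:

```javascript
// Pure helper (hypothetical grammar): turn a recognized utterance like
// "Search for tech talks." into the query string sent to the backend.
function utteranceToQuery(utterance) {
  const match = utterance
    .trim()
    .toLowerCase()
    .match(/^search(?: for)?\s+(.+?)[.?!]?$/);
  return match ? match[1] : null;
}

// SDK wiring (sketch; needs real Azure credentials and a browser microphone).
// In the React app this would be an ES import at the top of the module.
function startRecognition(onQuery) {
  const sdk = require("microsoft-cognitiveservices-speech-sdk");
  const speechConfig = sdk.SpeechConfig.fromSubscription("AZURE_KEY", "AZURE_REGION");
  const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

  // Fires once per finalized utterance during continuous recognition.
  recognizer.recognized = (_sender, event) => {
    const query = utteranceToQuery(event.result.text);
    if (query) onQuery(query);
  };

  recognizer.startContinuousRecognitionAsync();
  return recognizer; // caller can stopContinuousRecognitionAsync() later
}
```

Keeping the command parsing in a pure function like `utteranceToQuery` makes it easy to unit-test without a microphone or Azure account.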

Accomplishments that we're proud of

Designing the project on Infrastructure as a Service: an Ubuntu virtual machine on Microsoft Azure Cloud.

Making the frontend and backend of the application communicate seamlessly over a REST API.
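The backend's call to Twitter can be sketched as below against the Twitter API v2 Spaces search endpoint (`GET /2/spaces/search`). The `/api/spaces` route name, the use of Node 18+'s global `fetch`, and the `TWITTER_BEARER_TOKEN` environment variable are assumptions for illustration:

```javascript
// Build the Twitter API v2 Spaces search URL; state defaults to live Spaces.
function buildSpacesSearchUrl(query, state = "live") {
  const params = new URLSearchParams({ query, state });
  return `https://api.twitter.com/2/spaces/search?${params.toString()}`;
}

// Express wiring (sketch; requires express and a real bearer token):
//
// app.get("/api/spaces", async (req, res) => {
//   const response = await fetch(buildSpacesSearchUrl(req.query.q), {
//     headers: { Authorization: `Bearer ${process.env.TWITTER_BEARER_TOKEN}` },
//   });
//   res.json(await response.json());  // relay Twitter's JSON to the React app
// });
```

Proxying the request through Express keeps the bearer token on the server instead of exposing it in the browser bundle.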

What we learned

Learned a life lesson: always start from what the user actually needs, and only then move on to choosing the technology and designing the solution.

Learned about OAuth 2.0, the Microsoft Azure Speech Service, Twitter Spaces, and React JS.

What's next for Voice controlled Search tool for Twitter Spaces on Desktop

The application can be scaled up with more voice-controlled features for exploring different types of content on Twitter, further improving accessibility.
