The main inspiration for this project came from the difficulties that physically challenged people, such as quadriplegic and blind people, face while using smartphones. Based on that, we planned to develop a news reading application that the user can operate entirely without touch screen input, through voice commands alone. The voice commands are processed through AWS Wavelength, and based on the result the application performs the necessary functions, such as fetching news and displaying it on both the mobile screen and a laptop screen.

What it does

The application is accessed entirely through voice commands, without using touch screen input. The main hardware components are a sound sensor for sound detection and a microphone for recording voice commands, both connected to a Raspberry Pi. The Raspberry Pi sends the voice recordings via S3 to an AWS Wavelength EC2 instance for speech processing. Based on the result of the speech processing, the mobile application performs the necessary function: fetching news, displaying news, or switching the display from the mobile screen to the laptop screen.
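The last step of this pipeline can be sketched as a small dispatch function: once the EC2 instance returns a transcript, the application maps it to one of the supported actions. This is a minimal illustration only; the action names and trigger phrases below are assumptions, not the project's actual command set.

```python
# Hypothetical action labels and phrase lists for illustration;
# the real application's commands may differ.
GET_NEWS = "GET_NEWS"
DISPLAY_NEWS = "DISPLAY_NEWS"
SWITCH_DISPLAY = "SWITCH_DISPLAY"
UNKNOWN = "UNKNOWN"

_COMMANDS = [
    (SWITCH_DISPLAY, ("switch display", "laptop screen", "switch screen")),
    (DISPLAY_NEWS, ("display news", "read news", "show news")),
    (GET_NEWS, ("get news", "fetch news", "latest news")),
]

def interpret_command(transcript: str) -> str:
    """Map a speech-to-text transcript to an application action."""
    text = transcript.lower().strip()
    for action, phrases in _COMMANDS:
        # Substring matching keeps the sketch tolerant of filler words
        # around the command ("please get news now").
        if any(phrase in text for phrase in phrases):
            return action
    return UNKNOWN
```

In a fuller implementation, the same idea would run on the transcript returned from the speech-processing step before the app fetches or displays anything.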

How we built it

We built the system by connecting the microphone and sound sensor to a Raspberry Pi for voice recording. The mobile application was developed in Android Studio. Speech processing is done with speech recognition libraries running on an AWS Wavelength EC2 instance. The laptop screen display is built with Python and the PyQt libraries.
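The Pi-side trigger logic can be sketched as follows: the digital sound sensor is polled, and recording starts only after several consecutive "sound detected" readings, so a single noise spike does not trigger it. The sensor read function is injected so the logic runs without GPIO hardware; the debounce count and poll limit are assumptions, not the project's actual values.

```python
def wait_for_sound(read_sensor, consecutive=3, max_polls=1000):
    """Return the poll index at which recording should start, or None.

    read_sensor() -> 1 when the sensor detects sound, 0 otherwise.
    Recording starts after `consecutive` positive readings in a row.
    """
    streak = 0
    for i in range(max_polls):
        if read_sensor() == 1:
            streak += 1
            if streak >= consecutive:
                return i
        else:
            # Reset on any quiet reading so isolated spikes are ignored.
            streak = 0
    return None
```

On real hardware, `read_sensor` would wrap a GPIO read of the sensor's output pin (e.g. via the RPi.GPIO library), and the loop would sleep briefly between polls before the microphone recording is started and uploaded.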

Challenges we ran into

We faced challenges in building the mobile application and integrating it with the Raspberry Pi and the AWS Wavelength EC2 instance.

Accomplishments that we're proud of

We were able to develop the whole system, including the mobile application, and integrate all of its parts properly.

What we learned

We learned a lot about using sound sensors and integrating them with a Raspberry Pi, a mobile application, and an AWS Wavelength EC2 instance.

What's next for BAQ News Reader

Supporting more advanced forms of voice commands and speech processing while accessing the mobile application.
