We understand the importance of a strong education and have seen countless newspaper articles, statistics, and pictures detailing the lack of materials in developing countries. Many children living in poorer countries cannot receive a good education because their schools cannot afford technologies such as projectors to enhance the material. Surprisingly, however, statistics show that a large majority of the world's population, rich or poor, owns a smartphone, yet the current schooling system has no way to integrate this technology into teaching. In an era where the exchange of data holds immense value, it is vital for everyone to have access to our vast technological resources. We wanted to build a system that not only lets people like us quickly and easily make and share presentations, but also lets those in underprivileged communities who cannot afford laptops or iPads use their smartphones to access information shared by their teachers.

What it does

Our system consists of four distinct components. The website, built with HTML, CSS, and JavaScript, gives admin users the option to create or host a presentation. To host a presentation, they connect their Google Drive account and import their slides; each slide is converted to a PNG file and sent to a centralized Firebase Database. The iOS app, written in Swift, Objective-C, and Ruby, lets client-side users access shared presentations. They are presented with an enter-code page, where they can either scan an RFID tag or input a unique authentication key. Once verified, they see a live PNG image of the presentation pulled from the Firebase Database that updates immediately after an admin makes a change. The website's other core functionality is presentation creation, which uses the WebKit Speech-to-Text library and natural language processing to build a brand-new presentation for you. Based on recognized speech patterns, you can add text bullets, add graphs, add images, remove bullets, import online information, and more. Lastly, a C++ Arduino program lets client-side iOS app users easily join a presentation. Using an RFID reader and an HM-10 Bluetooth chip, RFID tags are scanned and sent to the iPhone, where they are mapped to authentication keys. The user receives feedback from an LCD, an LED, and a buzzer.
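To make the tag-to-key step concrete, here is a minimal JavaScript sketch (the app itself performs this in Swift; the function names and the lookup table below are hypothetical illustrations, not our actual implementation):

```javascript
// Hypothetical sketch: turn a scanned RFID UID into an authentication
// key and look up the presentation it unlocks. Names are assumptions.
function uidToAuthKey(uidBytes) {
  // Encode each UID byte as two uppercase hex digits, e.g. [0x04, 0xA1] -> "04A1"
  return uidBytes
    .map(b => b.toString(16).padStart(2, "0").toUpperCase())
    .join("");
}

// In the real system this mapping would live in the Firebase Database.
const presentations = { "04A1B2C3": "demo-presentation" };

function resolvePresentation(uidBytes) {
  return presentations[uidToAuthKey(uidBytes)] ?? null;
}
```

An unknown tag resolves to null, which is where the app would fall back to asking the user to type the key by hand.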

How we built it

We built the iOS app in Swift, Objective-C, and Ruby using the Xcode IDE. The app supports a Bluetooth connection to the Arduino hardware and a WiFi connection to the central Firebase Database, and displays both base64-encoded and URL-based images to the user. The website for creating and hosting presentations was built in HTML, CSS, and JavaScript. When a user creates a presentation, BCrypt hashing is used to secure logins. The Flickr API, WebKit Speech-to-Text, and CanvasJS provide the core presentation functionality. The hardware was coded in C++ in the Arduino IDE, using an Arduino Mega 2560, an MFRC522 RFID reader, an HM-10 Bluetooth chip, a 16x2 LCD, an RGB LED, and a buzzer. The Firebase Database served as the central cloud structure and integrated the components of our system with one another. The database stores each presentation's current slide as a PNG image URL, allowing users to view the current slide without a live stream and reducing bandwidth usage. By connecting each component in a secure IoT system, we were able to provide a seamless, multi-platform experience for users and admins.
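The per-presentation record in the database is small by design. A minimal JavaScript sketch of what such a record might look like (the field names here are illustrative assumptions, not our exact schema):

```javascript
// Hypothetical sketch of the record the admin side writes for each
// presentation; clients re-render whenever it changes. Field names
// are illustrative assumptions.
function buildSlideRecord(presentationId, slideIndex, pngUrl) {
  return {
    id: presentationId,
    currentSlide: slideIndex,
    slideUrl: pngUrl,        // PNG of the current slide only -- no video stream
    updatedAt: Date.now()    // lets clients detect a stale slide
  };
}
```

Because only one PNG URL is stored at a time, each slide change costs viewers a single image fetch rather than the continuous bandwidth of a live stream.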

Challenges we ran into

Our major issue was passing images to the Firebase Database during presentation creation. We had no problem converting premade Google Slides presentations into PNG images, but because our presentation-creation interface was written directly in HTML/CSS/JavaScript, we could not effectively capture all the elements of a slide (header, text, images, graphs) in a single image. To work around this, we saved two images and overlapped them to recreate the original: the base image contained the slide's text and was base64 encoded, while the secondary image contained the embedded picture in URL form. While this made our app slightly less efficient, it let users view all aspects of the presentation while still reducing bandwidth usage.
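The two-layer payload can be sketched in JavaScript as follows (a simplified illustration under assumed names, using Node's Buffer for the base64 step; the real site does this in the browser):

```javascript
// Hypothetical sketch of the two-layer slide payload described above:
// a base64-encoded base layer (the slide's text) plus an overlay URL
// (the slide's embedded picture). Names are illustrative assumptions.
function buildSlidePayload(basePngBytes, overlayUrl) {
  return {
    base: Buffer.from(basePngBytes).toString("base64"), // text layer
    overlayUrl                                          // image layer, fetched by URL
  };
}

// The client decodes the base layer and draws the overlay on top of it.
function decodeBaseLayer(payload) {
  return Buffer.from(payload.base, "base64");
}
```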

Accomplishments that we're proud of

We are most proud of our presentation-creation console. While our entire system integrates well and accomplishes our original vision, we are extremely proud that our speech recognition and analysis allow accurate, on-the-fly presentation creation, and we are excited to test our program in our school presentations!

What we learned

Although we came in as semi-experienced programmers, we knew almost nothing about speech recognition and natural language processing. Determined to integrate them into our project, all of our group members worked together to build the final script. Along the way, we learned many techniques for transcribing speech and searching for the key words that drive the presentation-creation experience.
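The keyword-search step can be sketched as a small JavaScript function (the command vocabulary below is a hypothetical example, not our exact word list):

```javascript
// Hypothetical sketch of matching a transcribed utterance against
// presentation-creation commands. The command list is illustrative.
const COMMANDS = [
  { keyword: "add bullet",    action: "ADD_BULLET" },
  { keyword: "add graph",     action: "ADD_GRAPH" },
  { keyword: "add image",     action: "ADD_IMAGE" },
  { keyword: "remove bullet", action: "REMOVE_BULLET" }
];

function parseTranscript(transcript) {
  const text = transcript.toLowerCase().trim();
  const match = COMMANDS.find(c => text.startsWith(c.keyword));
  if (!match) return { action: "DICTATE", payload: transcript };
  // Whatever follows the command phrase becomes its argument.
  return {
    action: match.action,
    payload: transcript.trim().slice(match.keyword.length).trim()
  };
}
```

Anything that matches no command is treated as dictation and lands in the slide body as-is.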

What's next for SyncFast

We hope to improve the efficiency of our app. Although performance is heavily dependent on WiFi speeds, there are still ways we can make our admin and client-side interfaces faster. Furthermore, while we figured out how to pass images during presentation creation, we had no way to export the graphs as images. Because the graphs were plotted in HTML and were therefore not images, we tried to build a script that saves screenshots of the webpage, but ultimately failed. We hope to find a cleaner and faster way to send information from the admin presentation-creation console to client-side viewers.
