Inspiration

The inspiration for this project came from the rise of teenagers wanting to get involved in politics and activism. Activism, however, can sometimes spread without holding much truth. The goal of this tool is to help inform teenagers and young adults so that only real information spreads through social media, and to flag falsified information when it does spread. Informing the public while keeping people involved and encouraging them to take initiative helps keep activism alive and counteracts the inaccuracies so often shared on social media sites. Every signature matters, and one way to make it count is by signing petitions you agree with, which we make more accessible. There has also been a lot of polarization between people with different political ideas, so we provide a platform for them to share their ideas with different people and broaden their knowledge and mindset.

What it does

VisMe is an app that informs users, helps them get more involved in society, and makes it easy for them to take initiative. The app comes with various useful and interesting features, discussed below:

  1. News - VisMe brings you relevant and recent news, catered and sorted to your interests with the help of our recommendation algorithm. You can scroll through the news endlessly and even open the original articles.

  2. Petitions - Every signature counts. To help you make a change, we provide another endless list of petitions from change.org, sorted for your interests by our algorithm. There is also a one-click sign feature: if you want to take initiative and sign an interesting petition, you can just click the sign button in our app and it will automatically sign the petition with your change.org account.

  3. Rooms - Every person has their own ideas and thoughts, and some of them conflict. To reduce polarization between people, and so that different people can present their ideas to each other and better understand others' views and opinions, we created rooms. There are many pre-made rooms on defined topics, visible to users as a list. Users can join a room and have a voice or video chat, and if they want a personal room they can create their own. Rooms have voice, video, screen share and even a full-fledged chat system, which makes VisMe powerful.

  4. Detector - After thorough research, we found the spread of fake news to be a big problem. So we built a full-fledged transformer-based (GPT-2) machine learning model that can accurately estimate the likelihood that a piece of news is fake or biased and identify keywords for the user. Whether it is an article or an Instagram image post, our ML model handles every sort of media except video. For an Instagram post, our script scrapes the images from the post, uses optical character recognition to extract the text, and then runs it through the detector. All you need to paste into the detector is a link or some text, and it will give you a neat analysis.

  5. Heatmap - This is one of the most interesting features. The heatmap displays protest and fatality data. To build it, we used the Google Maps API to render a map on the page and then rendered a heatmap layer over it. The heatmap uses data obtained from the ACLED API, which returns a spreadsheet that we parse. We then dynamically render markers over the heatmap, one for every state, with clear and concise data including the number of protests and their typical cause. Markers can be toggled at any time. Since we used the Google Maps API, we can also switch between map and satellite views, increasing customizability.

How we built it

This was a pretty complex app, with layers and varieties of technology at every level. The main stack:

  1. Frontend - HTML/CSS/JS/React
  2. Backend - Python and Node.js
  3. ML model - Python
  4. OCR - Python
  5. REST APIs - Express.js and Flask
  6. Heatmap - HTML/JS

Here is how we built each feature:

  1. News - We created a custom REST API for news. In it, we incorporated our custom web scraper for Google news articles and applied our sorting algorithm to the results, and we tested the API with Postman. There were plenty of news APIs available, but they limit the number of requests, and we wanted the feed to be endlessly scrollable, so we built our own REST API. On the frontend we built responsive cards and fetched the values for their props (a minimal endpoint sketch follows this list).

  2. Petitions - The petitions feature uses another custom REST API. We used change.org petitions, as it is widely recognized as the main petition-signing website. Unfortunately, it has no API available for public use, so we created our own REST API to scrape and sort the change.org petitions and rendered them on the page as cards (a rough scraping sketch is shown after this list). To add the one-click sign functionality, we used the Node.js library Puppeteer and had it autofill the stored information, click the sign button, and follow through to confirm that the petition has been signed.

  3. Rooms - For the rooms, we used the Twilio Video API, which made it much easier for a lot of people to have a video/voice chat. We made some predefined rooms on defined topics and show users the currently open rooms in a list. People can also create custom rooms and pass on the room codes. The rooms have voice, video, screen share, a full-fledged chat, and much more, which makes them unique (a token-minting sketch appears after this list).

  4. ML model/Detector - The ML model was written in Python and trained on a mix of CSVs of articles labelled fake and real. We also used GPT-2 to determine whether the text was generated artificially by taking into account the context between words. We thankfully reached 95% accuracy with the model. For plain text, you simply paste the text into the field and the REST API sends it to the model. For images, we use requests to pull the images, run OCR to extract the text, and pass that text to the model. Once the model produces an output, we display it in the application. Currently this supports Instagram image posts only (a simplified pipeline sketch appears below the list).

  5. Heatmap - We used the ACLED API to fetch data in xlsx format containing tens of thousands of incidents. We parsed this data and plotted it on a heatmap, automatically assigning darker colours to areas with more protests (for the protest heatmap) or more fatalities (for the fatalities heatmap). We dynamically assign a marker to every state in the United States, picking between danger, warning, and green symbols. The map uses the Google Maps API, so we get a slew of features, including the ability to toggle between map and satellite views (a data-parsing sketch appears after this list).
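
Below are a few minimal code sketches of the pieces described above. First, the news feature: a paginated Flask endpoint in the spirit of our custom REST API. The scraper and sorting functions here are simplified placeholders rather than our production code; the frontend just requests the next page as the user scrolls.

```python
# Minimal sketch of a paginated news endpoint (hypothetical names; the real
# scraper and sorting algorithm are more involved).
from flask import Flask, jsonify, request

app = Flask(__name__)

def scrape_news(query):
    """Placeholder for the custom Google news scraper."""
    return [{"title": f"Example headline {i}", "url": "https://example.com"} for i in range(100)]

def sort_by_interests(articles, interests):
    """Placeholder for the interest-based sorting algorithm."""
    return sorted(
        articles,
        key=lambda a: sum(kw in a["title"].lower() for kw in interests),
        reverse=True,
    )

@app.route("/api/news")
def news():
    page = int(request.args.get("page", 0))
    per_page = 10  # the frontend asks for the next page as the user keeps scrolling
    interests = request.args.get("interests", "").lower().split(",")
    articles = sort_by_interests(scrape_news("news"), interests)
    return jsonify(articles[page * per_page:(page + 1) * per_page])

if __name__ == "__main__":
    app.run(port=5000)
```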
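
For the petitions API, the scraping step could look roughly like the sketch below. The CSS selector, query parameters, and URL handling are illustrative assumptions, since change.org's real markup differs and changes over time; the one-click sign automation itself stays in Node.js with Puppeteer and is not shown here.

```python
# Rough sketch of scraping petition cards (selectors are assumptions, not
# change.org's actual markup).
import requests
from bs4 import BeautifulSoup

def scrape_petitions(topic: str):
    resp = requests.get(
        "https://www.change.org/search",
        params={"q": topic},
        headers={"User-Agent": "Mozilla/5.0"},  # plain requests are often blocked
        timeout=10,
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    petitions = []
    for link in soup.select("a[href*='/p/']"):  # assumed selector for petition links
        title = link.get_text(strip=True)
        url = link.get("href", "")
        if url.startswith("/"):
            url = "https://www.change.org" + url
        if title and url:
            petitions.append({"title": title, "url": url})
    return petitions

print(scrape_petitions("climate")[:5])
```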
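
For rooms, the React client joins through the Twilio Video JS SDK; on the server side, a room access token can be minted along these lines. The credentials, function name, and room name below are placeholders, not our exact backend code.

```python
# Sketch of minting a Twilio Video access token on the backend; the React
# client then uses this token with the Twilio Video JS SDK to join the room.
import os
from twilio.jwt.access_token import AccessToken
from twilio.jwt.access_token.grants import VideoGrant

def create_room_token(identity: str, room_name: str):
    token = AccessToken(
        os.environ["TWILIO_ACCOUNT_SID"],      # placeholder env vars
        os.environ["TWILIO_API_KEY_SID"],
        os.environ["TWILIO_API_KEY_SECRET"],
        identity=identity,
    )
    token.add_grant(VideoGrant(room=room_name))
    return token.to_jwt()

# e.g. returned from an API endpoint when a user clicks "Join room"
print(create_room_token("alice", "climate-change-discussion"))
```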
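
The detector pipeline (download image, OCR the text, score it) is sketched below with a simple TF-IDF plus logistic regression stand-in trained on labelled CSVs. Our actual model is transformer-based (GPT-2), and the file names and label convention here are placeholders, so treat this only as an illustration of the flow.

```python
# Simplified stand-in for the detector pipeline: download an image, OCR the
# text, then score it with a classifier trained on labelled CSVs.
import io
import pandas as pd
import pytesseract
import requests
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train on CSVs with columns "text" and "label" (0 = real, 1 = fake);
# the file names are placeholders.
df = pd.concat([pd.read_csv("real_articles.csv"), pd.read_csv("fake_articles.csv")])
clf = make_pipeline(TfidfVectorizer(max_features=50_000), LogisticRegression(max_iter=1000))
clf.fit(df["text"], df["label"])

def score_image_post(image_url: str) -> float:
    """Return the estimated probability that the text in the image is fake."""
    img = Image.open(io.BytesIO(requests.get(image_url, timeout=10).content))
    text = pytesseract.image_to_string(img)  # optical character recognition
    return float(clf.predict_proba([text])[0][1])
```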
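
Finally, the heatmap: the ACLED spreadsheet can be reduced to heatmap points and per-state marker summaries roughly as follows. The column names follow ACLED's export format, but the file path and the "typical cause" heuristic are assumptions for illustration.

```python
# Sketch of turning the ACLED spreadsheet into heatmap points and per-state
# marker summaries ("protests.xlsx" is a placeholder path).
import pandas as pd

df = pd.read_excel("protests.xlsx")

# Weighted points for the Google Maps heatmap layer: one per event,
# weighted by fatalities for the fatalities view.
heatmap_points = df[["latitude", "longitude", "fatalities"]].to_dict("records")

# One marker per state, with the protest count and the most common cause.
protests = df[df["event_type"] == "Protests"]
markers = (
    protests.groupby("admin1")  # admin1 = state in ACLED's US data
    .agg(
        count=("event_type", "size"),
        typical_cause=("sub_event_type", lambda s: s.mode().iloc[0]),
    )
    .reset_index()
    .to_dict("records")
)
print(markers[:3])
```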

Some Images

Flowchart: a neatly summarised flowchart of the application's structure and workflow

Meme: a meme (kinda)

VisMe room: group picture from a room

Heatmap: our beloved heatmap

Challenges we ran into

There were definitely some big challenges we had to fight through:

  1. React newbies - We were pretty new to React, so we faced a new error practically every minute. But we managed to get it all done.
  2. Twilio API - Integrating the Twilio Video API was a genuinely difficult task.
  3. Syntax issues in the communication between the frontend and the backend.
  4. One of the main challenges arose when wiring up our REST API communication. Most of the APIs run on different ports, so keeping track of them was fairly tedious, and we also hit cross-origin resource sharing (CORS) problems. Fortunately these were fixed with one-liners for both Express.js and Flask (the Flask one is shown after this list).
  5. We originally coded the heatmap in HTML and JS, but the way it was written made it very difficult to integrate into React and JSX. We rewrote it in JSX, but after some more difficulties with React (we were inexperienced with it), we decided to serve the HTML/JS version separately and embed it in the React page via an iframe.
  6. No API for change.org - There was no public API for petitions, so it took us a lot of time to create a custom one, which was very tedious.
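
For reference, the Flask side of that CORS fix really is one line with the flask-cors package (Express has an equivalent cors() middleware on the Node side):

```python
# Minimal Flask app showing the CORS "one-liner" that allows cross-origin
# requests from the React dev server (requires the flask-cors package).
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # enables CORS on all routes; same idea as app.use(cors()) in Express
```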

Accomplishments we are proud of

  1. Completing this app within 24 hours
  2. Creating two of our own REST APIs
  3. Getting an accuracy of 95% on the ML model
  4. Integrating the Twilio Video API successfully
  5. The endless scroll feature
  6. Creating our own sorting algorithm for personalised news recommendations

What we learned

  1. A great deal of React.js
  2. How to make custom REST APIs
  3. Browser automation with Puppeteer
  4. Creating sorting algorithms
  5. The Twilio Video API
  6. Cooperation skills across opposite time zones (barely slept) lol

What's next

  1. Scaling the APIs in the cloud with Docker and Kubernetes
  2. Improving the UI - Due to limited time we couldn't really focus on the UI, but we did a much better job than we expected ourselves to!
  3. Potential phone app - We could expand this into a mobile app.
  4. In the future, we would also like to add truth/fake detection for video news clips using ML and facial recognition.

LIVE PRESENTATION (more content in live presentation)

LINK FOR THE PRESENTATION IS - https://docs.google.com/presentation/d/1tTywJ4MnHfpFO8b7T65bV6JQYeD1j3I5oivzs7768qI/edit?usp=sharing
