Our team is composed of a bunch of political nerds: some of us study political science, and the rest are deeply invested in the upcoming election. When we saw that real-world usefulness was a judging criterion, we instantly knew what our project had to be. This election has been full of noise and severely lacking in signal - there is great information out there on the web, but it's hard to find and even harder to interpret. We hope to solve these problems with Candidata. Our webapp focuses only on bringing you data about the candidates - no tabloid interpretation or unsupported claims. We spent a chunk of time at the beginning of the project looking into what data people wanted, and the answer was overwhelming: people just want to see the facts. They want to know if a candidate is lying, where they stand in the polls, and whether a candidate is "bought out" with donations from large corporations.

What it does

The strength of Candidata comes from how it correlates data. When you visit the webapp, you get a clean overview of the candidates: their most recent statements as rated by PolitiFact, their current standing as an average of many different polls (as calculated by FiveThirtyEight), and their top overall campaign contributors. All of this data is out there and available through public APIs, but for the average person it is incredibly difficult to parse: it's spread across multiple sites, hidden among a trove of unimportant figures, and it's not at all clear how the pieces relate to one another. To someone who studies political science, the correlations and the important numbers jump out, and with Candidata we hope everyone else can see the data the same way. We dug through many sources of election data, pulled out the important parts through APIs so our webapp always has the newest information, and spent most of our time building a robust back-end that gathers the relevant data from these sources and displays it side by side like no other site has done before. Additionally, you can conveniently see the data on the issues you care about most: choose an issue that's important to you and let Candidata process the rest.

How we built it

We built the webapp on top of Django and Python, which lets us fetch API data server-side (avoiding any cross-origin issues in the browser) and cache it for an hour at a time so we don't keep hitting the APIs we pull from. Our front end is built with HTML, CSS, and JavaScript.
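The hourly caching pattern can be sketched roughly like this. In the actual app it sat behind Django's cache framework; the names here (`cached`, `fetch_polls`, `TTL`) are illustrative, not the project's real identifiers:

```python
import time

# Minimal sketch of hour-long caching: a fetched value is stored with a
# timestamp and reused until it is older than the TTL, so the upstream
# API is hit at most once per hour per key.
TTL = 60 * 60   # one hour, in seconds
_cache = {}     # key -> (timestamp, value)

def cached(key, fetch, ttl=TTL, now=time.time):
    entry = _cache.get(key)
    if entry is not None and now() - entry[0] < ttl:
        return entry[1]            # still fresh: reuse the cached value
    value = fetch()                # stale or missing: hit the API again
    _cache[key] = (now(), value)
    return value
```

A Django view would then call something like `cached("polls", fetch_polls)` and render the result, so the browser only ever talks to our server.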

Challenges we ran into

A couple of us were Django novices, so jumping into using it for the project was very challenging at first but extremely educational by the end!

Accomplishments that we're proud of

In 24 hours, we built a project that brings together data like no site before it. In order to do this, we had to build a robust backend that correlated relevant data, a daunting task that was incredibly rewarding when everything finally clicked.

What we learned

We all learned a lot in this project. We now have significant experience in Django and a much greater understanding of how it works, as well as valuable experience working with a team against a hard deadline.

What's next for Candidata

More data! With extra time, we can put in the grunt work to gather more sources and program in those correlations. We would also like to expand our coverage beyond the presidential election in the future.

Update

  • More up-to-date info! Before we had a hacked-together cron job. Now the server handles updating the info every hour.

  • More responsive CSS! We tried to make the layout responsive during the hackathon, but there were still random places where text overflowed its div. That's fixed now.

  • Background image! We tried to add it at the last minute, but the image ballooned to something ridiculous like 30MB after I applied some filters. Now it's a cool 188kB.
