Inspiration

The inspiration for this project came from the fact that influential people like Elon Musk and Donald Trump can drastically affect the stock market with fewer than 280 characters. This phenomenon has only been highlighted by recent events like Musk causing SIGL to rise 6,350% and the GME stock surge. That is why we at Polus decided to build a web application that takes a tweeter and a company listed on the NYSE or NASDAQ, and tells the user how credible that tweeter is regarding the company's stock performance.

The name Polus was chosen because Polus is the Roman equivalent of Coeus, the Titan of intelligence and farsight, and Polus itself represents the celestial axis around which the heavens revolve. This fits nicely with a product that can be used to inform stock decisions, as well as with our space theme, which was itself inspired by the 🚀 emoji and the 'to the moon' phrase so often attached to the GME stock surge.

What it does

The user inputs a Twitter username, a company name, and a search-before date. The website then displays up to 8 tweets that fit the given criteria. The user can click on any of these tweets to see a visual representation of the stock's performance over the 5 days following the tweet, along with a corresponding credibility rating. This rating combines how accurate the tweeter was in predicting the stock's performance with how influential they were in affecting it.
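
For a sense of how this flow maps onto code, here is a minimal sketch of the search step as a Flask route. The form field names, the results.html template, and the fetch_tweets helper are hypothetical stand-ins, not the project's actual identifiers.

```python
from flask import Flask, render_template, request

app = Flask(__name__)

def fetch_tweets(username: str, company: str, before: str) -> list:
    """Hypothetical helper; the real version scrapes Twitter with twint."""
    return []

@app.route("/search", methods=["POST"])
def search():
    username = request.form["username"]  # Twitter handle to analyse
    company = request.form["company"]    # NYSE/NASDAQ company name
    before = request.form["before"]      # search-before date, e.g. 2021-01-15
    tweets = fetch_tweets(username, company, before)[:8]  # show at most 8
    return render_template("results.html", tweets=tweets)

if __name__ == "__main__":
    app.run(debug=True)
```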

How we built it

The interface was prototyped and designed in Figma, and the frontend was implemented with HTML and CSS. Python was the language of choice since most of the team was already familiar with it, so the backend was built with Flask, a Python web framework. Tweets were scraped with the twint library, since the Twitter API restricts the number of tweets that can be accessed from a given user. Stock market data was retrieved from the twelvedata API. Parsed data was stored in a PostgreSQL database to avoid recomputation and allow for faster results, and pgAdmin gave us a better view of the stored data, which let us implement and debug more efficiently. Finally, matplotlib was used to generate the stock charts.
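
As an illustration of the stock-data half of that pipeline, here is a rough sketch that pulls daily prices from twelvedata's time_series endpoint and charts the five trading days after a tweet with matplotlib. The API key is a placeholder, and the exact parameters are assumptions based on twelvedata's public REST documentation rather than our production code.

```python
import requests
import matplotlib.pyplot as plt

API_KEY = "YOUR_TWELVEDATA_KEY"  # placeholder, a real key comes from twelvedata

def five_days_after(symbol: str, tweet_date: str) -> list[float]:
    """Closing prices for (roughly) the five trading days after tweet_date."""
    resp = requests.get(
        "https://api.twelvedata.com/time_series",
        params={
            "symbol": symbol,        # e.g. "GME"
            "interval": "1day",
            "start_date": tweet_date,
            "outputsize": 6,         # tweet day plus five trading days
            "order": "asc",          # oldest first
            "apikey": API_KEY,
        },
        timeout=10,
    )
    values = resp.json()["values"]
    return [float(v["close"]) for v in values[1:]]  # drop the tweet day itself

def plot_performance(symbol: str, tweet_date: str) -> None:
    closes = five_days_after(symbol, tweet_date)
    plt.plot(range(1, len(closes) + 1), closes, marker="o")
    plt.xlabel("Trading days after tweet")
    plt.ylabel("Close (USD)")
    plt.title(f"{symbol} after {tweet_date}")
    plt.savefig("chart.png")  # the chart is served to the page as an image
```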

Challenges we ran into

One of the main challenges we encountered was integrating all the separate components. Although each of our sections worked fine on its own, errors popped up when we put everything together. We overcame this by hopping on a call and working collaboratively to mesh each part.

Another issue we ran into was the twelvedata API's limit of 8 calls per minute. Since each tweet requires its associated stock data, this capped the number of tweets we could consider and display at a time. Additional API calls per minute could be purchased, but that was outside our budget. We mitigated the issue by filtering tweets by popularity and allowing users to select a search-before date, narrowing the results down to the 8 most relevant and interesting tweets.
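
The popularity filter can be as simple as the sketch below: rank the scraped tweets by engagement and keep the top 8, so a single search never needs more than 8 stock-data calls. The likes_count and retweets_count attributes follow twint's Tweet objects; the likes-plus-retweets weighting is an illustrative choice, not a fixed formula from our code.

```python
def top_eight(tweets):
    """Keep only the 8 most popular tweets so one search fits the API quota."""
    by_popularity = sorted(
        tweets,
        key=lambda t: t.likes_count + t.retweets_count,  # twint Tweet fields
        reverse=True,
    )
    return by_popularity[:8]
```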

A third noteworthy issue was retrieving all of a user's tweets. The Twitter API heavily restricts developers by only allowing access to a maximum of 3,200 tweets from a given user, so we had to spend time finding a substitute that satisfied our needs. We ended up using twint, a Twitter scraping library that collects tweets for a specified user and keyword. The only issue with this approach was that the scraper is much slower than the API. We addressed this by storing previous query results in a database, allowing faster retrieval and greater scalability, since we anticipated that popular companies would be queried repeatedly.
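
A simplified version of that caching layer might look like the following: check PostgreSQL for the query first, and only fall back to the slower twint scrape on a miss. The connection string, table, and column names are illustrative; the twint configuration fields (Username, Search, Until) are the library's real options.

```python
import psycopg2
import twint

def get_tweets(username: str, company: str, before: str):
    conn = psycopg2.connect("dbname=polus")  # placeholder connection string
    with conn, conn.cursor() as cur:
        # Fast path: this query was computed before, so reuse the stored rows.
        cur.execute(
            "SELECT tweet_id, text, date FROM tweets "
            "WHERE username = %s AND keyword = %s AND date < %s",
            (username, company, before),
        )
        cached = cur.fetchall()
        if cached:
            return cached

        # Cache miss: scrape Twitter with twint (no 3,200-tweet cap).
        twint.output.tweets_list = []   # reset twint's module-level buffer
        c = twint.Config()
        c.Username = username
        c.Search = company
        c.Until = before                # only tweets before this date
        c.Store_object = True
        c.Hide_output = True
        twint.run.Search(c)

        rows = [
            (t.id, t.tweet, t.datestamp, username, company)
            for t in twint.output.tweets_list
        ]
        cur.executemany(
            "INSERT INTO tweets (tweet_id, text, date, username, keyword) "
            "VALUES (%s, %s, %s, %s, %s)",
            rows,
        )
        return [(tweet_id, text, date) for tweet_id, text, date, _, _ in rows]
```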

Accomplishments that we're proud of

When we started this project, we had only a dream and a rough vision of what we wanted to achieve. We are really proud of how we turned that vision into reality, especially under such tight time constraints. I am also especially proud of how quickly we integrated as a team, which is what I think allowed us to work so effectively together.

What we learned

Claude

"On the technical side, this was for sure one of the harder projects I’ve worked on due to the time constraint. I was learning a new framework called flask to bundle all the different components of the project together. I also learned about how to link a database like postgreSQL to my web application where it could communicate with one another. The many API and libraries that were used to reach our desired final product were so interesting as it allowed me to see all the tools available to software engineers.

In terms of soft skills, I can say that I left this hackathon feeling more confident in my ability to be a better teammate. It took two late nights of working with my team members to assemble a project that we were all happy with. We came into this hackathon barely knowing one another, but we were able to work around one another, listen to each other's opinions, and collaborate as effectively as possible on the project, the Devpost page, and the video submission. I am proud of my team for the effort and time we all dedicated to this project and this hackathon. I've truly grown through these two days, and I hope to continue learning by attending hackathons."

Michelle

"I was really glad that I was able to learn how to combine HTML with Flask, as well as polish both my design and frontend skills!"

Walter

"This was definitely an eye-opening experience for me as this was the first time I created a multi-faceted product in such a short amount of time. With regards to the technical side, I was able to learn a lot about good programming practices such as using a virtual environment as well as how to combine the front- and back-ends into a polished product. I was also able to learn about and use the various APIs and libraries available for Python.

At the same time, this was a valuable lesson in developing on a short timeline. To pull it off, we needed to set SMART goals and divide up tasks so they could often be done simultaneously.

Overall, this was a very enjoyable, albeit grueling, experience for me. I have to say that seeing our vision turn into reality made all the hard work we put in over the last 36 hours worth it. We couldn't have done this without a lot of communication, problem solving and hard work!"

What's next for POLUS

Moving forward, our goals for POLUS include improving the credibility algorithm, enhancing the user experience, and supporting the tracking of subreddits such as r/WSB and r/investing. For this project, the credibility calculation merely involved fitting a best-fit line to the stock's performance and comparing its trend to the sentiment of the tweet. We feel this could be improved by tracking additional factors: whether the tweet altered the stock's trend, the tweet's reach through likes and shares (an indicator of influence), and how drastic the change in performance was after the tweet. With these, we hope to provide a more accurate credibility ranking per tweet, as well as an overall credibility rating for a given tweeter and company that could potentially be used to guide financial decisions on the stock.
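
In its current form, that calculation reduces to something like the toy sketch below: fit a line through the post-tweet closing prices and check whether its slope agrees with the tweet's sentiment. The function name and the binary score are illustrative; the proposed improvements would add weights for influence and trend change on top of this.

```python
import numpy as np

def credibility(closes: list[float], sentiment: int) -> float:
    """Score 1.0 when the best-fit trend matches the tweet's sentiment, else 0.0.

    closes: closing prices for the 5 days after the tweet.
    sentiment: +1 for a bullish tweet, -1 for a bearish one.
    """
    days = np.arange(len(closes))
    slope, _ = np.polyfit(days, closes, deg=1)  # best-fit line over the window
    return 1.0 if np.sign(slope) == np.sign(sentiment) else 0.0
```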

Built With

Figma · HTML · CSS · Python · Flask · twint · twelvedata API · PostgreSQL · pgAdmin · matplotlib
