Abstract
Corporate earnings calls contain valuable information about a company's performance and future outlook; however, individual investors often lack the time or resources to analyze the audio recordings of these calls quickly enough to make timely investment decisions. Past sentiment analysis work has focused mainly on transcripts rather than the actual audio of the earnings call. We want to make this information widely accessible by concisely quantifying the sentiment in the audio of the calls so that retail investors can make informed decisions using the information they contain. Our solution analyzes the raw audio content of earnings calls using sentiment analysis techniques for audio signals. We also explore the text transcripts to provide further value consistent with more traditional approaches. After building robust deep learning models to achieve this goal, we compiled them into an interactive, easy-to-use website where users can explore various stocks, browse upcoming earnings calls, and view predictions of future performance based on these calls.
Modelling Methodology
To achieve this, we built three machine learning models: two neural networks that interpret the raw audio waveforms of each call, and one that focuses purely on the call transcripts. To construct the audio feature set, we first ran a forced aligner (Aeneas) between the transcript and the audio to determine when each participant was speaking. After removing unnecessary speakers (such as the operator), we divided the remaining audio for each call into five-second segments and computed features on each segment separately. Features were drawn from the Geneva Minimalistic Acoustic Parameter Set, which includes pitch, tone, and other speech features. Labels were derived from the price change 10 days after each earnings call and bucketed into three classes (buy, hold, and sell). From this feature set, we constructed two models: a Long Short-Term Memory (LSTM) network, whose inherent memory structure lets it capture inter- and intra-call relationships, and a convolutional neural network (CNN), which considers each call in isolation. After predicting a class for each five-second segment of a call, a majority vote determines the overall prediction for that call.
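To make the audio pipeline concrete, here is a minimal sketch of the kind of segment-level classifier and per-call majority vote described above. It is not our exact architecture: the feature dimension, window length, and layer sizes are illustrative assumptions, and the acoustic features are assumed to already be extracted for each five-second segment.

```python
# Illustrative sketch only: a small Keras LSTM over windows of GeMAPS-style
# segment features, plus a per-call majority vote. Feature dimension,
# window length, and layer sizes are assumptions, not the exact setup.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 88   # acoustic features per five-second segment (assumed)
WINDOW = 10       # consecutive segments per LSTM input window (assumed)
N_CLASSES = 3     # buy / hold / sell

model = keras.Sequential([
    keras.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def predict_call(segment_windows: np.ndarray) -> int:
    """Majority vote over per-window class predictions for one call."""
    probs = model.predict(segment_windows, verbose=0)   # shape: (n_windows, 3)
    votes = probs.argmax(axis=1)
    return int(np.bincount(votes, minlength=N_CLASSES).argmax())
```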
For our text model, we took the "utterances" (each interval of uninterrupted speech from one participant) from the transcripts and computed features from them using the Loughran-McDonald (LM) sentiment dictionary, which is tailored to financial language. The LM dictionary defines categories such as positive, negative, and litigious, and we use these to compute normalized sentiment scores for each category in each call. These scores are then ranked against the company's previous calls to benchmark each call against its own history, before being fed into a random forest model trained on the same labels described above.
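As a rough illustration of this text pipeline (not our exact code), the sketch below computes normalized LM category scores for an utterance and feeds call-level features into a scikit-learn random forest. The dictionary file path and column names are assumptions about the LM master dictionary format.

```python
# Illustrative sketch: normalized Loughran-McDonald category scores per
# utterance, then a random forest over call-level features. The CSV path
# and column names ("Word", "Positive", ...) are assumptions.
from collections import Counter
import re
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

CATEGORIES = ["Positive", "Negative", "Litigious", "Uncertainty"]

lm = pd.read_csv("LoughranMcDonald_MasterDictionary.csv")  # hypothetical path
lm_sets = {c: set(lm.loc[lm[c] != 0, "Word"].str.upper()) for c in CATEGORIES}

def lm_scores(utterance: str) -> dict:
    """Normalized sentiment scores: category word hits / total tokens."""
    tokens = re.findall(r"[A-Za-z']+", utterance.upper())
    n = max(len(tokens), 1)
    counts = Counter(tokens)
    return {c: sum(counts[w] for w in lm_sets[c] & counts.keys()) / n
            for c in CATEGORIES}

# X: one row of aggregated, historically ranked scores per call;
# y: buy/hold/sell label from the 10-day price change.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
# clf.fit(X_train, y_train)
```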
Putting Everything Together
We've built an intuitive and powerful web application around these models, which allows retail investors to explore various stocks and access our predictions for over 12,000 earnings calls. Users can browse stocks, financial news, historical prices, and the predictions we've made, and they can also request an unlimited number of new predictions for recent earnings calls, which are generated on the fly. The frontend is built with React and Bootstrap styling and communicates with a Flask backend hosted on AWS. User login and authentication are handled through the Auth0 API to ensure a safe and reliable connection. The Flask backend handles user requests and communicates with our MySQL database, which caches predictions and earnings call dates, and with Finnhub, our provider of financial data and earnings call audio files. All information sent to our backend is protected with HTTPS to further ensure the safety of our communications. The backend can also make a prediction for a new call, performing all of the necessary preprocessing and inference on the fly. If no user requests a prediction for a new call, a nightly cron job runs automatically and updates these predictions. Our web app is hosted on Netlify, allowing easy access for any investor!
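For a sense of how a request flows through the backend, here is a heavily simplified Flask sketch of the cache-then-predict behavior. The route name, helper function, and in-memory dict (standing in for the MySQL cache and the Finnhub/preprocessing steps) are hypothetical, not our production code.

```python
# Illustrative Flask route: serve a cached prediction if one exists,
# otherwise compute it on the fly and cache it. The dict stands in for
# the MySQL cache; run_audio_pipeline stands in for the real pipeline.
from flask import Flask, jsonify

app = Flask(__name__)
_cache = {}  # stand-in for the MySQL predictions table

def run_audio_pipeline(symbol: str, call_id: str) -> str:
    # Placeholder: fetch audio from Finnhub, align with Aeneas, segment,
    # extract acoustic features, and run the models described above.
    return "hold"

@app.route("/api/prediction/<symbol>/<call_id>")
def get_prediction(symbol, call_id):
    key = (symbol, call_id)
    if key not in _cache:
        _cache[key] = run_audio_pipeline(symbol, call_id)
    return jsonify({"symbol": symbol,
                    "call": call_id,
                    "prediction": _cache[key]})
```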
Tech Jargon aside...
We think this product provides real value and helps level the playing field between institutional investors, who often have teams of people following these earnings calls, and individual retail investors, who have neither the resources nor the time to follow these calls regularly. We are so excited to present this to you all. Please check out our app at www.stocksense.org!
View our full demo here: https://youtu.be/MmMeCzJPJc8
Built With
- aeneas
- amazon-ec2
- amazon-web-services
- finnhub
- flask
- javascript
- keras
- mysql
- python
- rds
- react
- tensorflow