Eli Shea and Zach Huang
CSCI 1470 Final Project Check-In 2 Reflection

We have made a few significant changes to our project since the outset. Our first approach had a key limitation: the sample size we were working with was limited to about twelve thousand NFL games, which we ultimately decided was not enough to train a robust model. We therefore shifted our focus to using CNNs to predict Fantasy Football scores for Wide Receivers in the NFL. There is far more data available for this task, which has made it easier to focus on only the most relevant information.

So far, a significant portion of our work has gone into preprocessing the data, although we have also spent a lot of time discussing our model architecture. One of the features we have decided to engineer is a trailing average of a player's statistics, week over week, giving the model a glimpse of how a player has been performing lately. This makes intuitive sense and we believe it will deliver better results, but it does mean we have to take extra steps to ensure there is no data contamination between our training and testing sets, which has required a little extra work.

We have also spent some time reflecting on how to judge the success of our model. Since we have shifted to a regression problem, accuracy is no longer an applicable metric. We believe Mean Squared Error is a good fit, but recognize that we need to contextualize this error against some baseline performance. Our current plan is to call the model successful if it outperforms the naive strategy of simply predicting a player's score from the previous week.
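The trailing-average feature and the previous-week baseline described above can be sketched as follows. This is a minimal illustration, not our actual pipeline: the column names (`player`, `week`, `fantasy_points`) and the sample numbers are hypothetical. The key detail is calling `shift(1)` before `rolling()`, so the feature for week t only sees weeks before t and never leaks the target week's own score.

```python
import numpy as np
import pandas as pd

# Hypothetical weekly stat lines for two wide receivers; the schema and
# values are illustrative only, not the project's real dataset.
df = pd.DataFrame({
    "player": ["A"] * 6 + ["B"] * 6,
    "week":   list(range(1, 7)) * 2,
    "fantasy_points": [12.0, 8.5, 20.1, 5.4, 14.2, 9.9,
                       3.1, 7.7, 11.0, 15.6, 2.2, 18.4],
})

# Trailing 3-week average: shift(1) first, so the feature for week t is
# built from weeks t-1, t-2, t-3 only -- no contamination from week t.
df["trail_avg_3"] = (
    df.groupby("player")["fantasy_points"]
      .transform(lambda s: s.shift(1).rolling(3, min_periods=1).mean())
)

# Naive baseline: predict this week's score with last week's score.
df["prev_week"] = df.groupby("player")["fantasy_points"].shift(1)

# MSE of the previous-week predictor (weeks with no prior game are dropped).
valid = df.dropna(subset=["prev_week"])
baseline_mse = float(np.mean((valid["fantasy_points"] - valid["prev_week"]) ** 2))
print(f"baseline MSE (previous-week predictor): {baseline_mse:.2f}")
```

A trained model would then be judged against `baseline_mse` on the same held-out weeks: a lower test MSE than the previous-week predictor is the success criterion described above.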