The COVID-19 pandemic has drastically changed the way we operate in the world. One of the most significant changes has been a large shift to remote work. According to a survey conducted by PwC, 80% of CEOs expect widespread remote work to continue even after COVID-19 passes.
With distributed workforces becoming more prevalent, the use of video conferencing technology has skyrocketed. We used our Review Index API to analyze the aggregated consumer reviews of Zoom Video Communications and how those reviews relate to the company’s stock performance.
We include the notebook used for this data analysis at the end of this post, as well as all the raw data for your own analysis.
Extracting the Review Data
Extracting the online review data for this analysis was made easy by our Review Index API, allowing us to extract Zoom’s online reviews in just one API call:
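For illustration, here is a minimal sketch of that call in Python. The endpoint URL, parameter names and authentication header are placeholders rather than the actual Review Index API interface, so check the API documentation for the real call:

```python
import requests

# Placeholder endpoint, parameters and auth header -- see the Review Index
# API documentation for the actual interface.
response = requests.get(
    "https://api.example.com/review-index",
    params={"company": "Zoom Video Communications"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
reviews = response.json()  # aggregated reviews from every indexed platform
```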
In the background, this API has done the heavy lifting of indexing all of Zoom’s review profiles across the Internet, along with their associated reviews.
Diving into the Data
Review Rating
Although Zoom has exploded in popularity since the pandemic began, it has been a favorite among remote workers for years. Zoom is not only a great video conferencing software but a top software company overall. The Software Report, which ranks companies based on industry professional and customer reviews, ranked Zoom the 33rd-best software company in the world in 2020.
Compared to its competitors – which include BlueJeans, GoToMeeting, Join.me, Skype, Slack and Webex – Zoom’s consumer reviews are exceptional. Since the beginning of 2019, Zoom’s average review rating has not dropped below a score of 4 (out of 5).
However, Zoom’s average rating has dropped slightly since COVID-19 started. Before the pandemic, Zoom averaged a review rating of 4.66; after the pandemic began, that average fell to 4.55.
Review Text
Zoom’s rating drop can most likely be attributed to a large influx of new users putting more stress on its infrastructure. Our Review Index API also returns the actual review text for every review across multiple platforms. If we look at reviews with a rating of 3 or worse that mention “quality” in the review text, we can get a good idea of how many consumers were unhappy with Zoom’s quality.
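As a rough sketch of that filter, assuming the reviews are loaded into a pandas DataFrame (the file and column names here are our own):

```python
import pandas as pd

# Assumed file and column names: one row per review with a date, a 1-5
# rating and the review text.
reviews = pd.read_csv("zoom_reviews.csv", parse_dates=["date"])

# Reviews rated 3 or worse that mention "quality" in the text.
quality_issues = reviews[
    (reviews["rating"] <= 3)
    & reviews["text"].str.contains("quality", case=False, na=False)
]

# Share of quality-issue reviews per month.
monthly_share = (
    quality_issues.resample("M", on="date").size()
    / reviews.resample("M", on="date").size()
)
```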
We can see from the chart below that quality issue reviews spiked from July to August, right around the same time Zoom’s average rating dropped. The review rating and the percentage of quality issue reviews have a correlation coefficient of -0.38. So, when quality issues rise, the average review rating tends to drop.
After COVID-19 began, Zoom’s average rating dropped by 2.2%, the second-largest drop among the companies we analyzed, behind only Join.me’s 2.3% drop. Zoom’s quality issues may be something to keep an eye on in the short term, but a well-managed company like Zoom should be able to make the necessary adjustments.
Review Volume
When it comes to review volume, Zoom’s advantage over competitors becomes more apparent. Since COVID-19 began, the review volume of Zoom’s competitors has dropped or only slightly risen while Zoom’s review volume has surged.
Before COVID, Zoom and Slack were fairly even at 22 and 20 average daily reviews, respectively. After COVID, Zoom averaged 40 daily reviews and Slack’s average dropped to 12 daily reviews. Zoom’s review volume peaked in early October at a whopping 95 reviews per day.
Stock Prediction
Review Data & Stock Correlation
Zoom’s rapidly growing review volume is reflected in its stock price. When the noise of both the stock price and review volume data is smoothed with a 20-day moving average, the two have a 0.51 correlation coefficient. So, generally, when one goes up so does the other.
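That smoothing-and-correlation step is straightforward in pandas. A sketch, assuming a DataFrame with one row per trading day (the file and column names are our assumption):

```python
import pandas as pd

# One row per trading day; the file and column names are assumptions.
df = pd.read_csv("zoom_daily.csv", parse_dates=["date"], index_col="date")

# Smooth out day-to-day noise with a 20-day moving average, then correlate.
price_ma = df["close"].rolling(window=20).mean()
reviews_ma = df["review_volume"].rolling(window=20).mean()
print(price_ma.corr(reviews_ma))  # ~0.51 in our data
```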
Stocks overall have performed well since the lockdown due to fiscal and monetary stimulus, but few have performed as well as Zoom. Zoom’s stock price has grown four times as much as its closest competitor’s since the lockdowns started, and eight times as much since mid-2019. Zoom’s average daily stock return grew from 0.5% before the pandemic to 1.1% after it began.
The link between Zoom’s review volume and stock price is not the only interesting correlation. Zoom’s review volume and trading volume have a correlation coefficient of 0.39.
There are strong similarities between Zoom’s review volume and trading volume, but the two curves are a bit out of sync. Shifting review volume back 12 days raises the correlation coefficient to 0.47. This relationship could be used to reaffirm buy/sell signals from On-Balance Volume (OBV) or similar indicators.
There is also a relationship between Zoom’s review volume and stock price volatility.
If we shift Zoom’s review volume back 27 days, the correlation coefficient with volatility goes from 0.06 to 0.51.
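Both lagged correlations boil down to the same pandas pattern. A sketch, reusing the assumed daily DataFrame from above; the shift direction reflects our reading that reviews lead the market data:

```python
import pandas as pd

df = pd.read_csv("zoom_daily.csv", parse_dates=["date"], index_col="date")

def lagged_corr(leading, target, lag):
    # shift(lag) aligns the leading series' value from `lag` days ago
    # with today's value of the target series.
    return leading.shift(lag).corr(target)

print(lagged_corr(df["review_volume"], df["trading_volume"], 12))  # ~0.47
print(lagged_corr(df["review_volume"], df["volatility"], 27))      # ~0.51
```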
Methodology
Because there’s a clear relationship between Zoom’s online reviews and its stock data, we set out to use the review data to predict Zoom’s stock price movements. The goal was to build a machine learning model with decent accuracy and determine which features (variables) were most predictive.
Along with the review volume, review rating, the quality issues metric and basic stock price data, we used some of the most common technical indicators, including simple and exponential moving averages, Moving Average Convergence Divergence (MACD), Stochastic Oscillator, Bollinger Bands and Relative Strength Index (RSI).
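We don’t spell out the exact windows for these indicators above, so this sketch of a few of them uses the conventional default parameters:

```python
import pandas as pd

df = pd.read_csv("zoom_daily.csv", parse_dates=["date"], index_col="date")

# Simple and exponential moving averages of the closing price.
df["sma_20"] = df["close"].rolling(window=20).mean()
df["ema_20"] = df["close"].ewm(span=20, adjust=False).mean()

# MACD: 12-day EMA minus 26-day EMA, with a 9-day signal line.
ema_12 = df["close"].ewm(span=12, adjust=False).mean()
ema_26 = df["close"].ewm(span=26, adjust=False).mean()
df["macd"] = ema_12 - ema_26
df["macd_signal"] = df["macd"].ewm(span=9, adjust=False).mean()

# RSI over a 14-day window (simple-moving-average variant).
delta = df["close"].diff()
gain = delta.clip(lower=0).rolling(window=14).mean()
loss = (-delta.clip(upper=0)).rolling(window=14).mean()
df["rsi_14"] = 100 - 100 / (1 + gain / loss)
```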
We also built another feature to add context to the review data. We used Python’s TextBlob package to assign a sentiment score to each review’s text. The sentiment scores range from -1 to 1, with -1 being perfectly negative, 1 being perfectly positive and 0 being neutral. The sentiment scores are roughly normally distributed and centered around 0.25.
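Scoring each review with TextBlob takes only a few lines. A sketch, assuming the same reviews DataFrame as in the earlier snippet:

```python
import pandas as pd
from textblob import TextBlob

reviews = pd.read_csv("zoom_reviews.csv")

# TextBlob's polarity score ranges from -1 (negative) to 1 (positive).
reviews["sentiment"] = reviews["text"].apply(
    lambda text: TextBlob(str(text)).sentiment.polarity
)
```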
The data was split into training and test datasets, with the test dataset being the last 75 days of trading and the training dataset being the 277 trading days before that.
Several machine learning models were trained and tested, including Logistic Regression, Decision Tree, Gaussian NB, Random Forest, Adaboost and a Recurrent Neural Network (RNN)/LSTM. The variable being predicted was the following day’s price direction (up or down) from the opening price to the closing price.
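The label and the chronological split are simple to construct. A sketch, assuming daily open and close columns in the DataFrame from the earlier snippets:

```python
import pandas as pd

df = pd.read_csv("zoom_daily.csv", parse_dates=["date"], index_col="date")

# Target: 1 if the next day's close is above its open ("up"), else 0.
df["target"] = (df["close"].shift(-1) > df["open"].shift(-1)).astype(int)
df = df.iloc[:-1]  # the final row has no following day to predict

# Chronological split: the last 75 trading days are held out for testing.
train, test = df.iloc[:-75], df.iloc[-75:]
```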
In the entire dataset, 51.4% of the days had a price direction of “up.” So, if we used a model that guessed the majority class (up) every time, we would be correct about 51% of the time. We used this baseline accuracy as the accuracy to beat.
Results
Out of the standard machine learning models, the Adaboost with a Random Forest base estimator performed the best with an accuracy of 53.3%. Although this accuracy leaves something to be desired, this particular model provides us with an opportunity to evaluate the predictive power of our variables.
With decision tree models like the Random Forest, feature importances can be calculated to determine each variable’s impact on the accuracy of the model. Out of all the features used in the model, the sentiment score and review volume ranked 2nd and 4th most important in terms of predictive power. Our quality issues metric did not perform as well as hoped.
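A sketch of that model and its feature ranking with scikit-learn, building on the split above; the tree counts are placeholders:

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

# Build feature matrices from the chronological split sketched earlier.
feature_cols = [c for c in train.columns if c != "target"]
X_train, y_train = train[feature_cols], train["target"]

# Tree counts are placeholders; older scikit-learn versions spell the
# first argument `base_estimator` rather than `estimator`.
model = AdaBoostClassifier(
    estimator=RandomForestClassifier(n_estimators=100),
    n_estimators=50,
)
model.fit(X_train, y_train)

# Rank features by their contribution to the boosted trees' splits.
for name, importance in sorted(
    zip(feature_cols, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
):
    print(f"{name}: {importance:.3f}")
```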
The standard machine learning models resulted in mediocre accuracy and were quite overfit. The RNN/LSTM model improved prediction accuracy significantly. The accuracy improvement is most likely due to the RNN’s ability to learn patterns across time. We also added dropout layers to reduce the problem of overfitting.
Below you can see the architecture of the Recurrent Neural Network. In this simple design, each LSTM layer is followed by a Dropout layer. The Dense output layer has a sigmoid activation function, which keeps outputs in the range of 0 to 1. An output greater than 0.5 is a prediction of “up” and an output less than or equal to 0.5 is a prediction of “down.” The learning rate was 0.01, the loss function was binary cross-entropy and the optimizer was Adam. The model was trained over 400 epochs with a batch size of 15 samples. The data was preprocessed to reflect a 10-day look-back period.
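Here is a Keras sketch of that architecture. The hyperparameters named above (learning rate 0.01, binary cross-entropy, Adam, 400 epochs, batch size 15, 10-day look-back) come from the text; the number of LSTM layers, their unit counts and the dropout rate are not specified, so those values are illustrative, as is the feature preparation built on the earlier split (features should be scaled first, which is omitted here):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

LOOKBACK = 10  # 10-day look-back period

def make_sequences(features, labels, lookback=LOOKBACK):
    """Reshape (days, features) data into (samples, lookback, features)."""
    X, y = [], []
    for i in range(lookback, len(features)):
        X.append(features[i - lookback:i])
        y.append(labels[i])
    return np.array(X), np.array(y)

X_train, y_train = make_sequences(train[feature_cols].values,
                                  train["target"].values)
X_test, y_test = make_sequences(test[feature_cols].values,
                                test["target"].values)

# Each LSTM layer is followed by a Dropout layer; layer sizes and the
# dropout rate are illustrative, not prescribed by the post.
model = Sequential([
    LSTM(64, return_sequences=True,
         input_shape=(LOOKBACK, X_train.shape[2])),
    Dropout(0.2),
    LSTM(32),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),  # >0.5 means "up", otherwise "down"
])
model.compile(
    optimizer=Adam(learning_rate=0.01),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=400, batch_size=15)
```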
If we look at the accuracy over all epochs, we see that the training and validation accuracy converge over time. Adding the Dropout layers greatly reduced overfitting. The validation accuracy peaked at epoch 339, at 59%.
If we look at the loss over all epochs, we will notice something strange. Although the validation accuracy improved throughout training, the validation loss drifted upward. This is unusual, but a bit of research revealed that it is due to the model making worse and worse predictions on samples where it was already wrong. For example, let’s say the correct prediction for a sample was “down.” The sigmoid function would output 0.60 (up) in the first epoch, then 0.70 in the next epoch, 0.80 in the epoch after that, and so on. The classification was wrong the whole time, but the model became more and more confident in that wrong answer, which drives the loss up without changing the accuracy.
Although the validation loss drifted slightly upward, the epoch where the model accuracy peaked (339) coincided with a local minimum in validation loss.
Below we can see a confusion matrix of the test dataset predictions. The model is more reluctant to predict up days than down days. However, when the model does predict up days, it does so with greater accuracy. The model precision (i.e. true positives over total predicted positives) is 77.8%.
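Both numbers can be reproduced with scikit-learn, assuming the trained model and test sequences from the sketch above:

```python
from sklearn.metrics import confusion_matrix, precision_score

# Threshold the sigmoid outputs: >0.5 -> up (1), otherwise down (0).
y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()

print(confusion_matrix(y_test, y_pred))
print(precision_score(y_test, y_pred))  # 77.8% in our run
```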
Conclusion
Zoom’s success since the COVID-19 pandemic began has caught the attention of workers, technologists and investors. Online review data clearly shows Zoom accelerating past its competitors.
As we showed with our analysis, there are many relationships between Zoom’s online review data and its stock data. Review volume and sentiment scores were particularly useful in predicting Zoom’s stock price movements. Our quality issues metric didn’t make a significant impact on the model, but it demonstrates the ability to discover meaningful topics within the data.
Even with a simple Recurrent Neural Network architecture, we achieved 59% accuracy and 77.8% precision. We hope these results inspire you to take your projects to the next level with online review data.
Raw data
Feel free to download the raw data that made this analysis possible. Alternatively, run the code directly from Google Colab, or download the .ipynb notebook file and upload it to a Jupyter notebook environment. If you run the code in Colab, you can simply upload the data files from the panel on the left side of the notebook. Let us know what you find!