Yelp Review Sentiment Analysis (an excuse to study NLP and ML)
Today (actually about two weeks ago) I began expanding a prior project on sentiment analysis, which seemed like a great way to start learning about NLP and applied machine learning, two subjects I want to understand better. Here’s a link to the next repo/step in the project.
The Data and The Question(s):
The data for the project are Yelp! user reviews of locations, along with some information about those locations and users: the rating attached to each review, the number of reviews per user and per location, a location’s price point, food category, city, and so on. My question is whether the data yield any interesting correlations when analyzed. For example, is the correlation between the sentiment-analysis-calculated rating and the actual Yelp rating stronger when a location or a single user has more (or fewer) reviews, when the user is ‘elite’, in certain regions of the country, or for certain food categories or price points? Such correlations could lend themselves to a number of interesting explanations, whether about Yelp, certain kinds of locations, users, or geographies, or about biases in the sentiment analysis itself.
Motivation and Background:
Initially, I envisioned using the distribution of sentiment analysis scores across a location’s reviews to find a ‘more objective’ rating for that location. Because review scores are supplied by users, a location’s rating calculated from those user-submitted scores is subject to great fluctuation, and the system pressures users to translate their feelings into a rating out of 5 rather than express them in the language they would naturally use.
While relying on users to supply truthful and accurate descriptions of their feelings is open to a similar critique (they could be dishonest, or unaware of how ‘strong’ or ‘weak’ their language should be to match their feelings), qualitative language is unquestionably both more intuitive and more nuanced than a clunky rating. That rating is, after all, a shortcut, and in shortcutting it may gloss over important distinguishing features among the reviews.
So, perhaps users’ qualitative evaluations can give us more confidence and accuracy in rating a place, because natural language is more comfortable territory for users than an arbitrary numerical rating. At the least, the words of a review provide more data points by which to evaluate it, and thus the possibility of far greater differentiation among review scores.
Finally, for the blog:
Oh blog, how I have neglected thee! Here we are, in week 17, feeling like old acquaintances who hoped to not again cross paths. Though there is still hope for us, for lack of blog does not mean lack of progress. Rather, there is lack of blog for wealth of progress — progress from growth and projects that have kept me busy. Below is the story of one.