Social Science — Text Mining on Twitter Data

Sammy Deprez
Published in Data Fish
Jan 30, 2017

In my previous blog post I talked about how to get Twitter Data into Power BI. This article will discuss how text mining can be done on that data with some R code, together with some Power BI Visuals.

At the end of the article we will have built an interactive word cloud showing the frequency of words.

In case you don’t know how to get Twitter data into Power BI, I would like to refer you to my previous post.

So last time our R script stopped with our data frame ‘df_tweets’. But for text mining we first need to clean up all our tweets.

We don’t need:

  • links to webpages
  • usernames
  • double spaces
  • retweet tags like ‘RT’
  • stop words: common words that carry little significance, such as “a”, “and”, “in”, “I”, …
  • the word ‘powerbi’ itself, since it is our search term and doesn’t give extra meaning to our data

Let’s look at the code:

[code]
# Load stringr for str_replace_all()
library(stringr)

# Copy the original text into a new column
df_tweets$cleansedText <- df_tweets$text

# Remove non-graphical characters (emoji, control characters)
df_tweets$cleansedText <- str_replace_all(df_tweets$cleansedText, "[^[:graph:]]", " ")

# Convert tweets to lower case
df_tweets$cleansedText <- tolower(df_tweets$cleansedText)

# Remove the retweet tag "rt"
df_tweets$cleansedText <- gsub("\\brt\\b", "", df_tweets$cleansedText)

# Remove usernames
df_tweets$cleansedText <- gsub("@\\w+", "", df_tweets$cleansedText)

# Remove punctuation
df_tweets$cleansedText <- gsub("[[:punct:]]", "", df_tweets$cleansedText)

# Remove links
df_tweets$cleansedText <- gsub("http\\w+", "", df_tweets$cleansedText)

# Remove runs of spaces and tabs
df_tweets$cleansedText <- gsub("[ |\t]{2,}", "", df_tweets$cleansedText)

# Remove blank spaces at the beginning
df_tweets$cleansedText <- gsub("^ ", "", df_tweets$cleansedText)

# Remove blank spaces at the end
df_tweets$cleansedText <- gsub(" $", "", df_tweets$cleansedText)
[/code]

What does it do?

The first line creates a new column (cleansedText) in our data frame as a copy of our tweet text. That way we do the cleanup on a copy and keep the original, which might be useful later.

The next lines convert everything to lower case and remove ‘RT’ (retweet), usernames, punctuation, links, tabs and leading/trailing blanks. All of this is done with the gsub() function, a base R function for text replacement with regular expressions.

The following tweet: The latest The #PowerBI Daily! https://t.co/2ZCFOhO288 #powerbi #bi

would become: the latest the powerbi dailypowerbi bi
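You can verify this on a single string by running the same substitutions yourself (a quick sketch using the patterns above, skipping the steps that don’t apply here):

[code]
tweet <- tolower("The latest The #PowerBI Daily! https://t.co/2ZCFOhO288 #powerbi #bi")
tweet <- gsub("@\\w+", "", tweet)          # usernames (none in this tweet)
tweet <- gsub("[[:punct:]]", "", tweet)    # punctuation, including '#' and the ':' and '/' in the URL
tweet <- gsub("http\\w+", "", tweet)       # what is left of the link
tweet <- gsub("[ |\t]{2,}", "", tweet)     # runs of spaces/tabs
tweet
# "the latest the powerbi dailypowerbi bi"
[/code]

Note that because punctuation is stripped before the link, the words around the URL get glued together (‘dailypowerbi’); swapping those two gsub() calls, or replacing the space runs with a single space, would avoid that.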

The last cleanup step is removing stop words. To do this we need to convert our data frame to a Corpus, a class from the ‘tm’ package that can be seen as a library of documents (in our example, tweets).

[code]
# Load the tm package for Corpus(), tm_map() and stopwords()
library(tm)

# Create Corpus
tweet_corpus <- Corpus(VectorSource(df_tweets$cleansedText))

# Remove English stop words and the search term itself
tweet_corpus <- tm_map(tweet_corpus, function(x) removeWords(x, c(stopwords("english"), search_term)))
[/code]

Now that our data is cleansed, we need a Term Document Matrix (TDM). A TDM is a matrix with one row per word that occurs in the collection and one column per document (tweet); each cell holds how often that word occurs in that tweet.

[caption id="attachment_139" align="alignnone" width="287"]
An example TDM
[/caption]

[code]
tweet_tdm <- TermDocumentMatrix(tweet_corpus)
[/code]
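To get a feel for the structure, here is a toy TDM built from two tiny “documents” (not part of the report script; the texts are made up for illustration):

[code]
library(tm)

toy_corpus <- Corpus(VectorSource(c("power query rocks",
                                    "text mining with power query")))
toy_tdm <- TermDocumentMatrix(toy_corpus)

# One row per term, one column per document; cells hold the term counts
as.matrix(toy_tdm)
[/code]

Note that by default TermDocumentMatrix() drops words shorter than three characters, which is one more reason very short tokens like ‘bi’ may not show up.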

As you might imagine, depending on how much text you are analysing, this TDM can become very big. What we would like to do is get a word cloud in Power BI that shows us the most common words.

We aggregate our TDM, grouping by word and dropping the per-tweet detail. By converting it to a matrix we can aggregate with the rowSums() function and then sort descending on the frequency of the words.

[code]
tweet_tdm_m <- as.matrix(tweet_tdm)
tweet_tdm_m <- sort(rowSums(tweet_tdm_m),decreasing = TRUE)
[/code]

Again, depending on your tweets or documents this might give you thousands of words, so for our report we only want the top 150. We create a new data frame and keep the top 150 rows. As a last step I add a WordId column filled with a sequence number, because I would like to link the words back to the tweets. That way I will be able to click a certain word in my Power BI report and see the matching tweets.

[code]
df_wordcount <- data.frame(keyName=names(tweet_tdm_m), value=tweet_tdm_m, row.names=NULL)
df_wordcount_top <- head(df_wordcount, 150)
df_wordcount_top$WordId<-seq.int(nrow(df_wordcount_top))
[/code]

I do the same for my tweets data frame: I add a TweetId column with a sequence number.

[code]
df_tweets$TweetId<-seq.int(nrow(df_tweets))
[/code]

The last step is to create a mapping table between the tweets and the words. The grep() function is executed for each word; it looks up which tweets make use of that word. At the end we have a data frame with two columns, TweetId and WordId.

[code]
# Tweet/word mapping: for each word, find the tweets that contain it
list_tweet_word_map <- sapply(1:nrow(df_wordcount_top), function(x) grep(df_wordcount_top[x, "keyName"], df_tweets$cleansedText))
word_map_names <- c("TweetId", "WordId")
df_tweet_word_map <- NULL
for (x in 1:nrow(df_wordcount_top)) {
  df_tweet_word_map <- rbind(df_tweet_word_map, setNames(data.frame(list_tweet_word_map[x], x), word_map_names))
}
df_tweet_word_map <- df_tweet_word_map[, word_map_names]
[/code]
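To see what grep() returns here, a tiny standalone example:

[code]
grep("daily", c("the latest daily", "powerbi tips", "my daily digest"))
# 1 3  (the indices of the texts that contain the word)
[/code]

One caveat: grep() matches substrings, so the word ‘power’ would also match tweets that only contain ‘powerbi’; wrapping the pattern in \\b word boundaries would make the match stricter.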

Now let’s load all of this into Power BI and make a nice report.

  1. Open Power BI
  2. Click Get Data
  3. Choose R Script
  4. Copy the code into the text box (the full code can be found here)
  5. In the navigator select the following three data sets and click Load:

Data is loaded! By default Power BI does not have a word cloud visual, but there is one in the Custom Visual Gallery that you can download and use. (Find here how to use a custom visual.)

I’m not going to walk through the whole process of building the Power BI report, but I made a small video showing how I did it and what it looks like.

This was a small introduction to text mining with R and visualising the results in Power BI. Of course you can also make your visuals directly in R, but then you lose the interactivity with the other visuals in your report.
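For example, a minimal sketch of a static word cloud rendered directly in R, assuming the ‘wordcloud’ and ‘RColorBrewer’ packages are installed:

[code]
library(wordcloud)
library(RColorBrewer)

# Plot the top words sized by their frequency
wordcloud(words = df_wordcount_top$keyName,
          freq = df_wordcount_top$value,
          max.words = 150,
          colors = brewer.pal(8, "Dark2"))
[/code]

This gives you a quick sanity check of the word frequencies, but unlike the Power BI custom visual it is a static image: clicking a word won’t filter the other visuals.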

Our final report:
