Understanding Natural Language

Falafel Software Bloggers · Published in Falafel Software · Jan 25, 2017

Isaac Asimov speculated that you could plug a politician’s speech into a mathematical model, zero out the equation, and prove that the politician had said nothing. We know this intuitively, but I never thought you could actually do it. The Natural Language API from the Google Cloud Platform comes close by measuring sentiment found in text. The Natural Language API sentiment score ranges from –1.0 (negative emotion) to 1.0 (positive emotion). Ever watch HGTV? This may sound familiar:

“The kitchen is so cramped. I really detest the backsplash and I absolutely hate the cabinets. I’ve never seen such a poor excuse for a kitchen.”

Emphatically negative. They’ll have to rip everything out of their first-world kitchen and start over. The flip side is equally strong, but positive:

“The kitchen is the best ever! The spacious layout is perfect and has everything for our needs.”

We’ll take a look at the three major features of the Natural Language API: sentiment analysis to measure feeling and intent, entity analysis to identify people/places/things mentioned in the text, and syntactic analysis to describe the underlying linguistic structure of the text.

Sentiment Analysis

We’ll get to the code in a sec, but first you can try the API right in the browser on Google’s Natural Language product page.

The scores for each sentence of that negative rant are clearly negative. The magnitude, measured from 0.0 to infinity, indicates the strength of feeling. Now once more (with feeling): the strong positive statement from earlier scores just as emphatically in the other direction.

The API returns a score and magnitude for each sentence and for the entire document. Each sentence’s magnitude contributes to the document’s magnitude, so longer documents can reach higher magnitudes.

As with many of the Google Cloud APIs, you can hit the API directly over REST or use a C# wrapper. In these examples, I’m using the Google.Cloud.Language.V1 package from NuGet to install the C# wrappers. Once installed, you can use the Google.Cloud.Language.V1 namespace and its LanguageServiceClient class to access the key methods.
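If you’re working in Visual Studio, the Package Manager Console command looks like this (the package was still prerelease when this was written, so you may need to allow prerelease versions):

Install-Package Google.Cloud.Language.V1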

The general pattern is to create a LanguageServiceClient, create a Document object for the text, and then call one of the LanguageServiceClient methods: AnalyzeSentiment(), AnalyzeEntities() or AnalyzeSyntax(). Client methods each return a response object. For example, AnalyzeSentiment returns a Sentences collection in the response. Each sentence has text content, score and magnitude.

var response = client.AnalyzeSentiment(doc);

foreach (var sentence in response.Sentences)
{
    Console.WriteLine(columns,
        sentence.Text.Content,
        sentence.Sentiment.Score,
        sentence.Sentiment.Magnitude);
}

Note: You can assign text to the Document Content property directly, or set the GcsContentUri property to point at a file in Google Cloud Storage. The Document class lives in the Google.Cloud.Language.V1 namespace.
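For example, a Document that reads its text from Cloud Storage instead of inline content might look like this (the bucket and file name here are made up):

// read the text from Google Cloud Storage instead of assigning it inline
// (the gs:// path below is a made-up example)
var gcsDoc = new Document()
{
    GcsContentUri = "gs://my-bucket/kitchen-review.txt",
    Type = Document.Types.Type.PlainText
};

Here’s the complete sentiment example from start to finish: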

using Google.Cloud.Language.V1;

const string columns = "{0,-30}{1,10}{2,10}";

// create a client with default settings
var client = LanguageServiceClient.Create();

// build the document to analyze
var doc = new Document()
{
    Content = "The kitchen is the best ever! The layout is perfect.",
    Type = Document.Types.Type.PlainText
};

// analyze sentiment in the doc
var response = client.AnalyzeSentiment(doc);

Console.WriteLine(columns, "", "Score", "Magnitude");

// score and magnitude for each sentence
foreach (var sentence in response.Sentences)
{
    Console.WriteLine(columns,
        sentence.Text.Content,
        sentence.Sentiment.Score,
        sentence.Sentiment.Magnitude);
}

// score and magnitude for the document as a whole
Console.WriteLine(columns,
    "Document Sentiment",
    response.DocumentSentiment.Score,
    response.DocumentSentiment.Magnitude);

The output shows the score and magnitude for each sentence and for the document as a whole.

Caveat: The API was just released in July 2016, so it’s early days yet. Sentiment analysis is still vulnerable to misunderstanding, particularly for small amounts of text. “I’d really hate to lose the kitchen” scores a -3 sentiment while “I’d really love to lose the kitchen” is a +4. But the API isn’t a special-purpose one-off. It’s an outgrowth of Google’s Machine Learning product, so my expectation is that accuracy will improve.

Entities

AnalyzeEntities() identifies key objects in the text, along with their importance (salience) and their types (e.g. person, organization, location, event). Each instance of an entity is a Mention that includes its position in the text. The text example is a little longer and has repeated mentions of “Google” and “California”.
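Here’s a minimal sketch of the call; the sample text below is a stand-in I wrote for this example, not the exact text from the original sample:

using System.Linq;
using Google.Cloud.Language.V1;

var client = LanguageServiceClient.Create();

// a stand-in sample text with repeated mentions of "Google" and "California"
var doc = new Document()
{
    Content = "Google was founded in California, and Google still keeps " +
              "its headquarters in Mountain View, California.",
    Type = Document.Types.Type.PlainText
};

// identify the entities in the doc
var response = client.AnalyzeEntities(doc);

// list the entities in descending order of salience
foreach (var entity in response.Entities.OrderByDescending(e => e.Salience))
{
    Console.WriteLine("{0,-15}{1,-15}{2,10:F2}",
        entity.Name,
        entity.Type,
        entity.Salience);
}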

Here’s the output shown in descending order by salience. This example only lists the entities, but you can rummage through the mentions for each entity as well.
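If you do want to dig into the mentions, a quick sketch (reusing the same response) looks like this:

// each entity carries a collection of Mentions with the text and position of every occurrence
foreach (var entity in response.Entities)
{
    foreach (var mention in entity.Mentions)
    {
        Console.WriteLine("{0,-12} \"{1}\" at offset {2}",
            entity.Name,
            mention.Text.Content,
            mention.Text.BeginOffset);
    }
}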

Syntax Analysis

To tease meaning from a document, such as sentiment or entities, there’s grunt work to be done first — extracting sentences and words, labeling parts of speech, and mapping relationships between words. AnalyzeSyntax does this fundamental work for you by examining a document and reporting its linguistic makeup.

AnalyzeSyntax extracts an array of Tokens, where tokens are the words and punctuation that make up a sentence. Each Token has a TextSpan with its text content and position, a PartOfSpeech to describe its function in the sentence, a DependencyEdge to map the token’s relationship to other tokens, and a Lemma. The lemma is the base word that other words are formed from; for example, “run” is the lemma for “running” and “ran”.
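As a quick sketch of those last two fields, here’s how you might read the lemma and dependency edge off each token (assuming a response from AnalyzeSyntax like the one in the full example below):

// show each token's lemma and its dependency on its head token
foreach (var token in response.Tokens)
{
    Console.WriteLine("{0,-10} lemma: {1,-10} head: {2} ({3})",
        token.Text.Content,
        token.Lemma,
        token.DependencyEdge.HeadTokenIndex,
        token.DependencyEdge.Label);
}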

The PartOfSpeech type describes a token’s role in terms of case, gender, mood, tense and so on.

Here’s an example that slices and dices the phrase “It was the best of times, it was the worst of times”. Reflection plus LINQ list the parts of speech as a text description; any parts of speech with the value Unknown are left out.

using System.Linq;
using System.Reflection;
using Google.Cloud.Language.V1;

// create the Document object
// Document is defined in the Google.Cloud.Language.V1 namespace
var doc = new Document()
{
    Content = "It was the best of times, it was the worst of times",
    Type = Document.Types.Type.PlainText
};

// create a client with default settings
var client = LanguageServiceClient.Create();

// analyze syntax in the doc
var response = client.AnalyzeSyntax(doc);

// list tokens, using reflection to describe each token's parts of speech
var tokens =
    from t in response.Tokens
    let partsOfSpeech =
        from prop in t.PartOfSpeech
            .GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
        where !prop.GetValue(t.PartOfSpeech).ToString().Equals("Unknown")
        select prop.Name + "(" + prop.GetValue(t.PartOfSpeech) + ")"
    select new
    {
        Offset = t.Text.BeginOffset.ToString(),
        Text = t.Text.Content,
        Description = string.Join(", ", partsOfSpeech)
    };

// print the list of tokens and descriptions
tokens.ToList().ForEach((token) =>
{
    Console.WriteLine("{0,6} {1,10} {2,-30}", token.Offset, token.Text, token.Description);
});

Here’s the output in the console window:

Where to from here?

I’m intrigued by the possibilities of the API all on its own. But there’s a natural affinity with related APIs (BigQuery, Speech, Vision Optical Character Recognition, and Translation) that could generate some really interesting mashups. For example, analyzing top news trends, scanning help desk email for trouble spots, or gauging reaction on social media (without using a star rating UI). I suppose the API could be used to react in real time to text as it’s produced, but the real value may be in getting actionable data from the large chunks of unstructured text found on servers all over the world.
