Encoding concepts, categories and classes for neural networks

Assaad MOAWAD
Published in DataThings
4 min read · Jul 30, 2018


In a previous post, we explained how neural networks can predict a continuous value (like a house price) from several features. One of the questions we received is how neural networks can encode concepts, categories or classes. For instance, how can a neural network convert a grid of pixels into a true/false answer about whether the underlying picture contains a cat?

First, here are some observations:

  • A binary classification problem is a problem with a “Yes/No” answer. Some examples include: Does this picture contain a cat? Is this e-mail spam? Is this application a virus? Is this a question?
  • A multi-class classification problem is a problem with several categories as an answer, like: What type of vehicle is this (car/bus/truck/motorcycle)?
  • Any multi-class classification problem can be converted into a series of binary classifications (like: Is this a car, yes or no? Is this a bus, yes or no? And so on.)
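The last observation can be sketched in a few lines of plain Python (no ML library needed; the class names are just an illustrative example): a single multi-class label becomes one yes/no answer per candidate class.

```python
# Illustrative vehicle classes (an assumption, matching the example above).
CLASSES = ["car", "bus", "truck", "motorcycle"]

def to_binary_questions(label):
    """Turn one multi-class label into a series of binary yes/no answers,
    one per candidate class: "Is this a car?", "Is this a bus?", etc."""
    return {c: (label == c) for c in CLASSES}

answers = to_binary_questions("bus")
# answers == {"car": False, "bus": True, "truck": False, "motorcycle": False}
```

Exactly one of the answers is "yes", which is what makes the decomposition equivalent to the original multi-class question.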

The core idea of classification in neural networks is to convert concepts, categories and classes into probabilities of belonging to these concepts, categories or classes.

This means that a cat is 100% cat, a dog is 100% dog, a car is 100% car, and so on. Each independent concept is a dimension of its own in the conceptual space. So for…
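The idea of each concept being its own dimension is commonly realized as one-hot encoding: a class becomes a probability vector with 100% in its own dimension and 0% everywhere else. A minimal sketch (the class list is an assumed example, not from the post):

```python
# Each class gets its own dimension in the conceptual space.
CLASSES = ["cat", "dog", "car"]

def one_hot(label):
    """Encode a class as a probability vector: 1.0 (100%) in the
    dimension belonging to that class, 0.0 in every other dimension."""
    return [1.0 if c == label else 0.0 for c in CLASSES]

one_hot("dog")  # [0.0, 1.0, 0.0]
```

A trained classifier then outputs a vector in this same space, where each entry can be read as the probability of belonging to the corresponding class.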
