A Beginner’s Guide to Machine Learning

Abd L-Rahman Sharaf
Knowledge Officer
May 9, 2019 · 6 min read

What is Machine Learning?

Since the onset of the post-industrialization era, people have worked to create machines that think like humans. The ‘thinking machine’ is artificial intelligence (AI)’s biggest gift to mankind, and its arrival has changed the rules of business. In recent years, self-driving vehicles, digital assistants, robotic factory staff, and smart cities have shown that intelligent machines are possible. AI has transformed industries such as retail, manufacturing, finance, healthcare, and media, and continues to reach new territories. Machine learning is an application of AI that helps create these ‘thinking machines’ by giving systems the ability to automatically learn and improve from experience without being explicitly programmed.


What is the Difference between Machine Learning and Deep Learning?

1- Machine Learning is the broad concept that covers all activities aimed at helping machines learn from datasets and make decisions in a way that resembles human behavior. It uses different learning techniques, such as supervised and unsupervised learning, each of which comes with its own algorithms such as logistic regression, linear regression, neural networks, etc.

2- Deep Learning is a subset of the larger field of Machine Learning that focuses on building deeper, more complex neural networks (which is where the word ‘deep’ comes from), using many hidden layers to detect complex patterns in datasets. Deep Learning uses the same learning techniques as Machine Learning, and most of the research work is on supervised techniques and algorithms. However, there is also a fair amount of research in unsupervised learning, such as the use of Auto-Encoders.

Supervised Versus Unsupervised Learning

Data scientists use a variety of machine learning algorithms to extract actionable insights from the data they’re provided. The majority of these insights are solutions to supervised learning problems, because you already know what you are required to predict: the data you are given comes labeled with the information you need to reach your end goal.

Most ML practitioners tend to use supervised data in their work, where every input is mapped to a target output by a human and the machine uses this mapping to build its decision-making model. Unfortunately, the majority of the data we have in the real world is NOT supervised: it comes in many forms, with only inputs and no target outputs. This is why academics have started doing research in the field of unsupervised learning, so that machine learning and deep learning algorithms can adapt to the much larger amount of unsupervised data available in the world and learn from it effectively.
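
To make “every input is mapped to a target output” concrete, here is a minimal supervised learning sketch using scikit-learn. The features, labels, and numbers are made up purely for illustration; they are not from any real dataset.

```python
# A minimal illustration of supervised learning: every input (hours studied,
# hours slept) is paired with a human-provided label (passed the exam or not),
# and the model learns the mapping from inputs to labels.
# The numbers below are made up purely for illustration.
from sklearn.linear_model import LogisticRegression

X = [[2, 9], [1, 5], [5, 6], [7, 8], [3, 4], [8, 7]]  # inputs (features)
y = [0, 0, 1, 1, 0, 1]                                # target outputs (labels)

model = LogisticRegression()
model.fit(X, y)                      # learn the input -> label mapping

print(model.predict([[6, 7]]))       # predict the label for a new, unseen input
```

The key point is that the labels in y had to be provided by a human; without them, this kind of model has nothing to learn from.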

Unsupervised learning is a complex challenge. But its advantages are numerous. It has the potential to unlock previously unsolvable problems and has, in fact, gained a lot of traction in the machine learning and deep learning community.

Why Unsupervised Learning?

Well, supervised learning presents practitioners with several challenges:

  • It limits the potential of algorithms as we tell the algorithm what to do and what not to do.
  • It takes huge manual effort to create labels for the algorithms.
  • It does not take into consideration other corner cases that could occur when solving the problem.

To solve these issues in an intelligent way, we can use unsupervised learning algorithms. These algorithms derive insights directly from the data itself by summarizing the data or grouping it, so that we can use these insights to make data-driven decisions.

Most practitioners and academics of machine and deep learning still rely on classic Machine Learning algorithms such as K-means or Gaussian mixture models when handling unsupervised data, but there has recently been a move towards developing unsupervised deep learning techniques that deliver better results than the traditional algorithms.
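
As a quick illustration of a classic unsupervised algorithm, here is a minimal K-means sketch with scikit-learn. The toy 2-D points are invented for the example; the algorithm sees only inputs and still discovers the two groups on its own.

```python
# Unsupervised learning with a classic algorithm: K-means receives only inputs
# (no labels) and groups them into clusters by similarity.
# The points below are toy data for illustration.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1],    # one natural group
              [8.0, 8.5], [7.8, 9.0], [8.3, 8.1]])   # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # centre of each discovered group
```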

Unsupervised Deep Learning

Unsupervised Deep Learning techniques are usually the most impactful where a lot of unstructured data is present. So, in this article, we will study two examples of unsupervised Deep Learning: the first applied to Image Processing and the second applied to Natural Language Processing.

Example 1: Image Processing

How to Organize a Photo Gallery?

Let’s assume you have a large number of images, say more than 2,000 images of factory regions captured by the surveillance cameras in your system, and the cameras sometimes take 10 or more shots of the same area.

Ideally, what you would want is an application that organizes the images in such a manner that you can have a look at most of the regions in the factory, without excessive repetition. This would actually give you context as to the different kinds of images that you have right now for the different regions in your factory.

You could ask some of your coworkers to label a portion of the images by region and then train a machine learning algorithm to learn from this mapping, but that is not what we want: we don’t want any human assistance for our algorithm. We want to solve this with an unsupervised technique.

A better way to organize the photos would be to extract semantic information from the image itself and use that information intelligently. If we look at state-of-the-art research in this area, we can see how to solve this problem with unsupervised deep learning algorithms.

Here, we can use a type of deep learning architecture called ‘Auto-Encoders’.

Let me give you a high-level overview of Auto-Encoders. The idea behind this algorithm is that you train it to recreate its own input. The catch is that it has to use a much smaller internal representation to do so.

For example, if an Auto-Encoder with its encoding size set to 10 is trained on images of cats, each of size 100×100, then the input dimension is 10,000 (100 × 100 pixels), and the Auto-Encoder has to represent all of this information in a vector of size 10 (as seen in the image below).

Auto-Encoder for Image Processing (© Knowledge Officer)

An Auto-Encoder can be logically divided into two parts: an encoder and a decoder. The task of the encoder is to convert the input to a lower dimensional representation, while the task of the decoder is to recreate the input from this lower dimensional representation.
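
Here is a minimal sketch of that encoder/decoder idea in Keras, following the numbers from the example above: 100×100 images flattened to 10,000 inputs and squeezed through a bottleneck of size 10. The hidden layer sizes and training settings are illustrative assumptions, not tuned values.

```python
# A minimal Auto-Encoder sketch in Keras, following the example above:
# 100x100 images are flattened to 10,000 inputs, squeezed into a vector of
# size 10 (the encoding), and then reconstructed back to 10,000 values.
# Layer sizes and training settings are illustrative, not tuned.
from tensorflow import keras
from tensorflow.keras import layers

input_img = keras.Input(shape=(10000,))

# Encoder: compress the input into a 10-dimensional representation
encoded = layers.Dense(256, activation="relu")(input_img)
encoded = layers.Dense(10, activation="relu")(encoded)

# Decoder: try to recreate the original input from those 10 numbers
decoded = layers.Dense(256, activation="relu")(encoded)
decoded = layers.Dense(10000, activation="sigmoid")(decoded)

autoencoder = keras.Model(input_img, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Training uses the images themselves as the targets, so no labels are needed:
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=128)
```

Once trained, the 10-dimensional encodings can be clustered (for example with K-means, as sketched earlier) to group visually similar factory regions together without any human labeling.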

Example 2: Natural Language Processing

Knowledge Officer Engine

What’s word embedding?

Word embedding is a term given to any method that converts words or text into numbers. We cannot feed raw text directly to our ML/DL models, so we first need to represent text as numbers. These numbers may simply indicate whether a certain word appears in a sentence (the bag-of-words approach). They may also take into consideration how frequent a word is in the whole corpus and in the sentence (as done in TF-IDF). Other techniques, such as Word2Vec, GloVe, and FastText, represent each word by its own vector rather than representing a whole sentence as a single vector. We will discuss two of these techniques, namely TF-IDF and Word2Vec.
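
Here is a minimal TF-IDF sketch with scikit-learn to show what “representing text as numbers” looks like in practice. The three sentences are made up for the example.

```python
# A minimal sketch of turning text into numbers with TF-IDF using scikit-learn.
# Each sentence becomes a vector; words that are frequent in one sentence but
# rare across the corpus get higher weights. The sentences are made up.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "machine learning helps machines learn from data",
    "deep learning builds deeper neural networks",
    "unsupervised learning works without labels",
]

vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(corpus)

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(tfidf_matrix.toarray())              # one weighted vector per sentence
```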

In the company where I work, Knowledge Officer, we use a Word2Vec model on our skills corpus, which contains 52K skills collected from many sources, to map each skill to an embedding vector that captures its semantic and syntactic meaning. This enables us to build a database of job posts and articles where these skills are mentioned.

Word2Vec trains words against the other words that neighbor them in the input corpus. It does so either by predicting a target word from its context (known as Continuous Bag of Words, CBOW) or by predicting the context given a target word (known as skip-gram). On larger datasets, we prefer the latter as it gives better accuracy.
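
Here is a minimal Word2Vec sketch using the gensim library, not our production setup, just an illustration of the two training modes: sg=1 selects skip-gram and sg=0 selects CBOW. The tiny corpus and parameter values are assumptions chosen only to make the example runnable.

```python
# A minimal Word2Vec sketch with gensim. sg=1 selects skip-gram (predict the
# context given the target word); sg=0 selects CBOW (predict the target from
# its context). The tiny corpus and parameters are purely illustrative.
from gensim.models import Word2Vec

sentences = [
    ["python", "machine", "learning", "engineer"],
    ["deep", "learning", "neural", "networks"],
    ["data", "science", "python", "statistics"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

vector = model.wv["python"]                         # the embedding for one word
similar = model.wv.most_similar("python", topn=3)   # semantically closest words
print(similar)
```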

These are just a few business applications of unsupervised machine learning. Future advancements in unsupervised ML/DL algorithms will lead to better business outcomes. Machine Learning and Deep Learning, especially the unsupervised kind, will become an integral part of all AI systems, large or small. Connected AI systems will enable ML algorithms to ‘continuously learn’ from newly emerging information on the internet.

If you’re interested in learning more about machine learning business applications, in building them yourself, and in staying up to date with the latest advancements in machine learning, you are highly encouraged to enroll in the ‘Machine Learning Engineer’ career goal on our Knowledge Officer app and start building your skills towards a rewarding career in machine learning.

About us

Knowledge Officer is a learning platform for professionals. Our mission is to empower a generation of lifelong learners and to help people, however busy, learn something new and relevant every day and achieve their career goals.

If you want to progress in your career and learn from the best people and the best resources on the internet, then try our mobile app and website and support our campaign on ProductHunt.

And we’d love to hear your thoughts! Send them to us at team@knowledgeofficer.com
