
Database scaling and elasticity are vitally important to the successful operation of any software application. A failure to scale adequately can impair a business because applications run slowly or become unavailable. Databases must be kept online and operational to protect the business. Below we talk about database scaling in MySQL.

Scaling a production database becomes important when the number of customers performing read-write operations at any given moment (within seconds or minutes) suddenly grows beyond the capacity of your database. …
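To make the read-write framing concrete, a common first step in scaling MySQL is to route read-only queries to a replica so the primary handles only writes. Here is a minimal Python sketch of that routing, assuming the PyMySQL client and hypothetical host names:

import pymysql  # assumption: the PyMySQL client library

# Hypothetical hosts: one write primary and one read replica
PRIMARY = dict(host="db-primary.example.com", user="app", password="***", db="shop")
REPLICA = dict(host="db-replica.example.com", user="app", password="***", db="shop")

def get_connection(readonly=False):
    # Send read-only traffic to the replica to offload the primary
    return pymysql.connect(**(REPLICA if readonly else PRIMARY))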



The three most popular yet underrated algorithms in computer science are Heap Sort, Merge Sort, and Quicksort. It seems quite contradictory: how could something be popular yet underrated? That is exactly what will be explained. Almost all CS students have studied and learned these three algorithms in pseudocode, but only a few ever implement them. This is probably because we concentrate on the algorithms and their running times rather than on their implementation.

There is a wealth of material and references covering the pseudocode and runtime analysis in detail. This article will focus more on the implementation than on the algorithm itself, and Python will be used for the implementation. …
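As a taste of what is to come, here is a minimal Python sketch of one of the three, Quicksort (the pivot choice and function name are illustrative, not the article's exact code):

def quicksort(arr):
    # Base case: lists of length 0 or 1 are already sorted
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]  # choose the middle element as the pivot
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    # Recursively sort the partitions and concatenate the results
    return quicksort(left) + middle + quicksort(right)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]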



Many of our readers may have heard about BERT, the acronym for Bidirectional Encoder Representations from Transformers. Some might have used a pre-trained BERT model for fine-tuning on their own tasks with their own datasets. However, according to a recent survey, even among heavy NLP users, very few understand the architecture of BERT.

The original idea for BERT came from earlier advanced work on Transformers. A detailed discussion of what a Transformer is and how it works appeared in the 2017 paper "Attention Is All You Need," a joint effort of the Google Brain team and the University of Toronto. That paper was in turn inspired by Google's work on Neural Machine Translation (NMT). The core idea behind the Transformer model is self-attention: the ability to attend to different positions of an input sequence in order to compute a representation of that sequence. In this regard, Transformers are similar to human vision. When a person's visual cortex detects an object and its surroundings, it does not typically scan the entire object and scene; rather, it focuses on a specific feature or portion of the item depending on what the person is searching for. …
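To make self-attention concrete, here is a minimal NumPy sketch of the scaled dot-product attention described in "Attention Is All You Need" (the weight matrices and shapes are illustrative assumptions, not BERT's actual parameters):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project the input sequence into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each position attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V  # each output is a weighted mix of all positions

# Example: a sequence of 4 positions with model width 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)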



The types of videos served by YouTube's recommendations have been in the news recently due to some controversial content. Some viewers are delighted to watch fun video after fun video, neglecting other important tasks as they dive down a rabbit hole, but there is a concern that some videos, both good and bad, are being watched constantly by children. And some researchers and news organizations have found that the algorithms driving the recommendations may lead to recommending very extreme content, even to children. Whatever your personal feelings about YouTube content, most will agree that its recommendation engine is an extremely engaging tool, and all of us will from time to time be driven to mindlessly watch one video after another. But have you noticed that occasionally a seemingly random video will appear in your video stream? One unrelated to your explicit likes or dislikes. …



In 2014, a group at Baidu's AI lab, including my friend Awni Hannun, who is currently working at Facebook AI Research, published an interesting and influential paper entitled "Deep Speech: Scaling up end-to-end speech recognition."

Primarily concerned with the mechanics of taking audio as input and converting it to text, this paper was followed by a series of further papers in 2015 and beyond discussing other aspects of this novel approach to speech recognition. The discussion below is based on Deep Speech 2, the second paper in this series. …



The concept of serverless computing is not new. It came into existence around the same time as Kubernetes in 2014, but only recently has its popularity markedly increased as the demand for dynamic scalability and the desire to avoid infrastructure tasks have grown. Each provider calls serverless computing by a different name: in Azure it is called Functions (fx), in GCP it is Cloud Functions, and in AWS it is known as Lambda, but behind the scenes they all work roughly the same way.

In the demonstration below, we will be using Azure Functions (fx), but we will use AWS for our deployed service, as in the past series of blog posts. Let us begin with the simple "Hello world!" Azure template. First, create a directory on your local machine where you want the work to reside. Once you have created the directory, you can initialize an Azure Function project using the following command: func init --worker-runtime python. …
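For reference, the HTTP-triggered "Hello world!" handler that such a project contains looks roughly like this (a minimal sketch; the greeting and query-parameter name are illustrative):

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Read an optional ?name= query parameter, defaulting to "world"
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello {name}!")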



BERT (Bidirectional Encoder Representations from Transformers), the revolutionary new method for improving results on various NLP (Natural Language Processing) tasks, has now been with us for more than a year. There is great excitement around attention networks, and new uses of the BERT model on NLP tasks seem to be published continually. Many researchers simply take a pre-trained model and retrain it for sentence embedding in order to perform sentence-similarity tasks. Han Xiao from Tencent AI Lab has created bert-as-service, an open-source project for sentence embedding, and I have been using this service for almost a year since its release. …
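For those who have not tried it, the client side of bert-as-service takes only a few lines (a sketch, assuming a server is already running with a downloaded BERT checkpoint):

from bert_serving.client import BertClient

bc = BertClient()  # connects to a running bert-as-service server
vectors = bc.encode(["First sentence.", "Second sentence."])
print(vectors.shape)  # one fixed-length embedding vector per sentence

Cosine similarity between two such vectors then gives a simple sentence-similarity score.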



One of the most frequently asked questions on Stack Overflow and in GitHub repositories about Keras concerns model conversion. For quick testing, and because pre-trained image-classifier models are readily available, most developers and researchers tend to use Keras. Image-classifier models such as VGG16/19, Inception, ResNet, etc., are easy to download using the Keras library. In most cases these models are used as the backbone architecture: computer vision tasks such as image localization and image segmentation (Mask R-CNN) use an image-classifier model as their backbone.

An image-classifier model from Keras can only be saved in h5 format, the native Keras model format. However, when this model has to be used for inference, it should be in the protocol buffer format [Note: an h5 model can be served using Flask]. …
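One way to perform this conversion in TensorFlow 2.x is shown below (a sketch; the file paths are illustrative):

import tensorflow as tf

# Load the Keras model from its native h5 format
model = tf.keras.models.load_model("classifier.h5")

# Export it as a SavedModel, which stores the graph as a protocol buffer
# (export/1/saved_model.pb), ready for TensorFlow Serving
tf.saved_model.save(model, "export/1")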



In previous posts I introduced some simple deployments using TensorFlow. This post will focus more on TensorFlow Transform and Apache Beam. In the TensorFlow Transform GitHub repository, I posted some issues (and resolutions) related to the census_example. Because you can refer to those issues for more detail, I will not post all the code, just what is needed for explanatory purposes.

This example uses census data to predict whether a particular person is likely to earn 50k or more, or less than 50k. The input features are categorized into:

CATEGORICAL_FEATURE_KEYS,
NUMERIC_FEATURE_KEYS,
OPTIONAL_NUMERIC_FEATURE_KEYS.

We need to know this because we need to transform each group into the appropriate datatype using TensorFlow Transform. …
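For a flavor of what that transformation looks like, here is a minimal sketch of a preprocessing_fn in the style of the census example (the feature keys are illustrative stand-ins for the lists above, and the optional numeric keys are omitted for brevity):

import tensorflow_transform as tft

NUMERIC_FEATURE_KEYS = ["age", "hours-per-week"]        # illustrative keys
CATEGORICAL_FEATURE_KEYS = ["education", "occupation"]  # illustrative keys

def preprocessing_fn(inputs):
    outputs = {}
    # Scale numeric features into the [0, 1] range
    for key in NUMERIC_FEATURE_KEYS:
        outputs[key] = tft.scale_to_0_1(inputs[key])
    # Map categorical string features to integer vocabulary ids
    for key in CATEGORICAL_FEATURE_KEYS:
        outputs[key] = tft.compute_and_apply_vocabulary(inputs[key])
    return outputs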



This is the second post in my series on TensorFlow. In part one I detailed some steps to create a model. In this post I will go further, using some real data with multiple features, and perform some transformations on a dataset.

The dataset I used is the open-source Iris dataset. I use it to classify flower types (three flower categories) based on the input features. Without further explanation, let's jump into the code to see how it works.

First, we should import some Python packages:

from sklearn import datasets
from sklearn.model_selection …
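The teaser cuts off here; a typical continuation of these imports, loading Iris and splitting it into training and test sets, might look like this (a sketch, not the post's exact code):

from sklearn import datasets
from sklearn.model_selection import train_test_split

# Load the Iris dataset: 150 samples, 4 features, 3 flower classes
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)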

jagesh maharjan
