Kafka is an open-source distributed stream-processing platform through which we can publish and subscribe to streams of records, store those records, and process them on the fly. Kafka is maintained by the Apache Software Foundation and can be applied to a wide range of use cases depending on our needs. Some of the most popular use cases are explained and demonstrated diagrammatically in the later sections of this blog. …
Hey! In this blog, I am going to provide you with insights on which mobile app framework to choose for development. Many companies and individuals, especially startups, run into this question and are often confused about which option is best for them. I am going to explain the functionality, pros and cons, and use cases of each framework so that by the end of the blog you can easily choose the one that is best for you as an individual or a company.
In this blog, I am going to cover the basic aspects of setting up your company's architecture on Google Cloud. It is essential that the infrastructure you develop has high cohesion and low coupling; such an architecture helps you scale your target services and apps at an incredible speed without worrying about affecting your entire workflow. A well-defined, structured architecture also enables faster bug tracking and fixing and prevents single points of failure.
Let's first understand how the resource hierarchy needs to be set up on Google Cloud Platform. While designing your workflow you can…
In this blog, I am going to cover the various storage options provided by the Google Cloud Platform (GCP). Choosing an appropriate storage option is essential for ensuring that your services, apps, and data pipelines yield optimal results. The right storage option not only enhances their performance but also helps you keep the project cost-efficient. An organization's backend running costs can be kept in check by following some basic principles and acquiring adequate knowledge before deploying anything. Numerous times we rush into the creation of a service…
This blog is going to cover the basics of Elasticsearch and its Query DSL (Domain Specific Language). But before moving on to the details of this topic, we should first understand what Elasticsearch actually is.
Elasticsearch is a search engine that provides real-time, distributed full-text search and acts as an analytics engine. It is heavily used and relied upon by numerous top companies all over the world, such as Netflix, LinkedIn, Stack Overflow, Fujitsu, and so on. Even Medium itself uses Elasticsearch for its lightning-fast search capabilities. We all love these companies and their search performance, don't…
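For a first taste of the Query DSL covered later, here is a minimal full-text search sketch, sent as the body of a `GET /articles/_search` request. The `articles` index and `title` field are hypothetical examples, not from any real deployment:

```json
{
  "query": {
    "match": {
      "title": "lightning fast search"
    }
  }
}
```

A `match` query analyzes the search text and returns matching documents ranked by relevance score.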
Obtaining lightning-fast search results is a dream come true for any dynamic website. This blog aims at establishing production-grade search capabilities "without even building a web server!" That's right: I am going to walk you through simple steps to make your website capable of displaying the content of your MongoDB database on your frontend at lightning-fast speed by using Algolia.
Predicting future sales is one of the most important aspects of a company's strategic planning. I wanted to analyze how internal and external factors of one of the biggest companies in the US can affect its weekly sales in the future. This module contains a complete analysis of the data, including time series analysis; identifies the best-performing stores; and performs sales prediction with the help of multiple linear regression.
The data collected ranges from 2010 to 2012 and covers 45 Walmart stores across the country. It is important to note that we also have…
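To make the regression step concrete, here is a minimal sketch of multiple linear regression fit by ordinary least squares via the normal equations, in pure Python. The tiny synthetic dataset is illustrative only, not the Walmart data; a real project would use a library such as scikit-learn or statsmodels.

```python
# Minimal ordinary-least-squares fit via the normal equations.
# Toy data only -- NOT the Walmart dataset from the blog.

def fit_ols(X, y):
    """Solve (X^T X) beta = X^T y with Gaussian elimination."""
    n, p = len(X), len(X[0])
    # Build the normal-equation system: A = X^T X, b = X^T y.
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Forward elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

# Synthetic data generated exactly by y = 2 + 3*x1 - 1*x2;
# the leading column of 1s models the intercept.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [2, 5, 1, 4, 7]
beta = fit_ols(X, y)
print([round(v, 6) for v in beta])  # -> [2.0, 3.0, -1.0]
```

Because the toy data fits the model exactly, OLS recovers the true coefficients; on real sales data the fitted coefficients minimize squared error instead of reproducing it perfectly.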
In this R implementation of SVM, a heart disease classification dataset is used, i.e., classifying whether a patient is suffering from heart disease or not based on a set of attributes. All attributes contain numeric data.
Package used: caret
Total attributes in dataset: 14
Predictor attributes: 1–13
Target attribute: 14
The target attribute contains the binary outcome of the classification.
Result: 0 = absence of heart disease, 1 = presence of heart disease
Loading required package: lattice
Loading required package: ggplot2
package ‘caret’ was built under R version 3.4.4
> heart_tidy <- read.csv("C:/Users/admin/Desktop/heart_tidy.csv"…
SVM stands for Support Vector Machine, an algorithm used for the classification of linear and non-linear data. The separation between two classes is established using a plane called a "hyperplane". The main aim is to obtain the optimal hyperplane for separating the data points into classes. If the data points can be classified using a simple linear hyperplane, it is the case of linear SVM. However, if the data points can't be classified into classes using a linear classifier, a transformation function is first applied to the dataset to relocate the data points of one class to the other…
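To make the linear case above concrete, here is a minimal sketch of a soft-margin linear SVM trained by subgradient descent on the hinge loss. The toy 2-D points are illustrative only, not the heart-disease data, and the blog's actual implementation uses R's caret package rather than this hand-rolled loop:

```python
import random

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=300):
    """Primal soft-margin SVM: minimize lam/2*||w||^2 + mean hinge loss.
    Labels y must be +1 or -1."""
    random.seed(0)
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        for i in random.sample(range(n), n):   # shuffled pass over the data
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:
                # Point violates the margin: hinge subgradient pulls w toward y[i]*x.
                w = [wj - lr * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:
                # Only the regularizer acts: shrink w slightly.
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Classify by the sign of the decision function w.x + b."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Two linearly separable clusters: class +1 near (3, 3), class -1 near (0, 0).
X = [[3, 3], [3.5, 2.8], [2.8, 3.4], [0, 0], [0.3, -0.2], [-0.4, 0.1]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print(all(predict(w, b, xi) == yi for xi, yi in zip(X, y)))  # -> True
```

When no linear hyperplane separates the classes, this is where a kernel function comes in: the data is implicitly mapped to a higher-dimensional space where a linear separator may exist.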
Experienced Full Stack/ML Engineer and passionate Blogger. Highly skilled in ReactJS, NodeJS, ELK Stack, Kubernetes, Computer Vision, NLP, Statistical Analysis.