Life is a stream of events

In a world that produces and depends on data, there was a need for a platform to handle a continuous flow of it. Kafka is a streaming platform that lets you publish and subscribe to streams of data, store them, and process them.
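Kafka's core model can be sketched with a toy in-memory log. This is pure Python for illustration only, not the real Kafka client; the class and method names (`MiniLog`, `publish`, `subscribe`) are made up, but the idea is faithful: a topic is an append-only log that is stored, and each consumer group reads it by offset.

```python
from collections import defaultdict

class MiniLog:
    """A toy, in-memory model of Kafka's core idea: a topic is an
    append-only log, and consumer groups read it by offset."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic -> list of records
        self.offsets = defaultdict(int)   # (group, topic) -> next offset to read

    def publish(self, topic, record):
        self.topics[topic].append(record)  # records are stored, not deleted

    def subscribe(self, group, topic):
        """Yield the records this consumer group has not seen yet."""
        log = self.topics[topic]
        start = self.offsets[(group, topic)]
        for record in log[start:]:
            yield record
        self.offsets[(group, topic)] = len(log)  # commit the new offset

broker = MiniLog()
broker.publish("clicks", {"user": "a", "page": "/home"})
broker.publish("clicks", {"user": "b", "page": "/docs"})

# Two independent consumer groups each see the full stream...
print(list(broker.subscribe("analytics", "clicks")))  # both records
print(list(broker.subscribe("audit", "clicks")))      # both records, again
# ...and a second poll by the same group returns only new records.
print(list(broker.subscribe("analytics", "clicks")))  # []
```

Because the log is stored rather than consumed destructively, any number of subscribers can replay the same stream independently.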

Kafka has a number of core differences from traditional messaging systems that make it unique. It runs as a cluster and can scale to handle all the applications in even the most massive of companies.

Every enterprise is powered by data. Every application creates data. Every byte of data…

In this story, we are going to see how Kubernetes helps us deploy our microservices and takes care of them.

  1. In the Google Cloud Shell, first let's set the default zone
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud config set compute/zone us-central1-b

2. Create a Kubernetes cluster

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud container clusters create binarymonster

3. Get the sample code

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ git clone

4. Our first deployment

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create deployment nginx --image=nginx:1.10.0

Kubernetes has created a deployment with a single instance of the nginx container.

In Kubernetes, all containers run in a pod. …
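The imperative `kubectl create deployment` command above can also be expressed declaratively. This is a minimal manifest sketch for the same deployment (the `app: nginx` label is an arbitrary choice, and the file name in the usage note is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1                  # a single instance, as above
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx             # the pod label the selector matches
    spec:
      containers:
      - name: nginx
        image: nginx:1.10.0    # same image as the command above
        ports:
        - containerPort: 80
```

You would apply it with `kubectl apply -f nginx-deployment.yaml`; keeping manifests in version control makes deployments reproducible.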

Create an instance template, an L3 network load balancer, and an application (HTTP) load balancer in front of your web servers

Before we begin, we need to set up the default zone and region in our Cloud Shell:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud config set compute/zone us-central1-a
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud config set compute/region us-central1

Creating multiple web server instances using an instance template.

Instance Templates define the configuration of every virtual machine in the group (disk, CPUs, memory, etc). Managed Instance Groups instantiate a number of virtual machine instances using the Instance Template.

To create an instance template, we need a script

  1. In Cloud Shell, create a startup script to be used by every virtual machine instance

Please add…

Creating a VM and a Kubernetes cluster, deploying an application, and exposing the deployment.

Creating VM:

In your Google Cloud Console, click on Google Cloud Shell and list the available zones:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud compute zones list | grep us-central1

us-central1-c us-central1 UP
us-central1-a us-central1 UP
us-central1-f us-central1 UP
us-central1-b us-central1 UP

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud config set compute/zone us-central1-c
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud compute instances create "binarymonster-vm-1" --machine-type "n1-standard-1" --image-project "debian-cloud" --image "debian-9-stretch-v20170918" --subnet "default"

Created [].

binarymonster-vm-1 us-central1-c n1-standard-1 RUNNING

The WHY?

Most applications run on servers. And in the past, we could only run one application per server. The open-systems world of Windows and Linux just didn’t have the technologies to safely and securely run multiple applications on the same server.

So every time the business needed a new application, IT would go and buy the most powerful server they could get, because most of the time nobody knew the requirements of the new application.


VMware changed the game when it introduced virtual machines; suddenly you could securely run multiple applications on a single server.



DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.

  • DynamoDB data is stored on SSD storage.
  • It is spread across 3 geographically distinct data centers.
  • Eventually consistent reads (the default).
  • Strongly consistent reads.
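The difference between the two read modes can be illustrated with a toy replication model. This is NOT how DynamoDB is implemented internally; the class and method names are invented, and the sketch only shows the trade-off: eventual reads are cheaper but may be stale, strong reads always reflect the latest write.

```python
import random

class ToyReplicatedTable:
    """Toy model of eventual vs. strong consistency (illustrative only)."""
    def __init__(self, n_replicas=3):
        # Stand-in for the 3 geographically distinct data centers.
        self.replicas = [{} for _ in range(n_replicas)]

    def put(self, key, value):
        # A write is acknowledged once it reaches the first replica...
        self.replicas[0][key] = value

    def replicate(self):
        # ...and is copied to the other replicas shortly afterwards.
        for replica in self.replicas[1:]:
            replica.update(self.replicas[0])

    def eventually_consistent_read(self, key):
        # Cheaper and faster: any replica may answer, possibly with stale data.
        return random.choice(self.replicas).get(key)

    def strongly_consistent_read(self, key):
        # Always reads the replica that acknowledged the write.
        return self.replicas[0].get(key)

table = ToyReplicatedTable()
table.put("user#1", "Alice")
# Before replication catches up, an eventual read may return None (stale),
# but a strongly consistent read always reflects the latest write.
print(table.strongly_consistent_read("user#1"))   # Alice
table.replicate()
print(table.eventually_consistent_read("user#1"))  # Alice
```

This is why strongly consistent reads cost more in DynamoDB: the service must do extra work to guarantee the answer is up to date.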

RedShift is a fast, powerful, fully managed data warehouse service.

RedShift can be configured as follows:

  • Single Node (160 GB)
  • Multi-Node: which has a Leader Node (manages client connections and receives queries) and Compute Nodes (store the data and execute the queries).

RedShift can offer multiple types of compression and offers MPP (massively parallel processing), which means it automatically distributes…
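The idea behind MPP can be sketched in a few lines. This is not RedShift's actual distribution logic; the `distribute` function is invented for illustration: rows are spread across compute nodes by hashing a distribution key, so each node scans only its own slice in parallel.

```python
def distribute(rows, key, n_nodes=4):
    """Toy sketch of MPP-style data distribution: each row is assigned
    to exactly one compute node by hashing its distribution key."""
    nodes = [[] for _ in range(n_nodes)]
    for row in rows:
        nodes[hash(row[key]) % n_nodes].append(row)
    return nodes

rows = [{"order_id": i, "amount": i * 10} for i in range(8)]
nodes = distribute(rows, "order_id")

# Every row lands on exactly one node; a query fans out to all nodes at once,
# and each node works on its slice in parallel.
assert sum(len(n) for n in nodes) == len(rows)
```

Choosing a good distribution key matters: a skewed key would pile most rows onto one node and lose the parallelism.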

There are two types of Backups:

  • Automated backups: allow you to recover your database to any point in time within a “retention period”. During an automated backup, storage I/O is impacted and you may experience elevated latency. Automated backups are enabled by default.
  • Database snapshots: these are done manually and

Whenever you restore either an automatic backup or manual snapshot, the stored version of the database will be a new RDS instance with a new DNS endpoint.

Encryption at rest is done using AWS KMS (Key Management Service).

Multi-AZ: Allows you to have an exact copy of your…

Relational databases are what most of us are used to; you can think of a traditional spreadsheet.

Examples of Relational databases on AWS:

  • SQL Server
  • MySQL Server

RDS, or Relational Database Service, has two key features:

  • Multi-AZ for disaster recovery.
  • Read Replicas for performance.
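The split between the two features can be sketched with a toy query router. The class, endpoint names, and routing rule are all invented for illustration (this is not the RDS API): the Multi-AZ standby exists only for failover and never serves traffic, while read replicas actively take read load off the primary.

```python
import itertools

class ToyRdsEndpoints:
    """Toy sketch: Multi-AZ standby = disaster recovery only;
    read replicas = performance (they serve SELECTs)."""
    def __init__(self, primary, standby, read_replicas):
        self.primary = primary
        self.standby = standby                      # never queried, only promoted
        self._replica_cycle = itertools.cycle(read_replicas)

    def endpoint_for(self, query):
        if query.strip().upper().startswith("SELECT"):
            return next(self._replica_cycle)        # spread reads across replicas
        return self.primary                         # all writes go to the primary

    def failover(self):
        # Disaster recovery: the standby is promoted to primary.
        self.primary, self.standby = self.standby, self.primary

router = ToyRdsEndpoints("primary-db", "standby-az2", ["replica-1", "replica-2"])
print(router.endpoint_for("SELECT * FROM users"))    # replica-1
print(router.endpoint_for("INSERT INTO users ..."))  # primary-db
```

In real RDS the failover is automatic and DNS-based, but the division of labor is the same: standbys protect availability, replicas improve read throughput.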

Non-relational databases are a group of collections, where a collection = a table, a document = a row, and key-value pairs = fields.

You can have as many fields as you want per document, but in a relational database you need to keep the columns consistent between the rows.
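The flexibility is easy to see with two documents in the same collection, modeled here as Python dicts (the field names and values are made-up example data):

```python
# Two documents in the same "users" collection. A relational table would
# force both rows to share one fixed set of columns; here each document
# carries only the fields (key-value pairs) it actually needs.
users_collection = [
    {"_id": 1, "name": "Basma", "email": "basma@example.com"},
    {"_id": 2, "name": "Omar", "phones": ["+1-555-0100", "+1-555-0101"],
     "address": {"city": "Cairo"}},   # extra and nested fields are fine
]

for doc in users_collection:
    print(sorted(doc.keys()))
```

Note that the second document even nests a sub-document (`address`), something a flat relational row cannot express without a join to another table.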


  • The AWS Storage Gateway service enables hybrid cloud storage between on-premises environments and the AWS Cloud.
  • AWS Storage Gateway’s software appliance is available as a virtual machine (VM) or as a physical hardware appliance.
  • AWS Storage Gateway supports three storage interfaces: file, tape, and volume. Each gateway you have can provide one type of interface.
  • The file gateway enables you to store and retrieve objects in Amazon S3 using file protocols, such as NFS. Objects written through file gateway can be directly accessed in S3.
  • The volume interface presents your applications with disk volumes using the iSCSI block protocol, data…

AWS — Snowball

  • Snowball is a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud.
  • Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns.
  • Simply create a job in the AWS Management Console (“Console”) under “Migration & Transfer”, and a Snowball device will be automatically shipped to you. Once it arrives, attach the device to your local network, download and run the Snowball Client (“Client”) to establish a connection, and then use the Client to select the file directories that you want to transfer to the device. The Client will then encrypt and transfer the files to the device at high speed.

Basma A

wants to learn everything!
