
In this tutorial we will use the AWS CloudWatch datasource in Grafana to create dashboards from our CloudWatch metrics.

What can we expect in this post

If you follow along, by the end of this tutorial you will be able to:

  • Create an IAM user so that Grafana is allowed to read CloudWatch metric data
  • Create dashboards of AWS resources such as EC2, RDS and Lambda
  • Create template variables so that you can filter on EC2 InstanceId, RDS Cluster Name, etc.

Create IAM User

We will create an AWS IAM user, then associate the AWS managed policies that allow read-only access to EC2 and CloudWatch.

The read-only CloudWatch policy will grant Grafana access to read metrics from CloudWatch. …
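
If you would rather script this part than click through the console, a rough boto3 sketch of the same idea could look like the following. This is a sketch rather than the exact steps from the post, and the user name is just an example:

# Hypothetical sketch: create a read-only IAM user for Grafana with boto3.
import boto3

iam = boto3.client("iam")

# "grafana-cloudwatch" is an assumed user name for illustration.
iam.create_user(UserName="grafana-cloudwatch")

# Attach the AWS managed read-only policies for CloudWatch and EC2.
for policy_arn in [
    "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
]:
    iam.attach_user_policy(UserName="grafana-cloudwatch", PolicyArn=policy_arn)

# Create an access key pair to configure in Grafana's CloudWatch datasource.
response = iam.create_access_key(UserName="grafana-cloudwatch")
print(response["AccessKey"]["AccessKeyId"])
print(response["AccessKey"]["SecretAccessKey"])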



Follow me on Twitter: @ruanbekker

This tutorial is split into two posts: here I will show you how to provision an EKS (Elastic Kubernetes Service) cluster on AWS, and in the next post, how to deploy a web application to your cluster (Part 2 — Deploy a Web App to EKS).

And then came EKS

As some of you may know, I'm a massive AWS fanboy, so when AWS released their managed Kubernetes service, I was quite excited to test it out. …


Banner created with canva.com

So your application needs to store secrets and you are looking for a home for them. You have AWS SSM, but you got tired of its rate limits (I did). This guide will show you how easy it is to use S3, KMS and Python to achieve that goal.

High Level Goal:

At a high level, we want to store secrets on S3, encrypted with KMS and namespaced as team/application/environment/value, in JSON format so that our application receives a JSON dictionary of the configured key/value pairs.
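
As a rough boto3 sketch of that flow (the bucket name, key namespace and KMS alias below are made-up examples, not the values from this post):

# Sketch of the idea: store a JSON secret on S3, encrypted with a KMS key.
import json
import boto3

s3 = boto3.client("s3")

secrets = {"db_username": "app", "db_password": "secret"}

s3.put_object(
    Bucket="my-secret-store",                   # hypothetical bucket
    Key="teamA/myapp/production/config.json",   # team/application/environment namespace
    Body=json.dumps(secrets),
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-secrets",         # hypothetical KMS key alias
)

# The application then reads it back and receives the JSON dictionary:
obj = s3.get_object(Bucket="my-secret-store", Key="teamA/myapp/production/config.json")
config = json.loads(obj["Body"].read())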

We can leverage IAM to delegate permissions based on the namespacing that we decide on; for my example, the namespace will look like this on…


I love blogging about tech stuff, that’s a fact.

I recently started digging into Kubernetes and thought of something that I would like to deploy on Kubernetes.

The answer was clear: I wanted to deploy multiple apps, as easily as possible. And as the load balancer, definitely Traefik.

It just works: registering apps with Traefik is so convenient and easy, and I personally think they could not have made it any easier to use. I've been using it for my personal hosting for over 2 years with no issues.

After I played around with it, I decided to write up a blog post and this…


Set up a 3 Node Drone CI/CD Environment on Docker for your Continuous Integration / Continuous Delivery Pipelines.

Photo by Diana Măceşanu on Unsplash

What is Drone?

Drone is a self-service continuous delivery platform which can be used for CI/CD pipelines and other DevOps'y stuff, which is really awesome.

With Configuration as Code, Pipelines are configured with a simple, easy‑to‑read file that you commit to your git repository such as GitHub, Gitlab, Gogs, Gitea, Bitbucket, etc.

Each Pipeline step is executed inside an isolated Docker container that is automatically downloaded at runtime, if not found in cache.

Show me pipelines!

A pipeline can look as easy as:

kind: pipeline
name: default   # pipelines require a name in Drone 1.x

steps:
- name: test
  image: node
  commands:
  - npm install
  - npm test

services:
- name: database
  image: mysql
  ports:
  - 3306   # MySQL's default port


In this tutorial we will set up a basic Kibana dashboard for a web server that is running a blog on Nginx.

What do we want to achieve?

We will set up common visualizations to give us an idea of how our blog/website is doing.

In some situations we need to create visualizations to understand the behaviour of our log data in order to answer these types of questions (a sample query follows the list):

  • Geographical map to see where people are connecting from
  • Pie chart representing the percentage of cities accessing my blog
  • Top 10 Most Accessed Pages
  • Top 5 HTTP Status Codes
  • Top 10 Pages that returned 404 Responses
  • The Top 10 User…
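
Under the hood, most of these visualizations map to Elasticsearch aggregations. As a hedged example (the index pattern and field name depend on how your Nginx logs are mapped), the Top 10 Most Accessed Pages boils down to a terms aggregation like this:

# Hypothetical query behind a "Top 10 Pages" visualization.
import json
import requests

query = {
    "size": 0,
    "aggs": {
        "top_pages": {
            "terms": {"field": "request.keyword", "size": 10}   # assumed field name
        }
    }
}

r = requests.get(
    "http://localhost:9200/nginx-*/_search",   # assumed index pattern
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
)

# Print each page and its hit count.
for bucket in r.json()["aggregations"]["top_pages"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])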



This is post 1 of my big collection of elasticsearch-tutorials, which includes setup, indexing, management, searching, etc. More details at the bottom.

In this tutorial we will set up a 5-node highly available Elasticsearch cluster that will consist of 3 Elasticsearch master nodes and 2 Elasticsearch data nodes.

“Three master nodes is the way to start, but only if you’re building a full cluster, which at minimum is 3 master nodes plus at least 2 data nodes.” - https://discuss.elastic.co/t/should-dedicated-master-nodes-and-data-nodes-be-considered-separately/75093/14

What is Elasticsearch?

First things first: Elasticsearch is an open-source, distributed, scalable, full-text search and analytics engine based on Lucene and accessible via REST API. …
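
Once a cluster like this is up, a quick sanity check is to query the cluster health API. A minimal sketch, assuming the default port and no authentication:

# With 3 master nodes and 2 data nodes we expect 5 nodes in total.
import requests

health = requests.get("http://localhost:9200/_cluster/health").json()
print(health["status"])            # expect "green"
print(health["number_of_nodes"])   # expect 5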



If you are not familiar with OpenFaaS, it's definitely time to have a look at it; plus, they are doing some pretty awesome work!

From OpenFaaS's documentation:

OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.

Make sure to give them a visit at openfaas.com and, while you are there in the world of serverless, have a look at how Alex outlines the architecture and patterns he applies in a real-world example; an absolutely great read!


GitLab Logo from about.gitlab.com

Today we will build a RESTful API using Python Flask and SQLAlchemy, with Postgres as our database, test it using Python's unittest, run a CI/CD pipeline on Gitlab, and deploy to Heroku.
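
To give you an idea of the shape of such an app, here is a minimal, hypothetical Flask and SQLAlchemy sketch. The model and route are illustrative, not the exact code from this post; DATABASE_URL is the config var that Heroku's Postgres add-on sets:

# Minimal Flask + SQLAlchemy API sketch (illustrative model and route).
import os
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
db = SQLAlchemy(app)

class Note(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(80))

@app.route("/notes")
def list_notes():
    # Return all notes as a JSON list.
    return jsonify([{"id": n.id, "title": n.title} for n in Note.query.all()])

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run()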

In our previous post, we demonstrated setting up a Custom Gitlab Runner on Your Own Server for Gitlab CI.

Heroku

If you don't have an account already, Heroku offers 5 free applications in their free tier. Once you have created your account, create 2 applications. I named mine flask-api-staging and flask-api-prod.

You can create the applications via the CLI or the UI; from the UI it will look more or less like…



Today we will set up a Serverless URL Shortener using API Gateway, Lambda, and DynamoDB on AWS with Python.

Overview

The service that we will be creating will shorten URLs via our API, which will create an entry in DynamoDB. When a GET is performed on the shortened URL, a GetItem is executed against DynamoDB to fetch the long URL, and a 301 redirect sends the client to the intended destination URL.
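
To make that flow concrete, here is a hedged sketch of the redirect side as a Lambda handler. The table name, key schema and event shape are assumptions for illustration, not the exact code from this post:

# Hypothetical Lambda handler for the GET/redirect side of the shortener.
import boto3

dynamodb = boto3.client("dynamodb")

def redirect_handler(event, context):
    # The short id comes from the path, e.g. GET /t/{short_id} (assumed route).
    short_id = event["pathParameters"]["short_id"]

    # Look up the long URL for this short id ("url-shortener" table is an assumption).
    item = dynamodb.get_item(
        TableName="url-shortener",
        Key={"short_id": {"S": short_id}},
    )
    long_url = item["Item"]["long_url"]["S"]

    # 301 redirect the client to the intended destination URL.
    return {"statusCode": 301, "headers": {"Location": long_url}}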

Note: I am using a domain name which is quite long, but this is only for demonstration; ideally you would use a short domain like t.co if you can get hold of one.

About

Ruan Bekker

DevOps Engineer and Open Source Enthusiast
