Photo by Andre A. Xavier

Having been an active Kubernetes user for the last 3 years, I recently learned an old concept in a new way. It's very likely that, as a k8s user, you have never paid attention to, or never known, what an Endpoint object is; however, under the covers, you have been using it all along, guaranteed :)

One-liner explanations of two key concepts of Kubernetes

What is a Service in k8s
A Service is a k8s object that exposes an application running in one or more pods as a “network service”

What is an Endpoint in k8s
An Endpoint is a k8s object that lists all the addresses (IP addresses and Ports) that are used by a…
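
The teaser cuts off here, but the relationship between the two objects is easy to see on any cluster. The snippet below is only a minimal illustration, not taken from the original post, and the names (web-svc, app=web) are hypothetical: a Service selects pods by label, and Kubernetes automatically maintains a matching Endpoints object listing the IP:port pairs of the ready pods behind it.

# Hypothetical Service selecting pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Once the Service exists, running kubectl get endpoints web-svc shows the Endpoints object that Kubernetes created for it under the covers.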


Learn how to move Kafka messages to Ceph S3 Object Storage using Secor


If your business is your Solar System, then your Data is the SUN, it has both gravity & mass, everything revolves around it, it must live forever — Myself

Introduction

Kafka is one of the most popular messaging systems out there, used for real-time data streams, for collecting big data, for real-time analysis, or all of the above. Kafka is used to stream data into data lakes, applications, and real-time stream analytics systems.



The Red Hat OpenShift installer by default uses self-signed certificates to encrypt communication with the web console as well as with applications exposed via OpenShift Routes. Self-signed certs generally suffice for dev/test environments; however, for production environments, it's highly recommended to use proper certificates to secure all your OpenShift routes.

In this post, you will learn how to request TLS certificates from Let’s Encrypt and apply those to your OpenShift 4 cluster as a post-installation step.

Prerequisites

  • An up-and-running OpenShift 4 cluster
  • A registered domain name with access to DNS management (see supported DNS providers here)

Get .. Set .. Go !!

Part 1: Certificate Generation

  • Clone the acmesh-official repository
cd $HOME
git clone https://github.com/acmesh-official/acme.sh.git
cd acme.sh …
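
The teaser ends mid-command. As a rough sketch of what typically comes next (not verbatim from the post), acme.sh can issue a wildcard certificate for the cluster's apps domain via DNS-01 validation. The dns_aws plugin, credential variables, and domain names below are assumptions for a Route 53 setup; other supported DNS providers use different plugins and variables.

# Assumed: AWS Route 53 is the DNS provider; domain names are placeholders
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
./acme.sh --issue --dns dns_aws \
  -d api.cluster.example.com \
  -d '*.apps.cluster.example.com'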

Photo by Mildly Useful

Application Introduction

OpenShift Container Platform (OCP) cluster administrators can deploy cluster logging using a few CLI commands and the OCP web console to install the Elasticsearch Operator and Cluster Logging Operator. The cluster logging components are based upon Elasticsearch, Fluentd, and Kibana (EFK). The collector, Fluentd, is deployed to each node in the OCP cluster. It collects all node and container logs and writes them to Elasticsearch (ES). Kibana is a centralized web UI where users and administrators can create rich visualizations and dashboards with the aggregated data.

Elasticsearch is distributed by nature. An Elasticsearch index is a collection of documents that are distributed across different containers known as shards. The shards are duplicated across a set of nodes to provide redundant copies (called replicas) of the data in case of hardware/infrastructure failure. In this characterization brief, we will focus on the logStore cluster logging component of the EFK stack, which is where the logs are stored; the current implementation is Elasticsearch. …
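
As a small aside (not part of the original brief), the shard and replica counts described above are ordinary index settings in Elasticsearch; the index name and numbers below are purely illustrative.

# Create an index with 3 primary shards and 1 replica of each shard
curl -X PUT "http://localhost:9200/app-logs" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'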



At times, your use case requires long-term persistence for your Apache Kafka data. This could be either to ingest Kafka messages into your S3 data lake or simply to store messages for long-term audit and compliance purposes.

In this blog post, we will learn how to move Apache Kafka (Strimzi) messages to AWS S3 using the Apache Camel connector.

Prerequisites

  • A running instance of OpenShift Container Platform
  • A running Strimzi Cluster Operator
  • A running instance of Red Hat AMQ Streams (Apache Kafka) deployed via the Strimzi operator

Step 1: Set S3 credentials as k8s secrets

  • Create a file aws-credentials.properties (a rough sketch of this step follows below)
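
The post truncates at this point. As a rough sketch of how this step usually proceeds (only the file name comes from the post; the property keys, secret name, and namespace are assumptions), the file holds the AWS keys and is then wrapped into a Kubernetes Secret:

# aws-credentials.properties (values are placeholders)
aws_access_key_id=<your-access-key-id>
aws_secret_access_key=<your-secret-access-key>

# Turn the file into a secret in the namespace running Kafka Connect (assumed: kafka-demo)
oc create secret generic aws-credentials \
  --from-file=aws-credentials.properties -n kafka-demo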


In this blog post, we will learn how to deploy Kafka on OpenShift and make it accessible externally outside of the OpenShift cluster.

Step 1: Deploy Strimzi Operator

oc new-project kafka-demo
oc apply -f 'https://strimzi.io/install/latest?namespace=kafka-demo' -n kafka-demo
oc get all -n kafka-demo

More details here

Step 2: Deploy Kafka Cluster

  • Before applying the manifest file, make sure you have a default storage class in your OpenShift environment. If you do not, you can remove the storage section to deploy Kafka on ephemeral storage (not recommended for prod).
  • Please note that spec > kafka > listeners > external > type: route is important to access Kafka brokers from outside OpenShift; a rough sketch of such a manifest follows below. …
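
The manifest itself is truncated in this teaser. Purely as a sketch of what such a manifest can look like (the cluster name, Strimzi API version, replica counts, and storage sizes are assumptions, not the post's values), a Kafka resource with an external route listener follows the spec > kafka > listeners > external > type: route layout mentioned above:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster            # illustrative name
  namespace: kafka-demo
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: route           # exposes brokers via OpenShift Routes
    storage:
      type: persistent-claim  # requires a default storage class
      size: 10Gi
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}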

The fastest way to launch a Kubernetes cluster locally !!!

Kubernetes in Docker

kind is a tool for running local Kubernetes clusters using Docker containers. It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

Pre-requisites

  • Docker installed on your local machine
  • Kind binaries (follow the installation instructions)

Deploying k8s Cluster

Let’s deploy a k8s cluster with 1 x controller node and 3 x worker nodes.

  • First, create a cluster config file kind_cluster_config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - hostPort: 30080
    containerPort: 30080
- role: worker
- role: worker
- role: worker
  • Let’s create a k8s cluster named…
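
The teaser cuts off before giving the cluster name. As a sketch, the cluster is created from the config file above along these lines (my-cluster is just a placeholder name):

# Create the cluster from the config above; the name is a placeholder
kind create cluster --config kind_cluster_config.yaml --name my-cluster

# Verify the 1 control-plane + 3 worker nodes
kubectl get nodes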

Photo by Ke Vin on Unsplash

We have tested a variety of configurations, object sizes, and client worker counts in order to maximize the throughput of a seven-node Ceph cluster for small and large object workloads. As detailed in the first post, the Ceph cluster was built using a single OSD (Object Storage Device) configured per HDD, for a total of 112 OSDs per Ceph cluster. In this post, we will look at the top-line performance for different object sizes and workloads.

The terms “read” and HTTP GET are used interchangeably throughout this post, as are the terms HTTP PUT and “write.”

Large-Object workload

Large-object sequential input/output (I/O) workloads are one of the most common use cases for Ceph object storage. These high-throughput workloads include big data analytics, backup and archival systems, image storage, and streaming audio and video. For these types of workloads, throughput (MB/s or GB/s) is the key metric that defines storage performance. …


Photo by 贝莉儿 DANIST on Unsplash

Organizations are increasingly being tasked with managing billions of files and tens to hundreds of petabytes of data. Object storage is well suited to these challenges, both in the public cloud and on-premise. Organizations need to understand how to best configure and deploy software, hardware, and network components to serve a diverse range of data-intensive workloads.

This blog series details how to build robust object storage infrastructure using a combination of Red Hat Ceph Storage coupled with Dell EMC storage servers and networking. Both large-object and small-object synthetic workloads were applied to the test system and the results subjected to performance analysis. …


Photo by JESHOOTS.COM on Unsplash

Starting in Red Hat Ceph Storage 3.0, Red Hat added support for Containerized Storage Daemons (CSD), which allows the software-defined storage components (Ceph MON, OSD, MGR, RGW, etc.) to run within containers. CSD avoids the need for dedicated nodes for storage services, thus reducing both CAPEX and OPEX by co-locating containerized storage daemons.

Ceph-Ansible provides the mechanism to apply resource fencing to each storage container, which is useful for running multiple storage daemon containers on one physical node. In this blog post, we will cover strategies to deploy RGW containers and their resource sizing guidance; a rough sketch of what such settings look like follows below. …
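
The teaser stops before the actual sizing guidance. As a rough sketch only (the variable names follow ceph-ansible's documented RGW container settings, but the values are illustrative and not the post's recommendations), the resource fencing is typically expressed in group_vars:

# group_vars (illustrative values, not the post's sizing guidance)
radosgw_num_instances: 2            # RGW containers per physical node
ceph_rgw_docker_cpu_limit: 4        # CPU cores per RGW container
ceph_rgw_docker_memory_limit: 8g    # memory per RGW container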

About

Karan Singh

Sr. Solution Architect @ Red Hat ♦ Loves Kubernetes, Storage, Serverless, Hybrid-Multi-Cloud, Software Architectures, DevOps, Data Analytics & AI/ML
