Having been an active Kubernetes user for the last 3 years, I recently learned an old concept in a new way. It's very likely that, as a k8s user, you have never paid attention to (or never knew about) the Endpoint object; under the covers, however, you have been using it all along, full guarantee :)
One-liner explanations of two key Kubernetes concepts
What is a Service in k8s
A Service is a k8s object that exposes an application running in one or many pods as a “network service”.
What is an Endpoint in k8s
An Endpoint is a k8s object that lists all the addresses (IP addresses and ports) that are used by a…
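As a quick, hedged illustration (not from the original post; the Deployment name web and the nginx image are placeholders), exposing a Deployment creates a matching Endpoints object behind the scenes:

kubectl create deployment web --image=nginx --replicas=2
kubectl expose deployment web --port=80
# the Service created an Endpoints object for us: one IP:port per ready pod
kubectl get endpoints web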
Learn how to move Kafka messages to Ceph S3 Object Storage using Secor
If your business is your Solar System, then your Data is the SUN: it has both gravity and mass, everything revolves around it, and it must live forever — Myself
Kafka is one of the most popular messaging systems out there, used for real-time data streams, for collecting big data, for real-time analysis, or all of the above. Kafka is used to stream data into data lakes, applications, and real-time stream-analytics systems.
The Red Hat OpenShift installer by default uses self-signed certificates to encrypt communication with the web console as well as with applications exposed via OpenShift Routes. Self-signed certs generally suffice for dev/test environments; for production environments, however, it's highly recommended to use proper certificates to secure all your OpenShift routes.
In this post, you will learn how to request TLS certificates from Let’s Encrypt and apply those to your OpenShift 4 cluster as a post-installation step.
Clone the acmesh-official repository:

cd $HOME
git clone https://github.com/acmesh-official/acme.sh.git
cd acme.sh …
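Once the client is cloned, the rough flow of the post is: issue a certificate, then hand it to the default OpenShift router. A minimal sketch of those post-installation steps, assuming a wildcard cert for the apps domain (the domain, the Cloudflare DNS plugin dns_cf, the secret name, and the file paths are illustrative, not from the post):

# issue a wildcard certificate via a DNS-01 challenge
acme.sh --issue --dns dns_cf -d '*.apps.mycluster.example.com'
# load the resulting cert/key pair as a TLS secret for the router
oc create secret tls router-certs --cert=fullchain.cer --key=private.key -n openshift-ingress
# point the default IngressController at the new certificate
oc patch ingresscontroller default -n openshift-ingress-operator --type=merge -p '{"spec":{"defaultCertificate":{"name":"router-certs"}}}'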
OpenShift Container Platform (OCP) cluster administrators can deploy cluster logging with a few CLI commands and the OCP web console, installing the Elasticsearch Operator and Cluster Logging Operator. The cluster logging components are based on Elasticsearch, Fluentd, and Kibana (EFK). The collector, Fluentd, is deployed to each node in the OCP cluster; it collects all node and container logs and writes them to Elasticsearch (ES). Kibana is a centralized web UI where users and administrators can create rich visualizations and dashboards from the aggregated data.
Elasticsearch is distributed by nature: an Elasticsearch index is a collection of documents stored across different containers known as shards. The shards are duplicated across a set of nodes to provide redundant copies (called replicas) of the data in case of hardware/infrastructure failure. In this characterization brief, we will focus on the logStore cluster logging component of the EFK stack, which is where the logs are stored; its current implementation is Elasticsearch. …
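For context, the logStore component is configured through the ClusterLogging custom resource. A minimal sketch, assuming a 3-node Elasticsearch deployment (node count, storage class, and sizes are illustrative, not from the brief):

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch            # the component characterized in this brief
    elasticsearch:
      nodeCount: 3                 # illustrative; size for your log volume
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: gp2      # assumption: any block storage class works
        size: 200G
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd                # runs on every node as a DaemonSet
      fluentd: {}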
At times, your use case requires long-term persistence for your Apache Kafka data, whether to ingest Kafka messages into your S3 data lake or simply to store messages long term for audit and compliance.
In this blog post, we will learn how to move Apache Kafka (Strimzi) messages to AWS S3 using the Apache Camel connector.
Prerequisite
Step: 1 Set S3 credentials as k8s secrets
aws-credentials.properties
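The file contents are elided in this preview; as an assumption, it would hold the S3 access and secret keys, which you then load into the cluster as a k8s Secret for Kafka Connect to consume (the key names and values below are placeholders):

# aws-credentials.properties (placeholder values)
aws_access_key_id=AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxx

oc create secret generic aws-credentials --from-file=aws-credentials.properties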
…

In this blog post, we will learn how to deploy Kafka on OpenShift and make it accessible from outside the OpenShift cluster.
Step: 1 Deploy Strimzi Operator
oc new-project kafka-demo
oc apply -f 'https://strimzi.io/install/latest?namespace=kafka-demo' -n kafka-demo
oc get all -n kafka-demo
More details here
Step: 2 Deploy Kafka Cluster
Setting spec > kafka > listeners > external > type: route is important to access Kafka brokers from outside OpenShift.
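A minimal Kafka custom resource sketch showing that listener (the cluster name, replica counts, and ephemeral storage are illustrative placeholders):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: route        # exposes each broker via an OpenShift Route
    storage:
      type: ephemeral      # placeholder; use persistent-claim for real use
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}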
…

The fastest way to launch a Kubernetes cluster locally!!!
kind is a tool for running local Kubernetes clusters using Docker containers. It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
Let’s deploy a k8s cluster with 1 control-plane node and 3 worker nodes.
kind_cluster_config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
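  # expose NodePort 30080 on the host so cluster apps are reachable at localhost:30080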
  extraPortMappings:
  - hostPort: 30080
    containerPort: 30080
- role: worker
- role: worker
- role: worker
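With the config saved, bringing the cluster up is a single command (the cluster name demo is an illustrative choice):

kind create cluster --name demo --config kind_cluster_config.yaml
kubectl get nodes   # should report 1 control-plane node and 3 workers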
We have tested a variety of configurations, object sizes, and client worker counts in order to maximize the throughput of a seven-node Ceph cluster for small- and large-object workloads. As detailed in the first post, the Ceph cluster was built with a single OSD (Object Storage Device) configured per HDD, for a total of 112 OSDs in the cluster. In this post, we will examine the top-line performance for different object sizes and workloads.
The terms “read” and HTTP GET are used interchangeably throughout this post, as are the terms HTTP PUT and “write.”
Large-object sequential input/output (I/O) workloads are one of the most common use cases for Ceph object storage. These high-throughput workloads include big-data analytics, backup and archival systems, image storage, and streaming audio and video. For these types of workloads, throughput (MB/s or GB/s) is the key metric that defines storage performance. …
Organizations are increasingly being tasked with managing billions of files and tens to hundreds of petabytes of data. Object storage is well suited to these challenges, both in the public cloud and on-premises. Organizations need to understand how to best configure and deploy software, hardware, and network components to serve a diverse range of data-intensive workloads.
This blog series details how to build robust object storage infrastructure using a combination of Red Hat Ceph Storage coupled with Dell EMC storage servers and networking. Both large-object and small-object synthetic workloads were applied to the test system and the results subjected to performance analysis. …
Starting with Red Hat Ceph Storage 3.0, Red Hat added support for Containerized Storage Daemons (CSD), which allows the software-defined storage components (Ceph MON, OSD, MGR, RGW, etc.) to run within containers. CSD avoids the need for dedicated storage-service nodes, reducing both CAPEX and OPEX by co-locating containerized storage daemons.
Ceph-Ansible provides the mechanism to apply resource fencing to each storage container, which is useful when running multiple storage daemon containers on one physical node. In this blog post, we will cover strategies for deploying RGW containers and guidance on sizing their resources. …
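As a hedged illustration of what that fencing looks like (the variable names follow ceph-ansible's group_vars conventions; the values are placeholders, not the post's sizing guidance):

# group_vars/all.yml
ceph_rgw_docker_cpu_limit: 4        # CPU cores fenced per RGW container
ceph_rgw_docker_memory_limit: 8g    # memory cap per RGW container
radosgw_num_instances: 2            # RGW containers per physical node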