Taking a minor break from the “ElasticSearch on K8s” series. You can follow the rest of it on chamilad.github.io.
tl;dr: I’m moving back to maintaining my own site rather than depending on Medium as a platform for technical blogging, for various reasons. Also introducing a tool to convert from Medium to Hugo.
To start with somewhat of a boast, I’ve been writing blog posts for more than a decade now, on various platforms, from SaaS ones to self-hosted solutions. Most of the posts are now nowhere to be seen since I cannot seem to make up my mind about where…
This is part of a series of short articles on setting up an ELK deployment on K8s.
The typical task for a log collection tool is to collect a specified set of logs, from a specified set of locations, and offload them to a specified endpoint. Let’s explore these three aspects in detail.
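These three aspects map directly onto the configuration of a typical log shipper. A minimal Filebeat-style sketch, purely as an illustration (the paths and the endpoint here are hypothetical, not from any real deployment):

```yaml
filebeat.inputs:
  # which logs, and from where: a set of file paths to tail
  - type: log
    paths:
      - /var/log/myapp/*.log

# where to offload them: the specified endpoint
output.elasticsearch:
  hosts: ["elasticsearch.example.com:9200"]
```

Any collection tool worth its salt boils down to some variation of these three blocks: a set of sources, a way to read them, and a destination.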
In a K8s environment, all logs of interest are generated as Docker Container logs that capture each Container’s stdout. These logs are persisted on the host node, typically in
Log aggregation in a K8s environment is something I have lightly touched upon previously on multiple occasions. However, setting up a minimal but reliable log aggregation stack on top of K8s can quickly become an evolutionary process, with each step improving on the previous one (and of course, everyone thinks they can do log aggregation before they actually start to do so). The following is a pattern for ELK I came across while improving such a stack. …
In this post, I’m going to tackle a topic that any K8s novice would start to think about, once they have cleared the basic concepts. How would one go about exposing the services deployed inside a K8s cluster to outside traffic? The content and some of the diagrams I’ve used in the post are from an internal tech talk I conducted at WSO2.
Before we move on to the actual discussion, let’s define and agree on a few terms, as they can be confusing if each is not pinned to a specific meaning first. …
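To make the question concrete: the most basic mechanism K8s offers for exposing a workload to outside traffic is a NodePort Service, which opens the same port on every node in the cluster. A minimal sketch (all names and port numbers here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app        # routes to Pods carrying this label
  ports:
    - port: 80         # the Service's cluster-internal port
      targetPort: 8080 # the Container port traffic is forwarded to
      nodePort: 30080  # the port opened on every node
```

With this in place, traffic hitting `<any-node-ip>:30080` reaches the matching Pods; the later options discussed (LoadBalancer Services, Ingress) build on top of this same Service abstraction.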
Many months ago, a technical writer colleague of mine complained about how they were struggling to keep up with the frequent releases that the company was doing at the time. There were multiple products on their plate, each with multiple configuration files (sometimes numbering more than 10). Although the configuration files overlapped within each product, because of the componentized platform the company had built the products upon, each product could in theory have different release versions of the components that used these configuration files. All of this had to be documented as readable (and, most importantly, usable) technical content.
Deploying WSO2 products on Containerized platforms is a well-tested, well-resourced activity. There are various resources available to deploy WSO2 products on Docker, Kubernetes, CloudFoundry, AWS ECS, and Apache Mesos, both officially and unofficially. However, designing a Docker image to achieve optimal non-functional traits like performance, operational efficiency, and security is a separate topic in itself.
WSO2 products follow a standard structure when it comes to configuration, data, artifacts, and logging. Configuration files are found in the <CARBON_HOME>/repository/conf folder, data in <CARBON_HOME>/repository/data, and artifacts in <CARBON_HOME>/repository/deployment (or in the <CARBON_HOME>/repository/tenants folder if you’re into multi-tenancy). All the log files are written into the <CARBON_HOME>/repository/logs folder.
All log events are output as entries to files through Log4J. Because of this, when it’s time to attach WSO2 logging to a log aggregator, it’s a matter of incorporating a tailing file reader agent and pointing it at the <CARBON_HOME>/repository/logs folder. For example, with ELK this could be something like Filebeat.
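A minimal sketch of what pointing Filebeat at this folder could look like, using Filebeat’s log input syntax (the Elasticsearch endpoint is hypothetical, and <CARBON_HOME> stands in for the actual product installation path):

```yaml
filebeat.inputs:
  # tail every log file Log4J writes into the standard WSO2 logs folder
  - type: log
    paths:
      - <CARBON_HOME>/repository/logs/*.log

# ship the entries to the aggregation endpoint (hypothetical host)
output.elasticsearch:
  hosts: ["elasticsearch.example.com:9200"]
```

Because WSO2 products keep logging in this one predictable location, the same input block works across the product line without per-product tweaks.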
WSO2 API Manager, the only Open Source Leader in API Management Solutions in the Forrester Wave, packs in a wide range of advanced API Management features that cover a number of end-user stories. Through customizations introduced at the extension points available throughout the product, WSO2 API Manager can be adapted to almost all API Management scenarios imaginable.
An interesting scenario on API Management is how to perform Continuous Integration and Continuous Delivery of APIs. …
In the previous post, we measured the temperature of the water on what Observability is and why it should be a first-class consideration in system design. Let’s explore the possibility of a structured approach to designing observable systems.
In short, because Observability has to be designed into a system rather than be considered as an on-the-spot hack.
For example, take High Availability of a system. The critical points in a Solution Architecture diagram are analyzed and improved so that there are no single points of failure. The nodes (and lines that connect them), in other words execution points and…
Before we dive into the water, we need to define what observability is. Let’s go for some tweets first.
What’s interesting though is her tweet before that.
Yes, the term has been thrown around with pretty much any personal meaning you could add during the process. It’s not a rad new word for your existing monitoring setup. It’s not a new way to do tracing. And it’s certainly not that new flashy dashboard someone just designed.