Technical Service Engineer. I mostly write curated articles about the ARCUS Cache Cluster.

When performance issues occur during service operation due to heavy load, we typically solve them by scaling up or scaling out. Scale-up simply means replacing the server's hardware with something more powerful. Although it is the simplest solution, performance growth is limited and the service must be restarted. Scale-out, on the other hand, adds computing power in parallel, allowing continuous scaling, but the need to establish a distributed architecture and accompanying policies increases complexity. ARCUS supports scale-out through its cluster feature, enabling distributed storage of service…


The ARCUS in-memory cache cluster uses memory as a high-performance storage medium for cache data. Because memory is volatile, however, all stored data is lost when the system shuts down, whether for an upgrade, a failure, or equipment replacement. To support use cases beyond pure caching, a data persistence feature was developed for ARCUS that preserves data permanently. In this article, I will introduce the ARCUS data persistence feature and how to use it.

Overview of ARCUS Persistence

ARCUS data persistence provides the ability to fully recover data…


If you are a developer applying a cache to an application for the first time, you might not get it right on the first try. There are various caching patterns to choose from, so let's talk about the most commonly used one, the Demand-fill caching pattern, and the problems you might encounter while applying it to your application. To solve these problems we'll take a glimpse into Spring AOP, and lastly I will introduce the features the Java-based ARCUS common module provides using Spring AOP.
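As a rough preview of the AOP idea, here is a minimal sketch using a plain JDK dynamic proxy (not the actual ARCUS common module; `ProductService`, `DbProductService`, and `CachingHandler` are hypothetical names invented for illustration). The proxy intercepts method calls and serves repeated calls from a cache, which is the same interception principle a Spring AOP caching aspect builds on:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A service interface whose results we want to cache transparently.
interface ProductService {
    String findName(long id);
}

// Pretend implementation that "queries the database" on every call.
class DbProductService implements ProductService {
    int dbHits = 0; // counts how often the backing store is actually hit
    public String findName(long id) {
        dbHits++;
        return "product-" + id;
    }
}

// AOP-style interception: every call goes through invoke(), which checks
// a cache before delegating to the real service.
class CachingHandler implements InvocationHandler {
    private final Object target;
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    CachingHandler(Object target) {
        this.target = target;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        String key = method.getName() + ":" + Arrays.toString(args);
        Object cached = cache.get(key);
        if (cached != null) {
            return cached;                           // cache hit: skip the real call
        }
        Object result = method.invoke(target, args); // miss: call the real service
        cache.put(key, result);                      // fill the cache
        return result;
    }
}
```

A caller obtains the proxy via `Proxy.newProxyInstance(...)` and uses it exactly like the original service; the caching is invisible to the call site, which is the appeal of the AOP approach.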

Demand-Fill Cache

The Demand-fill cache pattern is a method…
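The Demand-fill (cache-aside) pattern can be sketched roughly as follows. This is a minimal illustration, not ARCUS code: the `ConcurrentHashMap` stands in for an actual cache client, and `DemandFillCache` and `loader` are hypothetical names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Demand-fill (cache-aside): look in the cache first; on a miss, load the
// value from the backing store and fill the cache before returning it.
class DemandFillCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>(); // stand-in for a real cache
    private final Function<K, V> loader;                       // e.g. a DB query

    DemandFillCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        V value = cache.get(key);
        if (value == null) {            // cache miss
            value = loader.apply(key);  // fetch from the origin store
            cache.put(key, value);      // fill the cache for later requests
        }
        return value;
    }
}
```

Note that in this naive form, concurrent misses for the same key can each hit the backing store, and there is no expiration; these are exactly the kinds of problems one runs into when applying the pattern by hand.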


To accelerate service response and increase query throughput, applying a scalable in-memory cache system has become common practice in many application services. The original data is stored in a persistent store such as a DB, while frequently viewed data is kept in an in-memory cache system, improving performance by quickly serving repeated query requests. For such cache systems to maintain uninterrupted service in the face of failures, they must be given fault-tolerance characteristics. …


What is an Operator?

Over the last few years, many companies have abandoned the traditional monolithic architecture in favor of a microservice architecture, which has naturally changed the way software is operated. As container technologies have become more popular, Kubernetes, the representative orchestration tool for automating distributed container environments, is increasingly being used.

The benefits of container orchestration features such as Kubernetes' scheduling, scaling, self-healing, and resource management in automating the operation of application services are enormous. However, moving applications and external solutions running in existing on-premise environments to Kubernetes is not easy. It's easy for stateless applications to operate with…
