Migrating your app to Kubernetes: what to do with files?

Flant staff
Nov 6, 2019 · 7 min read

While building a CI/CD pipeline around Kubernetes, you may run into a situation where the requirements of the new infrastructure and the application being migrated to it are incompatible. In particular, it is essential to build a single image of the application* that will be used in all environments and clusters of the project.

* This principle underlies the correct approach (according to Google, Red Hat and many others) to managing containers.

Yet there are situations when a site uses a ready-made framework that imposes restrictions on how it can be operated. You can easily find a way around this in the “standard environment,” but in Kubernetes, things are much more complicated, especially when you encounter this problem for the first time. Sure, an inventive mind may come up with infrastructure workarounds that seem obvious and compelling at first glance… However, keep in mind that in most cases the proper fix is an architectural one.

Let’s examine popular workarounds for storing files that may have unintended consequences during cluster operation. We will also provide a more appropriate approach for each of them.

Storing static files

As an illustration, let us consider a web application that uses a static file generator to produce a set of images, styles, etc. For example, the Yii PHP framework has a built-in asset manager that generates unique folder names. As a result, we get inherently non-intersecting sets of paths for the static files of the website. (This is implemented for several reasons — e.g., to exclude duplicates when multiple components use the same resource.)

Out of the box, when the module receives its first request, it creates a directory structure with a root directory unique to this deployment and copies the static files there (often these are symlinks, but more on that later):

  • webroot/assets/2072c2df/css/…
  • webroot/assets/2072c2df/images/…
  • webroot/assets/2072c2df/js/…

What are the risks for the cluster?

Elementary example

Take a fairly typical case where NGINX sits in front of a PHP server. It distributes static files and passes simple requests through. The easiest way to implement such an infrastructure in Kubernetes is to create a Deployment consisting of two containers:
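A minimal sketch of such a Deployment (image names, labels, and ports here are illustrative, not the original snippet):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: site
spec:
  selector:
    matchLabels:
      app: site
  template:
    metadata:
      labels:
        app: site
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
      - name: php
        image: php:7.3-fpm   # the application image with PHP-FPM
        ports:
        - containerPort: 9000
```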

In a simplified form, the nginx configuration boils down to the following:
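For example (a simplified, illustrative config; the web root and ports are assumptions):

```nginx
upstream php {
    server 127.0.0.1:9000;   # PHP-FPM container in the same pod
}

server {
    listen 80;
    root /var/www/webroot;

    # serve static assets directly from disk
    location ~* \.(css|js|jpg|jpeg|png|gif|svg)$ {
        try_files $uri =404;
    }

    # everything else goes to PHP
    location / {
        fastcgi_pass php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    }
}
```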

On the first visit to the website, assets are generated in the PHP container. However, with two containers in one pod, the NGINX server knows nothing about these static files, while it has to serve them (according to the configuration). As a result, all requests for CSS and JS files fail with a 404 error. The most obvious solution here is to create a shared directory for both containers (e.g., a shared emptyDir):
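A sketch of the relevant pod spec fragment with such a volume (the mount path is illustrative):

```yaml
      containers:
      - name: nginx
        # ...
        volumeMounts:
        - name: assets
          mountPath: /var/www/webroot/assets
      - name: php
        # ...
        volumeMounts:
        - name: assets
          mountPath: /var/www/webroot/assets
      volumes:
      # an empty directory shared by both containers for the pod's lifetime
      - name: assets
        emptyDir: {}
```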

Now, static files generated in the PHP container are served correctly by NGINX. But we would like to emphasize that while this solution is simple, it is far from perfect and has its nuances and weaknesses, which are discussed below.

Advanced storage

Let’s imagine a situation where a user visits our website, loads a page with styles stored in the container, and while they are reading it, we redeploy the pod. The asset directory becomes empty, and a request to PHP is needed to generate new files. Yet even then, links to the old static files no longer work, which results in errors when displaying static resources.

Additionally, we assume our project is quite popular, which means that one copy of the application is not enough:

  • Say, we have two replicas of the Deployment.
  • During the initial visit to the website, assets were created in the first replica.
  • At some point, Ingress decides (for load balancing) to send a request to the second replica, but the static assets are not there yet. Or they are already gone because a RollingUpdate is in progress and the pod is being redeployed.

Anyway, the result is the same: more errors.

To continue using already existing assets, you can replace emptyDir with hostPath. This way, you will be storing static files in a directory of the node’s filesystem. The disadvantage of this approach is that it binds the application to a specific cluster node: if the pod is rescheduled to another node, the required files will be missing there (or you will have to implement some sort of background synchronization between nodes).
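A sketch of the corresponding volume definition (the path on the node is illustrative):

```yaml
      volumes:
      - name: assets
        hostPath:
          path: /mnt/static-assets    # directory on the node's filesystem
          type: DirectoryOrCreate     # create it if it does not exist yet
```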

What are the possible solutions?

  1. If you have great hardware and enough resources, you can use cephfs to organize the directory for static files adapted for simultaneous use. The official documentation suggests using SSD drives, triple replication, and a reliable network connection with high network throughput between cluster nodes.
  2. An NFS server would be a less demanding option. In this case, you have to take into account a possible increase in web server response time (as well as lower fault tolerance). The consequences of failure are enormous: losing the mount point can make the load average (LA) on the nodes skyrocket, endangering the whole cluster.
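From the application’s point of view, both options boil down to a PersistentVolumeClaim with the ReadWriteMany access mode mounted into every replica; a sketch (the storage class name and size are assumptions that depend on your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: assets
spec:
  accessModes:
  - ReadWriteMany           # required for simultaneous use by several pods
  storageClassName: cephfs  # or an NFS-backed class
  resources:
    requests:
      storage: 10Gi
```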

Also, all options with persistent storage require background removal of obsolete sets of files accumulated over time. In this case, you can create a DaemonSet that will deploy caching NGINX servers in front of the PHP containers. They will store copies of assets for a limited time. This is easy to configure via proxy_cache_path, setting the desired retention period (inactive) and the maximum cache size on disk (max_size).
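The caching part might be sketched like this (zone name, sizes, retention, and the upstream are all illustrative):

```nginx
# cache assets on local disk, capped at 1 GB,
# dropping entries not requested for 7 days
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=assets:10m
                 max_size=1g inactive=7d;

upstream backend {
    server php-app:80;   # the Service with the application
}

server {
    listen 80;

    location /assets/ {
        proxy_cache assets;
        proxy_cache_valid 200 7d;   # keep successful responses for 7 days
        proxy_pass http://backend;
    }
}
```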

Combining this method with distributed file systems, discussed above, opens up a wide field for imagination limited only by budget or technical expertise of engineers who will be implementing and maintaining the chosen method. Our experience shows that the simpler the system, the more stable it works. The addition of such layers makes the infrastructure maintenance harder, while, along the way, increases the time spent on investigating the problem and recovery in case of failures.


If you find implementing the proposed solutions unjustified (too complicated, expensive, etc.), then you need to step back and look at this the other way. What if you eradicate the problem in its infancy? That is, right in the code — by binding to a static data structure in the image and explicitly defining its contents, by establishing a warm-up procedure, and/or by precompiling assets at the image building stage. This way, you will get predictable behaviour and the same set of files for all environments and replicas of the running application.
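For instance, assets can be precompiled right in the Dockerfile; a rough sketch (the exact command is framework-specific — the Yii2 console call below is shown only as an assumption):

```dockerfile
FROM php:7.3-fpm
WORKDIR /var/www
COPY . .
# Pre-generate the assets at build time so that every replica,
# in every environment, ships the very same set of files:
RUN php yii asset config/assets.php config/assets-prod.php
```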

Let’s get back to our example with the Yii framework. Without delving into its mechanics (which is beyond the scope of this article), we can highlight two popular approaches:

  1. You can modify the image building process so the assets will be placed in a predefined place. Yii2-static-assets and similar solutions follow this approach.
  2. You can define specific hashes for the asset directory (as outlined in this presentation, starting with slide 35). By the way, the author of the presentation recommends uploading assets (after they are built on the build server) to central storage (such as S3) with a CDN in front of it.

Uploaded files

Storing user files on the host’s filesystem is another source of problems when migrating an application to Kubernetes. For example, our PHP application gets user files via a file upload form, processes them in some way, and returns them to the client.

The place where these files are stored must be shared between all application instances in Kubernetes. Depending on the complexity of the application and the need for making these files persistent, you can use shared devices mentioned above, but, as we know, they have their downsides.


One of the possible solutions is to use S3-compatible storage (perhaps some kind of self-hosted storage like minio). To work with S3, you will have to make some changes to the source code. (To serve the content at the frontend, you can use different solutions including ngx_aws_auth or aws-s3-proxy.)
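A bare-bones sketch of a self-hosted minio Deployment (the Secret name and its keys are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data"]   # serve the S3 API over the /data directory
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef: {name: minio-creds, key: access-key}
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef: {name: minio-creds, key: secret-key}
        ports:
        - containerPort: 9000
```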

User sessions

As a side note, we would like to say a few words about storing user sessions. Often they are kept as files on disk (with a session cookie identifying the user). In Kubernetes, such an approach leads to repeated authorization requests whenever the user’s request gets routed to another container.

Partly you can address this issue by enabling stickySessions in Ingress (this feature exists in all popular ingress controllers — check our comparison for more details) to bind the user to a specific pod containing the application:
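With the NGINX Ingress controller, for example, this boils down to a few annotations (host, service name, and cookie parameters are illustrative):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: site
  annotations:
    # issue a cookie that pins the user to one backend pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: site
          servicePort: 80
```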

But this will not prevent problems related to redeployments.


The preferred approach is to store sessions in memcached, Redis, or a similar tool, abandoning file-based storage altogether.
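With PHP and the phpredis extension, for example, this comes down to two php.ini settings (the Redis address is illustrative):

```ini
; store PHP sessions in Redis instead of local files
session.save_handler = redis
session.save_path = "tcp://redis:6379"
```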


The infrastructural solutions described in this article can be applied as temporary workarounds only. They might be implemented during the early stages of migrating an application to Kubernetes, but you should not use them on a permanent basis.

The general recommendation is to get rid of them altogether and to tweak an application to better suit the well-known 12 Factor App methodology. However, making an application stateless means that changes in the code will be required. So, you have to find the right balance between the capacities/requirements of the business and the prospects for implementing and maintaining the chosen approach.

This article was originally written by our engineer Oleg Saprykin. Follow our blog to get new excellent content from Flant!


Professional DevOps outsourcing services with a strong passion for Kubernetes.
