An exercise with MinIO & Imgproxy

Oscar Oranagwa
4 min read · Jan 5, 2022


This is a sample setup showing how to wire MinIO & Imgproxy together.

The aim is to provide functionality for uploading images to any arbitrary cloud storage, complete with dynamic image processing. It was born out of a project requirement a friend shared with me recently. A summary of the requirement is as follows:

  • users upload image(s) to the server backend
  • the server backend performs some predefined validations on the input
  • on successful validation, the backend uploads the image to a cloud storage with optional image processing.
  • the backend also stores the image URL in a database, associated with the related resource.
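
The validation step in these requirements can be sketched with Go's built-in content sniffing. The function name and the allowed set of types below are illustrative assumptions, not part of the original project:

```go
package main

import (
	"fmt"
	"net/http"
)

// validateImage sniffs the first bytes of an upload (512 are enough for
// http.DetectContentType) and accepts only common image formats.
// The allowed set here is an assumption for illustration.
func validateImage(head []byte) error {
	switch ct := http.DetectContentType(head); ct {
	case "image/png", "image/jpeg", "image/gif", "image/webp":
		return nil
	default:
		return fmt.Errorf("unsupported content type %q", ct)
	}
}

func main() {
	pngHeader := []byte("\x89PNG\r\n\x1a\n") // the 8-byte PNG signature
	fmt.Println(validateImage(pngHeader))    // <nil>
}
```

Sniffing the bytes rather than trusting the client-supplied `Content-Type` header keeps the validation independent of anything the uploader claims.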

An optimal solution would be to use a third-party service, like Cloudinary, that offers all the desired features. But as an exercise in curiosity, I set out to explore how various free services could be married together to achieve the same goal.

Moreover, one strong ask that came with the requirement was to avoid coupling the backend to any one cloud storage provider. That is to say, it should be seamless to switch between providers.

With the requirements set, let’s begin by reviewing the key pieces:

Image storage

This will be any long-term storage solution capable of storing images. The specific choice does not matter as long as it is not coupled to the server backend (and vice versa).

Image processing

Any image processing service that is capable of processing images in (near) real-time will suffice. “Processing” here includes resizing, cropping, and any other image manipulation. And of course, processing should be done in a way that is also not coupled with the server backend.

Note that there is no requirement that this cannot also be the image storage.

Server backend

Whatever infrastructure we arrive at should be transparent to the user, courtesy of this backend. It will be the service that handles all image upload requests: performing validations, handing the image off to storage, and saving the image URL in the service database.

Now that we’ve analyzed the key pieces of the requirements, enter MinIO, Imgproxy, and Afero.

MinIO

MinIO is a high-performance, distributed object storage service with an API compatible with AWS S3. It is highly scalable and highly available.

Two of its many selling points that are of interest to this project are:

  • It provides an AWS S3-compatible API, the same API most cloud storage providers try to support. This enables swapping between it and any other cloud storage provider with minimal code changes.
  • It’s available for every major cloud platform. The implication being that, beyond its “swap-ability”, it can easily be deployed to any cloud platform.

Imgproxy

Imgproxy is a tool that provides image processing services on the fly. It boasts a wide range of features, including resizing and cropping, all delivered with appreciable speed and without compromising on security.

The fact that it’s built around processing remote images, with support for S3-compatible APIs, is the key to this little endeavor.

Afero

For the backend server, we will be using Go. And yes, any language will work fine. The choice of Go is largely to replicate the technologies available to the target project.

Poking through Go’s open-source ecosystem for filesystem abstractions, we find a community favorite: Afero.

Afero is a library that provides a uniform interface for abstracting any filesystem. It’s a lightweight, simple, and flexible library that should help us decouple from the specifics of any underlying storage system. Using it, the backend can interact with MinIO without depending on MinIO-specific details.

Putting these together, we have the following orchestration:

[Diagram: client → server backend → MinIO + Imgproxy; icons are trademarks of their respective companies]
  • client/frontend/user uploads image to the server backend
  • server backend performs validations on the image
  • server backend generates a unique key for the image
  • server backend uploads the image to the (MinIO) storage with the unique key
  • server backend saves the image unique key in the database
  • server backend generates a signed URL for retrieving (and processing) the image via Imgproxy using the unique key
  • server backend returns the signed URL to the caller

This is all tucked together in this docker-compose.yaml

The obscure piece in the compose file is, possibly, the environment variables. docker-compose takes care of these as long as we provide a .env file in the same location as the compose file, or pass one to the docker-compose command via the --env-file argument.

We can derive a usable .env file from this sample

by running the command:

eval "echo \"$(cat .env.sample)\"" > .env

With that taken care of, the last piece is the following go file to act as the server backend and handle the orchestration listed above.

And that’s all.

Let’s take it for a test run. Create a folder with all the files above (remember to generate a .env file from the .env.sample file using the command above). Open a terminal within the folder and run docker-compose up. Once all the services are running, send

curl --location --request POST 'http://0.0.0.0:50100/' \
--form 'image=@"/path/to/an/image/file.png"'

to receive a response of the form:

Image URL: http://localhost:50200/[some-signature]/rs:fill:300:300:1/g:no/[some-encoded-path].png

And visiting the returned image URL should reward you with your earlier image resized to 300 x 300 🎉

Closing notes

A production setup bent on not using any of the available paid offerings will need to revisit the parts not focused on here: validations, better credential management, error handling, and persisting the image key to the service datastore.

By persisting just the key, we allow the image URL to be ephemeral. This makes it possible to rotate the service credentials whenever desired, with no worry about earlier images.

Similarly, the abstractions achieved yield many benefits, allowing each of the pieces (image storage, image processing, and backend) to evolve independently.
