A Resizing Service on Steroids… Azure Functions!

Playing around with Azure Functions and using them in different scenarios starts paying off. So much is doable with them, in such a super easy way! I know, I’m a fangirl ;)

To be honest, in this very case the implementation scenario was not my idea, but the idea of a customer of our Developer Experience division @ Microsoft. The customer wanted to have a basic and easy to implement image resizing service in place, which is dynamically scalable and… fast!

So @BlauJule and I sat together and thought about how this could be done in an easy and efficient way for our customer.

We came up with the idea of using two functions. And this time, because we are using a perfectly fitting 3rd party library, both functions are written in C#.

The whole code for the service as well as the easy deployment can be found on GitHub.

No architecture without a common image

As already said we are using Azure Functions — two instances of them — in combination with Azure Blob Storage to build the service and persist the images.

The service gets called via an HTTP POST request where an image is attached. This image gets saved to the blob storage container “original”.

Then the trigger of the second function “sizeImage” fires, which starts the resizing by first fetching the original and then saving all resized images into the blob storage container “sized”.

Meanwhile, the REST API request has already responded with all the available URLs of the newly resized image. Nice one :)

But now let’s dive into the details!

Function 1 — Save the image (save the world :D)

This is the function “saveOriginal”, which is accessible via a REST API endpoint and takes an HTTP POST request. The function is of the type HTTP trigger, where the availability via an endpoint is baked in.
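In the classic Azure Functions scripting model, such a trigger is declared in a function.json file next to the function code. A minimal sketch of what “saveOriginal”’s bindings could look like (the exact auth level and names depend on the repo):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```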

The image data is attached directly to the request body. The URL parameter imgName holds the explicit name which will be used within the resizing system as the image name.

The function runs as soon as the endpoint receives an HTTP POST request. It takes the raw image data and stores it as an image file in the connected Azure Blob Storage, under the imgName given via the URL parameter.
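A call to the service could look like this, assuming a standard Function app host name (the host and file name here are placeholders, not from the repo):

```shell
# POST the raw image bytes to the saveOriginal endpoint;
# imgName is the URL parameter described above
curl -X POST \
     -H "Content-Type: application/octet-stream" \
     --data-binary @cat.jpg \
     "https://myresizer.azurewebsites.net/api/saveOriginal?imgName=cat.jpg"
```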

All original images are stored in the “original” blob storage container — which is just some separate “folder”. This way the originals won’t get mixed up with the sized images and moreover won’t need cryptic filenames to be distinguishable.
var attributes = new Attribute[]
{
    new BlobAttribute($"originals/{name}"),
    new StorageAccountAttribute("AzureWebJobsStorage")
};

using (var writer = await binder.BindAsync<CloudBlobStream>(attributes).ConfigureAwait(false))
{
    var bytes = await req.Content.ReadAsByteArrayAsync();
    writer.Write(bytes, 0, bytes.Length);
}

The mechanism of storing files under a custom path within a blob storage is not that trivial, but as soon as you take a solid look at the code everything becomes clear. First we define the attributes of the CloudBlobStream as an array of values, then we write the image, as a byte array, into the blob storage. Done!

The function then returns a dictionary with an HTTP 200 response. This dictionary contains the URLs for all available sizes of this very image in the format {“string”, “string”}: the key is the size of the image and the value is the corresponding URL.
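Building that response could be sketched roughly like this (the storage account name, container layout, and the exact URL pattern are assumptions here, not taken from the repo):

```csharp
// Sketch: build the size -> URL dictionary for the response.
// "name" is the imgName from the URL parameter; "myaccount" is a placeholder.
var urls = new Dictionary<string, string>();
foreach (var width in new[] { 200, 100, 80, 64 })
{
    urls.Add(width.ToString(),
        $"https://myaccount.blob.core.windows.net/sized/{width}/{name}");
}
return req.CreateResponse(HttpStatusCode.OK, urls);
```

Note that the URLs are computed, not checked: the resized blobs may not exist yet when the response goes out, which is exactly the caveat mentioned at the end of this post.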


Function 2 — The real sizing!

This function isn’t called by the first one. It is triggered by changes in the connected blob storage. In correct technical terms, the function has an Azure Blob storage trigger.

As soon as an original image is stored to the blob storage, the function is called and starts its job.
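In the function.json model, this blob trigger could be declared along these lines (a sketch; the binding names are assumptions, while the path matches the “originals/{name}” pattern used by the first function):

```json
{
  "bindings": [
    {
      "type": "blobTrigger",
      "direction": "in",
      "name": "blob",
      "path": "originals/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```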

The sizing itself is simple, yet static. A bunch of sizes are configured, like widths of 200, 100, 80, 64… The function takes the image from the “original” container in the blob storage and starts the resizing.
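Iterating over those configured widths could look like this sketch, which calls the resizeImage helper shown next (rewinding the stream between passes; with ImageProcessor, a height of 0 keeps the aspect ratio):

```csharp
// Sketch: resize the triggering blob once per configured width.
var widths = new[] { 200, 100, 80, 64 };
foreach (var width in widths)
{
    blob.Position = 0; // rewind the original stream before each pass
    await resizeImage(blob, new Size(width, 0), binder);
}
```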

public static async Task resizeImage (Stream blob, Size size, Binder binder) {
    using (var imageFactory = new ImageFactory()) {
        var outStream = new MemoryStream();
        imageFactory.Load(blob).Resize(size).Save(outStream);
    }
}

The resizing itself is rather easy, just one line of code: the blob, the originally uploaded image, is first loaded, then resized to the given size and then saved to a memory stream.

For the resizing we are using the ImageFactory from the ImageProcessor NuGet package. To attach a 3rd party library to your function, just create a new file in your function folder called “project.json” and add the dependency there.

{
  "frameworks" : {
    "net46" : {
      "dependencies" : {
        "ImageProcessor" : "2.4.5"
      }
    }
  }
}

As soon as you save and redeploy your function the NuGet package gets installed automatically. Nice huh!

The saving of each resized image to its separate “sized” container works exactly the same as for the originally uploaded image.

After the image sizing is done the content in the blob storage is neatly separated into the two mentioned blob storage containers “original” and “sized”. Within “sized” the images are again sorted by the sizes.

A super important thing to do is to set the “Public Read Access” permission of the two blob storage containers to “Container”. This lets anybody who has the URL read/download the image.
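If you prefer the command line over the portal, the Azure CLI can set this permission too (the storage account name here is a placeholder):

```shell
# Set public read access on both containers so image URLs work anonymously
az storage container set-permission --name original --public-access container \
    --account-name <storage-account>
az storage container set-permission --name sized --public-access container \
    --account-name <storage-account>
```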

But what’s with the deployment?

Don’t be afraid, take this! The source code on GitHub is instantly deployable. The ARM template, which contains all necessary information for a successful deployment, is in the GitHub repo as well.

You can choose whether you want to deploy with just a button click, with the Azure CLI or with PowerShell.
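The CLI route could look roughly like this, assuming the template file follows the usual azuredeploy.json naming (resource group name and location are placeholders):

```shell
# Create a resource group and deploy the ARM template into it
az group create --name resizer-rg --location westeurope
az group deployment create --resource-group resizer-rg \
    --template-file azuredeploy.json
```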

To fully understand what happens during the ResizingService deployment with the ARM template, let’s have a look at the illustration.

The repo contains the JSON files of the ARM template and the source code, ordered in folders, for the two Azure functions.

The deployment consists of two steps which are done sequentially:

  • First all services are provisioned in Azure (the Function app, the Blob Storage, Linking, some additional configurations…)
  • Then the GitHub repository gets linked to the Function app and the function instances get deployed — and are immediately up and running. Stunning, right?!

And now you are ready to go. 
Try it out and let me know how it works out for you!

Happy coding :)

This service is not production ready, it’s a prototype! It is a first implementation on the way to production. Currently no error handling is included! The asynchronous way the images are resized and stored, while the response already delivers the URLs, is dangerous too and not rock solid yet. So be aware: this is a prototyped service!