User Experience, the missing pillar for well-architected cloud applications

Given the data explosion we are experiencing, we need cloud architectures that consider not only scalability, reliability, security, and cost, but also user experience. Modern users want results fast and hate staring at loading screens. Our architectures need to consider how to process and analyze these large amounts of data quickly enough to deliver delightful user experiences.

User interface design certainly plays a big role in creating a good user experience. But the truly great experiences happen when the software anticipates the user's desires and does something they didn't expect but find genuinely useful.

Let's take Amazon.com as an example. It isn't popular for being an artistically beautiful website; it's popular because it recommends products you may actually like, or because it notifies you when the price of a product you viewed in the past has gone down. And those things happen automatically! You don't have to click a button that says "Calculate recommendations" and then wait 10 seconds for the recommendation engine to come back with results. As soon as you land on the website, the recommendations are already there. That doesn't mean the server is really fast; it means Amazon is most likely pre-computing your recommendations asynchronously, ahead of time, so they are ready when you need them.

Traditional architectures

Sadly, there are still modern applications doing a lot of compute on the fly instead of ahead of time. The result is slow websites and mobile apps with a poor user experience that leads to frustrated customers. The following image is an example of a traditional architecture that tends to cause these issues.

Traditional architecture

This happens mainly because software used to be built as desktop applications, where resources like memory and CPU are limited. That constraint doesn't exist in the cloud: compute power and memory are virtually unlimited, and not using them is a wasted opportunity to create business value.

Information at Rest
In the previous diagram, the database tier is where the data at rest resides. And that is exactly the problem: it's still data, not information. It's unorganized facts that need to be transformed to create useful information. But what if, instead of having data at rest, we focus on having information at rest, where no further compute or transformation is required to display it?

Of course, we can still store the raw data for historical purposes and to be able to re-create the information, but since we won't be accessing it very often, we can keep it in less expensive services like Amazon S3.

You may not need an analytics database

Oftentimes I hear things like "The UI is slow because the query is taking too long in the database; we need to migrate to a database optimized for analytics", or "We need to increase the DynamoDB throughput so the aggregation scan runs faster". A lot of the time this can be avoided. Yes, the queries are slow, because you are trying to do massive compute on the fly, and you are doing it every time the user lands on the website. What you can do instead is pre-compute as much as possible upfront and store the results ready to be displayed.

If we switch our mindset to pre-computing everything ahead of time, the architecture changes dramatically: things now happen asynchronously, in response to events or to new data. Suddenly our applications are working in the background, transforming data into information and storing it ready to be presented right when the customers need it. And that's what can really create a delightful user experience.

Compare the following architecture with the one shown above.

They are vastly different: compute happens in response to events and new data. Data is analyzed, transformed, and ultimately stored in S3 in a ready-to-display format (e.g., JSON). Zero compute happens on the fly; the browser consumes and displays information directly from S3.
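To make the consumer side concrete, here is a minimal sketch, assuming the pre-computed information lives at a hypothetical public S3 URL (a Python script stands in for the browser here):

```python
import requests

# Hypothetical URL of a pre-computed, ready-to-display JSON object on S3.
# Any client simply downloads it: no query, no aggregation, no compute
# on the fly.
URL = "https://my-app-information.s3.amazonaws.com/dashboard/latest.json"

dashboard = requests.get(URL, timeout=10).json()
print(dashboard)
```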

Pre-compute everything? That sounds expensive

Well, not necessarily. But let’s review a couple of options to mitigate this.

Immutable Data
More often than not, we are dealing with data that never or rarely changes. Let's take the example of the Simple Beer Service, which illustrates a fairly common scenario in modern cloud applications: information is collected and sent to the cloud, and a UI lets you visualize the results of the collected data, usually in the form of a chart or a dashboard. In the case of the Simple Beer Service, a Raspberry Pi with a flow sensor connected to a kegerator measures how much beer flows through the kegerator pipes and sends the data to a Kinesis stream; you can then watch the beer flowing in real time by going to simplebeerservice.com.

So, think about it: once the Raspberry Pi has collected the data from the sensor and produced an output, it won't ever change; it's immutable. The beer that flowed at any given point in time has been captured, and that value will not change over time, so we can play this to our advantage. If the data never changes, why wouldn't we compute every possible analysis against it right away? After all, the results of the compute shouldn't change either. There is no reason to compute results over and over again when we can do it only once, as soon as we receive new data.

Therefore, we can say that pre-computing results is appropriate and cost-effective when dealing with immutable data.

Serverless
This new evolution of the cloud enables event-driven, asynchronous compute that tends to be cost-effective, and I'm talking specifically about functions as a service, like AWS Lambda. These small pieces of source code are executed in response to events, and there are many event types we can use and configure. This paradigm is often cheaper than having a virtual machine running all the time, because with serverless functions you pay in increments as small as 100 ms of compute, as opposed to paying by the hour.
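As a minimal sketch, a Lambda function is just a handler that the service invokes when an event arrives; the event's shape depends on the configured trigger (an S3 upload, a Kinesis record, a scheduled rule, and so on):

```python
# Minimal AWS Lambda handler (Python runtime). The function only runs
# when an event fires, and you are billed only for the time it executes.
def handler(event, context):
    # 'event' carries the trigger's payload; 'context' carries runtime
    # metadata such as the remaining execution time.
    print(f"Invoked with: {event}")
    return {"status": "ok"}
```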

The following diagram could be one option to architect the Simple Beer Service without using servers at all, following an Information at Rest approach.

The beer flow comes through a Kinesis stream, where a Lambda function picks it up and saves it to S3 (this could also be achieved with Kinesis Firehose). The browser then reads the JSON from S3 and displays the beer flow visualization.
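A sketch of that Lambda function, assuming readings arrive as JSON on the stream and using a hypothetical bucket name (Kinesis delivers records base64-encoded inside the Lambda event):

```python
import base64
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "simple-beer-service-data"  # hypothetical bucket name


def handler(event, context):
    # Decode the batch of Kinesis records delivered to this invocation.
    readings = [
        json.loads(base64.b64decode(record["kinesis"]["data"]))
        for record in event["Records"]
    ]
    # Overwrite the ready-to-display JSON object that the browser reads
    # directly from S3; no server sits in between.
    s3.put_object(
        Bucket=BUCKET,
        Key="flow/latest.json",
        Body=json.dumps(readings),
        ContentType="application/json",
    )
```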

But let's say we also need to provide a dashboard with statistics and aggregations of how much beer is flowing per day, per kegerator. Since we need a running count of flow, Lambda functions don't hold any state, and we can't rely on S3 for atomic updates, we can use a temporary in-memory storage service like Amazon ElastiCache to keep the running count per day per kegerator. One Lambda function writes the incoming data to ElastiCache, and another reads from ElastiCache and updates S3 with the latest statistics.
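A sketch of that pair of functions, assuming ElastiCache is running the Redis engine and using hypothetical endpoint, bucket, and field names:

```python
import base64
import json
from datetime import datetime, timezone

import boto3
import redis

# Hypothetical ElastiCache (Redis) endpoint and S3 bucket.
cache = redis.Redis(host="beer-stats.abc123.cache.amazonaws.com", port=6379)
s3 = boto3.client("s3")


def update_counts(event, context):
    """First Lambda: fold incoming Kinesis readings into running counters."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    for record in event["Records"]:
        reading = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # One counter per day, per kegerator, incremented atomically.
        cache.incrbyfloat(f"flow:{day}:{reading['kegerator_id']}", reading["flow"])


def publish_stats(event, context):
    """Second Lambda (e.g. scheduled): snapshot the counters into S3."""
    stats = {key.decode(): float(cache.get(key)) for key in cache.scan_iter("flow:*")}
    s3.put_object(
        Bucket="simple-beer-service-data",
        Key="stats/daily.json",
        Body=json.dumps(stats),
        ContentType="application/json",
    )
```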

Another alternative is to use Amazon Kinesis Analytics to query the streaming data coming from Kinesis and create real-time visualizations out of it.

The point is that we didn't have to use any servers or an expensive analytics database. We were able to transform data into information right as it came in.

Information Versioning

Like everything in the software world, every architecture has trade-offs. In this case, pre-computing and storing information in a ready-to-display format makes it harder to change your data model: it basically means you need to re-compute everything to match the new data model.

A proposed solution is to version the information you store on S3 and have your clients support previous versions. You can then slowly migrate the old information into the new format. This is where we may need to access the raw data again.
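One lightweight way to do this is to embed a schema version in each stored JSON document and have the client dispatch on it; a sketch with hypothetical v1 and v2 formats:

```python
import json


def parse_flow_document(raw: str) -> dict:
    """Client-side reader that understands more than one stored format."""
    doc = json.loads(raw)
    version = doc.get("version", 1)  # documents written before versioning are v1
    if version == 1:
        # Hypothetical v1: a bare list of readings, unit implied.
        return {"readings": doc["readings"], "unit": "ml"}
    if version == 2:
        # Hypothetical v2: the unit became an explicit field.
        return {"readings": doc["readings"], "unit": doc["unit"]}
    raise ValueError(f"Unsupported document version: {version}")
```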

Lessons learned

Architecture choices play an important role in creating good user experiences.

Using the Information at Rest architecture is appropriate when data is immutable and when the number of algorithms/processes/analyses is finite.

Amazon S3 is an excellent service for storing JSON results. It is very inexpensive and provides high availability, good performance, file versioning, and security via pre-signed URLs.
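For example, a pre-signed URL grants temporary, read-only access to a single object without making the bucket public; a sketch with boto3 and hypothetical bucket/key names:

```python
import boto3

s3 = boto3.client("s3")

# Generate a link, valid for one hour, to one JSON object. The URL can
# then be handed to the browser.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "simple-beer-service-data", "Key": "stats/daily.json"},
    ExpiresIn=3600,
)
print(url)
```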

Information at Rest generally results in a more cost-efficient architecture than traditional ones. However, cost becomes less important when the real value added is a delightful user experience.

tl;dr

Event-driven asynchronous architectures that leverage the power of the cloud can enable modern applications to deliver a great user experience. The goal is to produce information in response to new data, produce it as soon as possible and store the results in a format ready to be consumed by clients, where no further compute is required to display them.

Credits

Thanks to James Briant and Jeff Bonevich, as I used some of their ideas to write this article.