High performance — dream and reality


We authors all know it for sure: runtime performance is so important for our software applications that everything else pales in comparison when it's poor. All right, everything else except UX, but that's partially related too.

We devs know this as well, however: it's so difficult to get even "bearable" performance for our apps once they are put under heavy enough data loads. Programmers are simply too optimistic: "nobody will load a million items." And then reality strikes: one day, one customer needs to load two million.

Hours and hours of extra development are then burned on the project trying to improve things "as much as possible" for that customer. And that often means just "a bit better", unfortunately, simply because the original software design isn't flexible enough for the required core improvements.

Indeed, there is a principle in software development that seems like common sense at first sight: "better build first, and optimize later." Right?

Right. Until this becomes a showstopper. Such as when "optimizing later" would mean, um, refactoring more than 80% of the code — and in that case we'd better rewrite the whole thing from scratch. (That's common sense too, ain't it?)

It's indeed very difficult to properly balance runtime optimization needs early in the software development process, i.e. during the initial design phases of the project. Spotting the possible performance bottlenecks requires deep analysis, and even the most experienced people can miss things.

In theory, what we must do is just ensure the architecture is extensible enough to allow optimizations where and whenever they're needed. A generic cliché, you're right.

In practice, however, we've found it's easier to always think about performance early, while of course making sure we avoid over-engineering.

Or, put differently, we should always consider the common extreme use cases (but not go further than that), while making sure the simple scenarios remain easy to handle too.

A concrete example

Let's say we need to develop a grid screen that is horizontally and vertically scrollable, as needed. We might assume at first that it's going to be scrolled mostly vertically (i.e. the end user would have many rows) but only a bit horizontally (the customer won't have thousands of columns, right?).

With this in mind, we add vertical but no horizontal virtualization support to our user interface. And it works well for a while. Until a customer dynamically adds 3,652 "day" columns, covering a period of 10 years, for example.
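To see what that decision looks like in code, here is a minimal sketch of a row-only virtualized data source, in Swift. The names are hypothetical (they don't come from any particular framework): the grid requests only the rows currently in view, yet every returned row still carries all of its cells, however many columns the data has.

```swift
// Hypothetical row-only virtualization: the grid asks just for the rows
// currently scrolled into view, but each returned row still contains
// every one of its cells.
protocol RowVirtualizedGridDataSource {
    var rowCount: Int { get }
    func cells(forRowsIn range: Range<Int>) -> [[String]]
}

struct InMemoryGridDataSource: RowVirtualizedGridDataSource {
    let rows: [[String]]          // all cells of all rows, held up front
    var rowCount: Int { rows.count }

    func cells(forRowsIn range: Range<Int>) -> [[String]] {
        // Only the visible rows are materialized for display, but nothing
        // limits how wide each row is: with 3,652 "day" columns, every
        // visible row is still fully built and laid out.
        return Array(rows[range.clamped(to: rows.indices)])
    }
}
```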

Providing horizontal virtualization support at this point might be very, very difficult. Especially if the platform itself provided the 1D virtualization infrastructure we relied on in the first place, but isn't ready for 2D.

Customizing things now would simply require too many internal engine changes, and we don't want to jeopardize the component either — other customers are already happy. And, as we all know, every change brings an increasingly higher risk of issues, requiring full regression testing each time.

So, could this have been addressed earlier, to avoid the hassle?

We'd say yes: by spending more time in the initial design phase of the project, doing deeper research rather than just going with the original developer assumptions.

Existing customers (of other products, such as those running on older platforms) could help us as well — we could simply go back through some of the support e-mail messages they have sent over time and learn valuable lessons.

At DlhSoft we had this exact situation when we designed the Ganttis framework.

Supporting a few items wouldn't require virtualization (left), but trillions of items loaded dynamically upon vertical/horizontal scrolling would (right)

Specifically, we could have assumed that the timeline defined for a Gantt chart is always short enough and opted for virtualizing only the rows. But after further brainstorming and discussions (and, we admit, also based on previous experience and feedback gained from our other, related products) we reached the conclusion that 2D virtualization is a must: only the items that are both in the visible row range and in the visible time range of the chart should actually be loaded at presentation time. And therefore, the component should only request those items from the client code.
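To make that contract concrete, here is a minimal sketch of such a 2D-virtualized data source in Swift. The type and member names are made up for illustration (this is not the actual Ganttis API), but they show the key idea: the component asks the client code only for items intersecting both visible ranges.

```swift
import Foundation

// Hypothetical item and data source types, for illustration only
// (not the actual Ganttis API).
struct ChartItem {
    let row: Int
    let start: Date
    let finish: Date
    let label: String
}

protocol TwoDimensionalVirtualizedDataSource {
    // Total extents, so the chart can size its scroll area without
    // loading a single item.
    var totalRowCount: Int { get }
    var totalTimeRange: Range<Date> { get }

    // Called whenever the viewport changes: only items that fall in the
    // visible row range AND overlap the visible time range are requested,
    // created, and presented.
    func items(forRowsIn rowRange: Range<Int>,
               overlapping timeRange: Range<Date>) -> [ChartItem]
}
```

With a contract like this, loading two million items costs nothing up front; the client only ever materializes the handful that fit in the viewport.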

Of course, many customers won't need this extreme virtualization support. Most don't even need vertical virtualization — for small data sets the feature would look like "over-engineering", unless it's hidden from the client code.

To address these simpler situations, we have therefore developed a specialization of the core engine described above, allowing the client code to pass all items at once (as it would to a non-virtualized component). And we were done.
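As a rough illustration (reusing the hypothetical ChartItem and TwoDimensionalVirtualizedDataSource types from the sketch above, not the actual Ganttis types), such a specialization can be as simple as an adapter that answers the virtualized requests by filtering the full array in memory:

```swift
// Hypothetical adapter: the client hands over all items at once, as it
// would to a non-virtualized component, and this wrapper serves the
// engine's 2D-virtualized requests by filtering in memory.
struct AllItemsDataSource: TwoDimensionalVirtualizedDataSource {
    let allItems: [ChartItem]
    let totalTimeRange: Range<Date>

    var totalRowCount: Int {
        (allItems.map(\.row).max() ?? -1) + 1
    }

    func items(forRowsIn rowRange: Range<Int>,
               overlapping timeRange: Range<Date>) -> [ChartItem] {
        return allItems.filter { item in
            rowRange.contains(item.row) &&
            item.start < timeRange.upperBound &&
            item.finish > timeRange.lowerBound
        }
    }
}
```

This way the simple scenarios keep the familiar "give me everything" shape, while the extreme ones implement the virtualized protocol directly; the engine underneath stays the same.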