The Ver-Val disclosure

Ivan Lesar
Published in Distant Horizons
4 min read · Mar 5, 2019

Let me put you in a real-life situation: the development and testing of an interactive application at Bornfight.

The Lifecycle

A project has been initiated…

Kick-off meeting.
Research.
Plan.

Sprint.

Iteration…
Iteration…
Iteration…

Build.
Milestone.
Product.

A happy client.
A happy team.

An unpredictable case!

Bug detection.

Iteration…
A fix…
A fix for the fix…
A fix that fixes all bugs the previous fix caused…

A nerve cell lost.
A new sleeping habit.
A bucket of coffee.

A new build, with a new concern that something you didn’t predict could suddenly tear your project in half.

A worried client.

But, why?

We analyzed the client’s specification, defined the project domain, wrote the architecture documentation and acceptance criteria, wrote tests and populated our codebase with event loops, services, factories, observers and other fancy structures.

Overall, we did a hell of a job validating that our output was an exact match for the specification we were presented with.

What was going on?

The Revelation

The major issue was the incorrect use of the term validation.

Validation is intended to ensure that a product, service or system meets the operational needs of the user.

Verification is intended to check that a product, service or system meets a set of design specifications.

This was just the thing we needed. It restored our confidence in the product. We were 100% sure we had implemented our part of the business logic correctly, since it was thoroughly covered with tests.

We were aware of the current project state — the product was only verified.

The Reality

I’ll be frank with you… none of this actually happened. It went in a completely different direction.

We predicted it during the planning and research phases of the project, and addressed it immediately.

Let me introduce you to the scope of the application at hand. I’ll leave out the details of the verification part since it would lengthen the post quite a bit. Maybe I’ll write it up in a future post.

The Application

The product we needed to deliver was an interactive application which had several reactive parts that had to be handled on the screen in real-time.

The application behaviour depended on the time of the year, time of the day, random event occurrences and the interaction mode (easy, medium, hard). The parameters for calculating each of the states were delivered by the client.

These parameters kept the product stable in most cases, but during the first few interactions we could easily see that it was not delivering 100% stability.

Since the app was time-dependent and had a random factor, the number of state cases multiplied to the point where testing them manually would have been a waste of time.

This is where the real meaning of validation was put to good use, since we were dealing with around 1,000 state cases received from the back-end.
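To give a feel for that combinatorial growth, here is a minimal sketch of how such a state space could be enumerated and checked in TypeScript. All of the names and the specific dimensions (seasons, hours, a single random-event flag, three difficulty modes) are illustrative assumptions, not the project's actual model.

```typescript
type Season = "spring" | "summer" | "autumn" | "winter";
type Difficulty = "easy" | "medium" | "hard";

interface StateCase {
  season: Season;
  hourOfDay: number;    // 0-23
  randomEvent: boolean; // whether a random event occurred
  difficulty: Difficulty;
}

interface ValidationResult {
  stateCase: StateCase;
  stable: boolean;
}

// Build every combination of the inputs the app reacts to.
function enumerateStateCases(): StateCase[] {
  const seasons: Season[] = ["spring", "summer", "autumn", "winter"];
  const difficulties: Difficulty[] = ["easy", "medium", "hard"];
  const cases: StateCase[] = [];

  for (const season of seasons) {
    for (let hourOfDay = 0; hourOfDay < 24; hourOfDay++) {
      for (const randomEvent of [false, true]) {
        for (const difficulty of difficulties) {
          cases.push({ season, hourOfDay, randomEvent, difficulty });
        }
      }
    }
  }
  return cases; // 4 x 24 x 2 x 3 = 576 combinations, far too many to check by hand
}

// Run a stability check (driven by the client-provided parameters) over every case.
function validateAll(isStable: (state: StateCase) => boolean): ValidationResult[] {
  return enumerateStateCases().map((stateCase) => ({
    stateCase,
    stable: isStable(stateCase),
  }));
}
```

Even this simplified model yields 576 combinations; finer-grained time or more event types quickly push the count into the same order of magnitude as the ~1,000 cases we actually dealt with.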

A bullet-proof validation

We decided to make a validation tool for the application. It was a graphical user interface which would test all of the possible state cases.

The interface included a panel that allowed us to reduce the testing set and analyze an exact subset of the states. You could interpret it as an interactive “debugging” tool.

It had interactive plots which were intended to show us how different parameters affected the interaction plausibility. The x-axis represented the game parameters, and the y-axis always showed the number of failed cases.

Screenshot of the validation tool
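To make the idea concrete, here is a rough sketch of the data behind one such plot, reusing the hypothetical types from the previous snippet: for each value of a chosen parameter we re-run the validation and count the failed cases, and the "debugging" panel boils down to a filter over the result set. Again, the function names are assumptions for illustration, not the tool's real API.

```typescript
// Data behind one plot: sweep a single parameter value along the x-axis and
// count the failed state cases at each value for the y-axis.
interface PlotPoint {
  parameterValue: number;
  failedCases: number;
}

function failuresByParameter(
  parameterValues: number[],
  // Re-runs the full validation with the given parameter value plugged in.
  runValidation: (value: number) => ValidationResult[]
): PlotPoint[] {
  return parameterValues.map((parameterValue) => ({
    parameterValue,
    failedCases: runValidation(parameterValue).filter((result) => !result.stable).length,
  }));
}

// The "debugging" panel is essentially a filter that narrows the analyzed subset,
// e.g. only hard mode during winter.
function subset(
  results: ValidationResult[],
  predicate: (state: StateCase) => boolean
): ValidationResult[] {
  return results.filter((result) => predicate(result.stateCase));
}
```

Keeping the sweep and the filter as plain functions over the validation results is what makes it cheap to construct many such graphs side by side.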

By constructing multiple graphs, we could easily deduce what had to be calibrated to ensure the product's stability, all without drastically changing the parameters.

Without this we could only pick out the largest failing cases and try to fix them by changing the parameters. But then again, that would cause new, minor failures.

We would practically be in an infinite loop of iterations, slowly converging towards our goal. Actually, when I think about it, it would be kind of like Zeno's Achilles and the tortoise paradox, which would result in a never-ending convergence.

I’m really glad that was not the case, and that we were heavily prepared for this.

The future

The interactive proofing tool we developed served as a great example for expanding and upgrading the quality assurance process of our interactive applications that depend on client-provided parameters.

Of course, in the end the client accepted our proposal for the tweaked parameters because they diverged only minimally from the initial ones. We also skipped the whole unpredictable case stage from the introduction, and the product was deployed to the production environment with ease and confidence.

__________
We’re available for partnerships and open for new projects. If you have an idea you’d like to discuss, share it with our team!
