As-a-service offerings are changing what a developer is

Why I believe this path leads to not writing code anymore

Dario De Agostini
THRON tech blog
5 min read · Oct 22, 2019


I clearly remember the day we migrated our solution from a set of geographically distributed data centers to “the cloud”. It was the 15th of August, an Italian national holiday: we were supposed to be relaxing, and instead we had one of our hardest days. We had been preparing for months and had planned to move to “the Cloud” by the end of that year but, on that day, a fault in one of the datacenters made us take the red pill. That incident was one too many: over the years we had experienced several different faults related to “systems”, while our software had been as reliable as ever.

AWS at the time was much simpler than it is today, but its “promise” of letting us leverage a distributed architecture without having to deal with the “colocation things” was still very appealing: it meant we could focus on creating culture and growing our experience in software while totally ignoring hardware and most of the systems. We could kiss goodbye to managing switches, routers, blade servers, redundant optical-fiber paths and so on.

I felt a relief similar to the one I felt, a long time ago, when I started developing in C instead of x86 assembly: I no longer needed to focus on the hardware implementation of the system, I could just focus on the algorithm.

I believe the defining milestone was the release of AWS S3. Yes, it was highly available; yes, it was infinitely scalable; yes, it had a high-level interface (over HTTP); but the most relevant change to me was that it was not a storage system in the traditional sense.

The cloud block storage magic: extract what you need without worrying about how it’s done.

It was a paradigm shift: we were moving from the domain of systems to the domain of applications. We started ignoring how systems were architected or how they worked; we no longer needed to manage reliability or optimizations, we only had to rely on stable interfaces and SLAs. It required a leap of faith (how does it behave under stress? Does it scale with no issues? How can we lower latency?), but as soon as we experienced that the result was “good enough”, the benefits were immense. History shows we were not the only ones.
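To make the “stable interfaces and SLAs only” point concrete, here is a minimal sketch using boto3, the AWS SDK for Python (the bucket name, key and payload are hypothetical): storing and reading an object is just two calls, with no volumes, filesystems or replication strategy anywhere in sight.

```python
import boto3

# A plain S3 client: no disks, RAID or replication to manage,
# just a stable HTTP-backed interface with an SLA behind it.
s3 = boto3.client("s3")

# Hypothetical bucket and key, for illustration only.
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/2019/summary.json",
    Body=b'{"status": "ok"}',
)

# Reading it back is equally oblivious to how the bytes are stored.
response = s3.get_object(Bucket="my-example-bucket", Key="reports/2019/summary.json")
print(response["Body"].read())
```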

Over the years the offering evolved and the cloud providers moved beyond basic computing, storage and database services. AWS, as an example, added SQS (a scalable queue) and Data Pipeline (a visually manageable flow of communication between components), and started adding services that addressed the common needs of developers instead of “virtualizing” hardware or software components: they became more customer-centric than product-centric. AWS Lambda (serverless computing) forces you to forget every detail about the systems, even virtual ones such as containers. AWS Athena (we are grateful to Facebook for creating Presto, its original implementation) lets you almost forget about scale and data schemas when querying.
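As an illustration of how little “system” is left to think about, here is a minimal sketch of an AWS Lambda handler triggered by SQS messages (the message handling is hypothetical and unrelated to our actual code): you write only the function body, while provisioning, scaling and the runtime are the provider’s problem.

```python
# A minimal AWS Lambda handler for SQS events: no servers, containers
# or queue consumers to operate; the platform invokes this function
# with a batch of messages and scales it as needed.
def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        # Each record carries one queue message; here we just echo it.
        print("received message:", record["body"])
    return {"processed": len(records)}
```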

We have already told how we improved our productivity by embracing this change. The price to pay was less control over the “internals”: we had to learn how to achieve the desired performance and reliability metrics without being able to leverage the full potential of the underlying hardware and software stacks, and we were limited to working just on the architecture, since all the components were offered in a fully managed way. We traded the capability to perform low-level optimizations for a reduced cost of management: it was a very good decision for our type of company and product.

I recently had the chance to describe one of our architecture choices in a “This is My Architecture” episode dedicated to the Italian market (it has English captions too), where I talked about how we collect and analyze events in real time to feed several data-intensive systems such as recommendations, analytics and other parts of our Intelligent Digital Asset Management. The architecture is quite complex because it needs to manage both data and model changes (new events as well as changes to the events taxonomy), it must provide continuous availability of service (we must serve recommendations even while we update data and models) and it works on large amounts of data.

For the tech people out there: we implemented a Lambda architecture with a blue-green deployment of Elasticsearch clusters.
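To give a rough idea of the blue-green part, here is a simplified sketch, not our production code: it swaps an alias between two indices on a single cluster, whereas the real deployment swaps whole Elasticsearch clusters, and the endpoint and index names are placeholders. The promotion step boils down to an atomic alias switch.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

def promote(alias: str, blue_index: str, green_index: str) -> None:
    """Atomically repoint `alias` from the blue index to the freshly
    rebuilt green index, so readers never see a half-updated dataset."""
    es.indices.update_aliases(body={
        "actions": [
            {"remove": {"index": blue_index, "alias": alias}},
            {"add": {"index": green_index, "alias": alias}},
        ]
    })

# Hypothetical names: queries always hit "events", while "events_green"
# is rebuilt in the background and promoted once it is ready.
promote("events", "events_blue", "events_green")
```

Because the switch is atomic, queries either see the old dataset or the new one, never a mixture, which is what lets us update data and models without interrupting the service.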

This kind of design choice would have been a nightmare to implement with traditional hardware or software stacks, but because we could leverage high-level components to manage the flow of events, we were able to build the first complete, production-ready, working architecture in just 7 man-days.

Our engineers are among the best in the world (no joke), but this result would not have been possible at the same cost and time-to-market had we not been able to rely on building blocks like Lambda, Data Pipeline and EMR.

In my opinion, the recent hype around AI has common roots with this shift: I think the hype around Neural Networks (and any variant, such as Capsule Networks) stems from the fact that they are another example of the transition from “having to describe how to solve problems” to “having to describe which result we want”. When you work with NNs you don’t tell the machine what to do; you provide it with what you expect as a result for a given input, and you do this by providing lots of examples. AutoML systems are a very fitting, yet rudimentary (as of today), example: you don’t even know which kind of NN architecture will be used, you just throw data at the system and describe the result you expect; everything else is (should be, will be) “magic”.
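Here is a deliberately tiny illustration of the same idea, using a decision tree from scikit-learn instead of a neural network (the data is a toy XOR example, unrelated to any real system): we only state which output we expect for which input, and never write the rule itself.

```python
from sklearn.tree import DecisionTreeClassifier

# We never describe *how* to separate the two classes; we only show
# the machine examples of inputs and the outputs we expect.
examples = [[0, 0], [0, 1], [1, 0], [1, 1]]
expected = [0, 1, 1, 0]  # the XOR relationship, stated purely by example

model = DecisionTreeClassifier().fit(examples, expected)
print(model.predict([[1, 0]]))  # -> [1], a rule we never wrote down
```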

Neural Networks are not interesting because of a hypothetical superintelligence; they are interesting because they are an affordable way to make a machine compute a result without telling it how to do so in detail.

The NN approach became viable in recent years because infrastructure power and scale became good enough, and I believe we are now ready for a 4th- or 5th-generation programming language applied to the cloud: one that will allow us to describe the results we want to achieve, not how to achieve them.

Neural networks as a new kind of language: a way to describe what you need instead of how to achieve it

Think about it: we have been using 4th-generation languages for decades; we now have tools like GraphQL that generate APIs starting from a design of data relationships, and Step Functions to orchestrate serverless code execution… the toolset is good enough right now.
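As a taste of how declarative this toolset already is, here is a hypothetical Step Functions state machine, written in the Amazon States Language and expressed as a plain Python dict (the function ARNs are placeholders): it states which steps run in which order, and leaves the running, retrying and scaling to the provider.

```python
import json

# A hypothetical state machine: it declares *what* happens and in
# *what* order, not how the work is scheduled or scaled.
state_machine = {
    "Comment": "Ingest events, then index them, declaratively.",
    "StartAt": "IngestEvents",
    "States": {
        "IngestEvents": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:ingest",  # placeholder ARN
            "Next": "IndexEvents",
        },
        "IndexEvents": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:index",  # placeholder ARN
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```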

We are just waiting for the arrival of a 4th-generation programming language (ok, maybe a 5th-generation one) for the cloud: you describe the result you want to achieve and the constraints, and both the infrastructure and the code will be generated by the “compiler”, which will also decide the most appropriate architecture for the scale at any given time, and change it dynamically over time, as needed.

I believe coding will be very different in 10 years: it will be about identifying objectives, requirements and constraints, not about instructing the machine how to meet them. I expect some big cloud provider to release something along these lines in the next couple of years.

I’m excited, I can’t wait for it, can you?
