Enterprise Software Architectures & New Programming Paradigms

Bogdan Burnea
METRO SYSTEMS Romania
11 min read · Feb 6, 2018

In this fast-evolving software engineering world, I think it is valuable to look around from time to time at how new software is built and, at the very least, stay aware of new technology trends.

Of course, it is not possible to have in-depth knowledge of everything new; nevertheless, talking to IT colleagues, following conferences and other sources always helps to get a sense of the industry's trends and directions.

The new buzzword in software is Cloud. In this post I present one perspective on the history of enterprise software architectures and development methodologies that finally got us to this point: Cloud, serverless architectures and the Agile development methodology. My aim is to highlight the evolution rather than every particular detail.

Client-Server, 2-Tier Architecture

In the old, classic client-server architecture we have, within the same LAN or WAN, a central database server and many clients that work with the data through an application GUI.

This 2-tier design can of course include other components too, such as security layers or firewalls, but these are not essential to the architectural evolution I am trying to highlight.

With this approach, the business layer is not clearly delimited: business logic can live both in the application and in the database.
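As a minimal sketch of that blur, assuming a Python desktop client and the standard library's sqlite3 module standing in for the central database (the tables and the discount rule are made up for illustration), a business rule can end up hard-coded in the GUI client itself:

```python
import sqlite3

def apply_loyalty_discount(conn: sqlite3.Connection, customer_id: int) -> None:
    """Business rule embedded in the fat client: 10% off after a large order."""
    row = conn.execute(
        "SELECT total FROM orders WHERE customer_id = ? ORDER BY created_at DESC LIMIT 1",
        (customer_id,),
    ).fetchone()
    if row and row[0] > 1000:  # the rule lives in the client, not in a shared service
        conn.execute("UPDATE customers SET discount = 0.10 WHERE id = ?", (customer_id,))
        conn.commit()
```

The same rule could just as well sit in a stored procedure in the database, which is exactly why the business layer is hard to pin down in a 2-tier design.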

Another point worth mentioning is the relatively high development and deployment cost, as the client software is built in a "waterfall" fashion and has to be maintained on each individual client workstation.

Back in those days the development methodology was waterfall.

With this approach it took a long time to "waterfall" the system, that is, to analyze, design, develop, deploy and test it. By the time the system was ready to be used, new functional specifications had already appeared. So we found that the waterfall methodology is not very useful for large enterprise systems: it is not effective when requirements are unclear from the beginning (often the case), it is difficult to move back to a previous phase, and it is high-risk and inflexible.

3-Tier Architecture

When the internet became widely used, a natural next step was to build enterprise applications in a 3-tier architecture, still using waterfall as the development methodology. The clients that consume the data can be spread worldwide, so a web application server was added between the database server and the clients. One particular advantage was that clients could consume data via browsers, with no dedicated client software to maintain.

Another major advantage was faster deployment: only the application server had to be updated with the new software version, and then all clients worked with the same updated version.
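A minimal sketch of the middle tier, assuming Python with Flask and a hypothetical orders table: the browser only speaks HTTP, and the data access and business rules now live in one deployable place.

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/customers/<int:customer_id>/orders")
def list_orders(customer_id: int):
    # The middle tier owns database access and business rules;
    # browsers simply render what this endpoint returns.
    conn = sqlite3.connect("enterprise.db")
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ?", (customer_id,)
    ).fetchall()
    conn.close()
    return jsonify(orders=[{"id": r[0], "total": r[1]} for r in rows])

if __name__ == "__main__":
    app.run(port=8080)
```

Updating this one service updates the behavior for every browser in the world at once.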

However, as businesses grow, more and more such independent enterprise systems accumulate to sustain day-to-day activity. One company could have a financial system, a CRM, and an HR/payroll system, to name just the most common ones. These systems were built on different technologies, and there was no standard communication protocol between them.

Service Oriented Architecture — SOA

Service Oriented Architecture was a natural step forward. Why not divide our application into smaller, self-contained, "black-box" business units, install all of them on the same application server, and let them communicate via web services and XML? Web services are exposed and consumed by the application GUI and also by other web services. The database layer is isolated in this way, so business logic can also be implemented in the web services. We can use the same database server for all our business modules, or give each business module its own database.

There was one problem with the above approach: availability. With only one application server, all applications stopped during maintenance downtime, and connection problems appeared when many users connected at once.

One can see that in this architecture the modules start to become decoupled: they are "black boxes" that communicate only via web services, using an agreed data format.

Thus XML was introduced to provide a communication standard between modules. In such an architecture, with a clear communication protocol between modules and a standard API being exposed, one could easily remove a business module and replace it with another, as long as the API and the XML data format are respected.
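As a hedged illustration, assuming a made-up order message and Python's standard xml.etree.ElementTree module: the producing and consuming modules only need to agree on the XML shape, not on each other's internals.

```python
import xml.etree.ElementTree as ET

# The order module builds the agreed message...
order = ET.Element("order", id="1042")
ET.SubElement(order, "customerId").text = "77"
ET.SubElement(order, "total").text = "249.90"
payload = ET.tostring(order, encoding="unicode")

# ...and any other module can consume it, knowing only the agreed format.
parsed = ET.fromstring(payload)
print(parsed.get("id"), parsed.findtext("customerId"), parsed.findtext("total"))
```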

We also gain availability: only one module in the application server layer is deployed at a time, while the other business modules remain available.

Containers and Orchestration

So why not go further? Why not virtualize and decouple things even more? If we already have smaller and smaller web services, why not run each one on its own machine? In such a container-orchestrated world we have many lightweight, virtual-machine-like units called containers, grouped into pods, with an orchestrator above them that takes care of restarting containers if problems occur. Each pod has a unique IP address, and the containers in a pod share its resources (RAM, CPU). Containers are created by container management software, depending of course on the physical server resources.

Now that we have containers, why not "split" the database too? Database data can be replicated across more and more such pods, gaining availability. If our microservice is deployed on, say, more than three pods, a hardware problem or crash in one container or pod is no longer an issue: the service stays available. Likewise, if the database is replicated across many pods, we gain database availability as well.

What else? Scalability, of course. We monitor the RAM, CPU and disk load of our microservice and, if needed, deploy it on more and more pods. Hardware keeps getting more affordable, so adding capacity should, in theory, not be a problem.
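A hedged sketch of that idea, assuming a Kubernetes cluster, the official kubernetes Python client, and a hypothetical get_average_cpu() metric source (not part of the client library):

```python
from kubernetes import client, config

def scale_if_overloaded(deployment: str, namespace: str, max_replicas: int = 10) -> None:
    """Naive scaling step: add one replica whenever average CPU load is too high."""
    config.load_kube_config()              # or load_incluster_config() when run inside a pod
    apps = client.AppsV1Api()

    current = apps.read_namespaced_deployment(deployment, namespace).spec.replicas
    if get_average_cpu(deployment) > 0.8 and current < max_replicas:  # hypothetical metric
        apps.patch_namespaced_deployment_scale(
            deployment, namespace, body={"spec": {"replicas": current + 1}}
        )
```

In practice Kubernetes offers the Horizontal Pod Autoscaler to do this declaratively; the sketch only makes the mechanism explicit.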

This is basically our cloud. If someone had such a container-orchestrated architecture in their own backyard, with enough hardware, they would effectively have their own cloud.

Cloud and Serverless Functions

Furthermore, to save the time spent setting up an internal hardware environment, why not rent such an architecture? If containers and orchestration are provided as a service by a cloud provider, why not use them? In these "fast and furious" times, why not write an application fast, deploy it fast in the cloud, and validate our idea fast? The cloud lets us write smaller and smaller stateless functions (AWS Lambda or Google Cloud Functions), decoupled as much as possible, deployed quickly and paid for only per usage. Databases are also available in the cloud. So: save time and save money.
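A minimal sketch of such a function, assuming Python on AWS Lambda behind an API Gateway proxy integration; the greeting use case is just an illustration:

```python
import json

def handler(event, context):
    """AWS Lambda entry point: no server to manage, billed per invocation."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```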

Again, this fast-moving technology world got us here. After waiting, say, 2–3 months to set up the hardware and then a year to waterfall-build and deploy the software, one may find that the same idea has already reached the market faster elsewhere; a year is simply too long. Using the cloud, the infrastructure can be rented and the idea can be validated by the market much faster. Backing out is also easy: just release the rented cloud resources. Not to mention that it saves our company the hardware infrastructure, the data center, the admin team and the software licenses.

Of course, cloud services come in different flavors, which I think are worth mentioning here:

  • SaaS — Software as a Service — the cloud provider offers both the hardware and ready-to-use software to its clients.
  • PaaS — Platform as a Service — the cloud provider offers application platforms and databases as a service to its customers.
  • IaaS — Infrastructure as a Service — the cloud provider offers the hardware infrastructure, and the client builds and deploys their own software: database, middle tier, front end.
  • DaaS — Development as a Service — the cloud provider offers web-based, community-shared development tools.

The idea behind these serverless functions is the same "divide et impera" that got us here in the first place: split our applications into the smallest possible units, functions that do only one thing. If we deploy those functions to the cloud, and the cloud automatically scales them across as many containers as necessary, those functions must not hold state; they must be "stateless".
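A hedged sketch of what "stateless" means in practice, assuming Python with the redis package and an external Redis instance as the shared store; the counter key and connection details are made up for illustration:

```python
import os
import redis

# Any state lives outside the function, so every scaled-out copy sees the same value.
store = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

def count_visit(event, context):
    """Stateless handler: no module-level counters, no local files, no sessions."""
    visits = store.incr("page:visits")   # atomic increment in the shared store
    return {"statusCode": 200, "body": f"visit number {visits}"}
```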

Event Driven Architecture

What else? In our models so far, at design time we think about the data model: starting from our client, product, employee or supplier object model, we decide what data should be maintained about a particular client, supplier, product, order and so on. Data is stored in the database according to that model, we send queries to the database, and the database returns the result. What if we think about events first? What if we change the paradigm? What if we store the query in the "database" and send the data to the query? We can imagine a microservice responsible for a standard relational WHERE clause: we send all the data to the microservice as input, and we get back only the data that satisfies the WHERE conditions.
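A hedged sketch of this inversion in Python (the field names and the threshold are invented): the "query" is registered once, and records flow through it as they arrive.

```python
def standing_query(predicate, stream):
    """The WHERE clause lives in the service; the data is what gets sent to it."""
    for record in stream:
        if predicate(record):
            yield record

# Usage: register "total > 100" once, then push order records through it.
orders = [{"id": 1, "total": 250}, {"id": 2, "total": 40}, {"id": 3, "total": 120}]
for big_order in standing_query(lambda r: r["total"] > 100, orders):
    print(big_order)
```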

For example, take a bank account. Instead of first defining the columns of the bank account object, we define the operations. What if we store only the operations on that bank account and not its state? If we record the operations in an "immutable" way, we can always determine the account's current state by re-applying all operations from the beginning.
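A minimal event-sourcing sketch in Python (the event names and amounts are illustrative): the balance is never stored, only derived by replaying the immutable log.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)               # frozen = the event is immutable once recorded
class AccountEvent:
    kind: str                         # "deposit" or "withdrawal"
    amount: float

def current_balance(events: List[AccountEvent]) -> float:
    """Rebuild the account state by re-applying the full history of operations."""
    balance = 0.0
    for e in events:
        balance += e.amount if e.kind == "deposit" else -e.amount
    return balance

log = [AccountEvent("deposit", 500.0), AccountEvent("withdrawal", 120.0), AccountEvent("deposit", 80.0)]
print(current_balance(log))           # 460.0, derived from the event log alone
```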

This is where "key-value" message queue systems come into the picture. We store our events in such a "massively scalable" message queue, and all interested systems, services and microservices can consume these "key-value" messages in a consistent manner.
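A hedged sketch of this flow, assuming Apache Kafka and the kafka-python package; the topic name, key and broker address are placeholders:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# One module publishes an event keyed by account id...
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("account-events", key=b"account-42", value={"kind": "deposit", "amount": 500.0})
producer.flush()

# ...and any interested module consumes it independently, at its own pace.
consumer = KafkaConsumer(
    "account-events",
    bootstrap_servers="localhost:9092",
    group_id="reporting-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.key, message.value)
```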

Let us consider a common example from today's world: an online shop. We can divide the shop's business into smaller parts: products, clients, orders, payments. As long as they communicate in a standard way, each of these units can:

  • Evolve independently — more and more "business" can be added to each unit independently, as the real business demands.
  • Be developed by independent teams — we can have teams around the globe, each developing its own business unit.
  • Be developed in different technologies — different technologies can be chosen depending on their popularity in different regions, or on cost considerations.

We can also distinguish two ways for these business modules to communicate:

  • If the entire business is small, each module can simply call the others directly: the order module can call the client microservice for client data and the product microservice for product data, for example.
  • Each module can place its events on a "key-value" message queue, so that any interested module consumes the data when it becomes available. This builds a shared pipe that all our modules use both to signal "key-value" changes and to listen for those changes and react locally. A sketch of both styles follows below.
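As a hedged sketch of the two styles, assuming Python with the requests library for the synchronous call and kafka-python for the queue; the service URL, topic name and keys are placeholders:

```python
import json
import requests
from kafka import KafkaProducer

# Style 1: synchronous call, the order module blocks until the product module answers.
product = requests.get("http://product-service/products/42", timeout=2).json()

# Style 2: fire-and-forget event, any interested module picks it up later from the queue.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-events", key=b"order-1042", value={"status": "created", "product_id": 42})
producer.flush()
```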

Another aspect of the new approach is how to solve the batch processing problem. In a "monolith" application, where all the needed data is available in a single database, batch processing was much easier. In our new microservices world we have to scale the processing across as many containers as needed and also take care, programmatically, that our stateless services do not process the same data twice.

There are also downsides; in gaining availability and scalability we must trade off:

  • Consistency — with modules built this way, at any given moment not all data is consistent. In our example, the order module can hold stale product or client data while the new data is still somewhere on the message queue. We only know that at some point the data will become consistent. Whether this is acceptable depends on business needs; clearly, workarounds are required where consistency is business-critical.
  • If we use real-time synchronous service calls (that is, if we re-couple our microservices), our order module can block and stop responding while waiting for product or client data.
  • In some cases data replication must be used: for example, the order module must replicate only the part of the product and client data actually used in orders (not the entire product or client data).
  • Architects and programmers must also think about parallelism, making sure the same data is not processed more than once (a minimal sketch follows below).
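As a hedged illustration of that last point, assuming Python with the redis package as a shared store of processed event ids; the key format, expiry and the handle() business function are hypothetical:

```python
import redis

store = redis.Redis(host="localhost", port=6379)

def process_once(event_id: str, payload: dict) -> bool:
    """Idempotent consumer: scaled-out stateless workers skip events already handled."""
    # SET with nx=True succeeds only for the first worker that claims this event id.
    claimed = store.set(f"processed:{event_id}", 1, nx=True, ex=24 * 3600)
    if not claimed:
        return False                  # another replica already handled this event
    handle(payload)                   # hypothetical business logic
    return True
```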

Thus, we gain availability and scalability but trade off some consistency.

In fact, this brings us to the CAP theorem (or Brewer's theorem) from theoretical computer science: "It is impossible for a distributed data store to simultaneously provide more than two out of the following three guarantees: consistency, availability and partition tolerance (the system continues to operate despite an arbitrary number of messages being dropped or delayed by the network between nodes)."

Waterfalling to Agile

What we have seen so far is "divide et impera": split the "monolith" into smaller and smaller pieces, developed and deployed independently, each solving its own dedicated problem. It seems the old waterfall way has to change too. Instead of waiting for the entire development waterfall to run its course, we start fast, build small things, and validate and adjust along the way. Thus, we can validate our software idea quickly, without sitting through the long waterfall development and deployment cycle.

Agile is the most widespread software development methodology for this fast-changing world. It advocates adaptive planning, early delivery and continuous improvement, and encourages rapid, flexible response to change.

It also goes hand in hand with continuous delivery, which states that new developments should go to production as soon as possible. Once a new build passes the automated tests, it can be deployed automatically in a "green-blue" manner: green is the production environment, blue is the test environment that receives the new deployment. After validation, blue becomes the new production environment and green becomes the new test environment.

Conclusion

Finally, competition got us here: to the Cloud, event-driven architectures, the Agile methodology and microservices. Today slowness is not an option, and speed is the "killer" feature. Fast-growing companies are those able to launch an idea quickly and sustain a cutting-edge organizational speed.

As a final word, consider this story an invitation to debate: how do you see this evolution continuing?

Thanks a lot for reading; give it a clap or two if you enjoyed this article and found it relevant.
