What is Dynamic Edge Application Architecture

Aaron Allsbrook · 7 min read · Jul 28, 2020

Apps and how we build them have come a long way. Looking back at the giant shoulders we all stand on today, we can see four movements of application development over the years. While none of these application patterns has ever fully gone away, in each era we see developers reacting to, and taking advantage of, the changing underlying hardware landscape.

1. Monolithic 1980s — Great for big mainframes doing batch processing or dumb-terminal green-screen applications

2. Distributed 1990s — Leverage the power of the PC on the desktop. Remove the dependency of having to request compute resources for every little task. Create graphically appealing user interfaces

3. Service Oriented 2000s — Leverage the connectivity of the internet by creating self-describing communication. Remove the fragility of the UI-to-backend lock-in that distributed systems created.

4. Microservices 2010s — Leverage horizontal scaling on demand, thanks to virtualization and the cloud. Remove the complexity created by overly verbose SOAs.

As we approach the next decade it’s obvious that trends in computing are beginning to shift again. These new trends will not mean that microservices die or go away; instead, they simply become the next set of legacy architectures we have to maintain and support.

Major Technology Trends

As we near the end of the microservice era, it’s time to look at the early trends that application developers will be reacting to.

Trend 1 — Sensor proliferation — Monitoring a factory, building, or city used to take millions of dollars in specialized, expensive sensors. In the last few years we have seen the prices of those sensors drop dramatically. While these new sensors have yet to standardize on a single communication protocol or a standard data structure, they are definitely cost effective for the ROI they can provide. These new sensors pose the following architecture challenges (see the sketch after this list):

1. They need to be physically near, via a wired or wireless connection, a computer that ingests their data

2. They need security wrapped around them to prevent hacking and tampering

3. They speak many different protocols, which must be processed to reach a common understanding of what each sensor is communicating
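The normalization challenge in item 3 can be illustrated with a small sketch. The two payload formats and the field names below are assumptions made only for illustration; real sensors vary far more widely.

```python
# A minimal sketch of normalizing sensor payloads that arrive in different
# formats into one common reading structure. The two example formats and the
# field names are hypothetical.
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    sensor_id: str
    quantity: str      # e.g. "temperature"
    value: float
    unit: str
    observed_at: datetime

def normalize(raw: bytes, protocol: str) -> Reading:
    """Map a protocol-specific payload onto the common Reading structure."""
    if protocol == "json-v1":                     # hypothetical JSON sensor
        doc = json.loads(raw)
        return Reading(doc["id"], doc["type"], float(doc["val"]), doc["unit"],
                       datetime.now(timezone.utc))
    if protocol == "csv-line":                    # hypothetical CSV sensor
        sensor_id, quantity, value, unit = raw.decode().strip().split(",")
        return Reading(sensor_id, quantity, float(value), unit,
                       datetime.now(timezone.utc))
    raise ValueError(f"unknown protocol: {protocol}")

print(normalize(b'{"id": "t-17", "type": "temperature", "val": "21.4", "unit": "C"}', "json-v1"))
print(normalize(b"t-18,temperature,22.0,C", "csv-line"))
```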

Trend 2 — Low-cost industrial gateways — Alongside expensive sensors, industrial PCs have been the traditional computing machine in the field, in the factory, and on the vehicle. These PCs were expensive, clunky, and often built to perform a single function. In the last few years we have seen small, powerful devices that are able to execute a wide variety of tasks, including normalizing data, ingesting many protocols, executing AI algorithms, and hosting web pages. This capability is packaged into many form factors capable of surviving the extreme environments that so many new use cases require. These new gateways, sometimes called IoT gateways or the intelligent edge, pose the following architectural challenges with their broad capabilities:

1. How best to manage the OS the devices are running

2. How to segment off a portion of application logic or work to run on the gateway device and leverage its compute capability (see the sketch after this list)

3. How to rapidly communicate with the protocols the gateway is connected to
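Here is a minimal sketch of the segmentation idea in item 2, assuming a simple deployment setting decides whether a function runs on the gateway or stays in the cloud. The DEPLOY_TARGET variable and the threshold filter are illustrative assumptions, not any vendor's mechanism.

```python
# A sketch of segmenting application logic so part of it can run either on a
# gateway or in the cloud. The deploy target flag and the filtering rule are
# assumptions for illustration.
import os

def filter_outliers(readings, low=-40.0, high=125.0):
    """Drop obviously bad sensor values before they travel upstream (cheap work)."""
    return [r for r in readings if low <= r <= high]

def aggregate(readings):
    """Heavier summarization that may stay in the cloud."""
    return {"count": len(readings), "mean": sum(readings) / len(readings)}

DEPLOY_TARGET = os.environ.get("DEPLOY_TARGET", "cloud")  # "gateway" or "cloud"

def handle_batch(readings):
    readings = filter_outliers(readings)      # run wherever this segment is deployed
    if DEPLOY_TARGET == "gateway":
        return readings                        # forward cleaned data upstream
    return aggregate(readings)                 # full processing in the cloud

print(handle_batch([21.4, 22.0, 999.0]))
```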

Trend 3 — DevOps — The cloud, with its huge set of available VMs, has been the major driver of DevOps within the enterprise, and today it heavily leverages containerization technology. These technologies have been fundamental to adoption of the infrastructure hosted at Amazon, Google, and Microsoft. Containerization allows a process to run rapidly and securely on any number of operating systems or CPU architectures. DevOps brought many of the day-to-day IT operations back to developers as controlled, managed assets. While DevOps has made it much faster, safer, and more reliable to deploy applications, it has hit hard limits on what it can do in the changing ecosystem.

1. Containers used by DevOps don’t provide any inherent functionality, only a host for it.

2. DevOps has little ability to control the specific functions of an application it deploys

3. DevOps assets are, at best, managed by the same source control tools, but in general they are very disconnected from the application developers and the app source code itself.

Trend 4 — Regionalized data centers — Hardware vendors are beginning to make available powerful regionalized data centers located closer to metros and neighborhoods. This processing tier is sometimes categorized as colocation, near cloud, or edge. This emerging mid-tier physical compute layer is forcing developers to consider the following challenges:

1. How to define the requirements of applications that may be running in the local data center

2. When assets physically move from one nearest data center to the next, how does the application move the session and state (see the sketch after this list)

3. How is data kept private and secure while moving and being processed, when there are so many points of transit and rest
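The hand-off question in item 2 can be sketched very simply. Below, two in-memory dictionaries stand in for the session stores of two regional data centers; the region names and session contents are made up for illustration.

```python
# A sketch of moving session state between regional data centers as a vehicle
# or device crosses from one region's coverage into another's. The two
# in-memory "regions" stand in for real data-center session stores.
import json
import time

regions = {"dc-east": {}, "dc-west": {}}   # hypothetical regional session stores

def create_session(region, session_id, state):
    regions[region][session_id] = {"state": state, "updated": time.time()}

def hand_off(session_id, src, dst):
    """Serialize the session in the source region and rehydrate it in the destination."""
    payload = json.dumps(regions[src].pop(session_id))   # remove from the old region
    regions[dst][session_id] = json.loads(payload)       # restore in the new region
    return regions[dst][session_id]

create_session("dc-east", "truck-42", {"route": "I-80", "last_reading": 21.4})
print(hand_off("truck-42", "dc-east", "dc-west"))
```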

What’s Next?

So what is the next architecture model that will allow developers to rapidly create applications that fully leverage all the sensors and compute capability of the future? Seeing these trends, one concludes that many of the interconnectivity challenges of the past have been addressed and are now taking incremental steps forward with 5G and other narrow-band wireless protocols.

Instead, we are now entering an era where we have the capability to dynamically run all or parts of our application wherever and whenever we want; to spread our applications across a spectrum of compute architectures; to have subsets of functionality rapidly sent out to ingest from specific sensors with specific protocols. In many ways we are bringing the dynamic, easily configurable, reproducible behaviors of DevOps into the application itself.

Welcome Dynamic Application Architecture -> the software design model for 2020–2030

Dynamic Application Architecture allows for

· Defining applications as segments of function for data storage, logic execution, ingestion of sources, localized security, and domain-specific analytics

· Mapping functional segments of the application to deployment targets (see the sketch after this list)

· Providing transparent movement of data structures to best align with their source, their points of integration, and their necessary consumers.

· Implementation that’s agnostic to the underlying OS / chip architecture or virtualized environment

· Compute that can be leveraged dynamically, whether it’s cloud, VM, gateway, PC, or mainframe
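To make the mapping bullet above concrete, here is a minimal sketch of declaring functional segments and mapping each one to a deployment target. The segment names, kinds, and targets are illustrative assumptions; a real system would resolve targets to actual hosts and runtimes.

```python
# A sketch of a deployment map: functional segments of an application, each
# assigned to a deployment target. Names and targets are illustrative.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    kind: str        # "ingest", "logic", "storage", "analytics", ...
    target: str      # "gateway", "regional-dc", "cloud", ...

deployment_map = [
    Segment("modbus-ingest",   "ingest",    "gateway"),
    Segment("rules-engine",    "logic",     "gateway"),
    Segment("fleet-analytics", "analytics", "regional-dc"),
    Segment("history-store",   "storage",   "cloud"),
]

def segments_for(target):
    """List the segments that should be deployed to a given target."""
    return [s.name for s in deployment_map if s.target == target]

print(segments_for("gateway"))   # -> ['modbus-ingest', 'rules-engine']
```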

Dynamic Application Architecture will continue to promote

· Open standards for communication with defined APIs

· Support separation of concerns between client and server

· Support separation of concerns between logic and user interface

Ultimately, thanks to Dynamic Application Architecture, we will be able to better use this spectrum of compute capability, leverage the existing high-speed communication we have, and better serve the diaspora of new devices for gathering, sharing, and interacting with the information around us.

Practical Embodiments

We see the inklings of the Dynamic Application Architecture emerging in vendor offerings today.

Dynamic Logic

Many applications today are built almost entirely with microservices calling each other as needed. In the cloud, this allowed small bits of compute, each easily owned by a developer, to be deployed wherever capacity was available. These microservices are generally stateless, which provides the ability to scale rapidly. With the emergence of new compute at the edge, we have already seen a rapid rise in the desire to take a subset of those microservices and push them to gateway devices.
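As an illustration, a stateless service of this kind can be as small as the sketch below, written with only the Python standard library; the port and response shape are assumptions. Because it keeps no session state, the same code could run unchanged on a cloud VM or on a gateway device.

```python
# A minimal sketch of a stateless microservice using only the standard library.
# The port and the JSON response are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No session state is kept between requests, so any number of copies of
        # this process can serve traffic wherever they happen to be deployed.
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```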

ClearBlade — In a ClearBlade system, developers create services that implement logic in JavaScript. These services execute rules, send data to third parties, and notify others of events. The deployment model in a ClearBlade system allows for selecting one of these services and deploying it to any number of edges selected as part of that deployment. Thus, the end developer is able to near-instantly move code to runtime environments on gateway devices. In essence, the microservices are able to run anywhere.

Azure Edge — Azure has followed the approach of many device management platforms, offering the ability to push small containers from the cloud and run them on gateway devices. These containers are essentially entire web server applications written by end developers from the ground up. The ability to rapidly get your server application onto the gateway device represents a first step toward being able to dynamically shift application processing.

AWS Greengrass — In an effort to keep AWS IoT a premier cloud offering for enterprises, AWS has created Greengrass, free for its ecosystem. Its intent is to allow a subset of its cloud services to run outside the AWS environment, on gateway devices instead. Part of the Greengrass offering includes its Lambda capability, meaning that end developers can author Lambda functions in JavaScript or Python and then have those logic units run on the gateway running Greengrass.
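As a rough illustration, a Lambda-style function that could be pushed down to a gateway might look like the sketch below. The handler signature is the standard Lambda entry point, but the event shape and the local rule are assumptions, not Greengrass specifics.

```python
# A sketch of a Lambda-style handler. The event shape and the threshold rule
# are assumptions; real deployments wire the function to specific sources.
import json
import logging

logging.basicConfig(level=logging.INFO)

def lambda_handler(event, context):
    """Standard Lambda entry point: receives an event, returns a result."""
    reading = float(event.get("value", 0.0))
    alert = reading > 80.0                      # illustrative local rule
    logging.info("reading=%s alert=%s", reading, alert)
    return {"statusCode": 200, "body": json.dumps({"alert": alert})}

# Local smoke test; the context argument is unused in this sketch.
if __name__ == "__main__":
    print(lambda_handler({"value": "85.2"}, None))
```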

Dynamic Flow

While microservices have traditionally defaulted to exposing their behavior through an HTTP REST interface, the rise of IoT has created the need for a more efficient, smaller-footprint, and nearer-real-time protocol. Rather than a REST request/response polling model, the pub/sub message-receive model has proven far more efficient. Most often today we see MQTT as the preferred interface, but ultimately the underlying protocol is not a requirement of a dynamic flow model. Ultimately, the ability to ingest from any source into the compute instances where our logic can run is the required capability for dynamic flow. In MQTT terms, this means that a device can subscribe to a topic on a gateway broker as easily as it can on a cloud server. That transparency and common behavior is the necessity of the dynamic flow capability.
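A minimal sketch of that transparency, assuming the paho-mqtt client library (1.x callback API): the broker host comes from configuration, so the same subscriber code points at a gateway broker or a cloud broker without any other change. The topic name is made up for illustration.

```python
# A sketch of the pub/sub receive model with paho-mqtt (1.x callback API).
# The broker host is configuration, so the same code subscribes to a topic on
# a gateway broker or on a cloud broker.
import os
import paho.mqtt.client as mqtt

BROKER_HOST = os.environ.get("BROKER_HOST", "localhost")   # gateway or cloud broker
TOPIC = "factory/line1/temperature"                        # illustrative topic

def on_connect(client, userdata, flags, rc):
    print("connected to", BROKER_HOST, "rc =", rc)
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Messages arrive as they are published; no polling loop is required.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```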

ClearBlade — ClearBlade’s edge platform allows devices to connect to it with the same connection, authentication, and permission model whether on a gateway or a cloud server. This means that a protocol-specific adapter, for example for Modbus, can just as easily stream data to cloud compute instances as to edge gateways. The consistency of this model allows a dynamic application to rapidly change the ingestion point of streaming device information. It also allows vehicles (cars, drones, trains, boats) that travel from one edge region to another to continue their communication model without region-specific connection properties.
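A generic sketch of such an adapter, not ClearBlade’s actual API: the Modbus read is stubbed out, and the broker host setting decides whether the stream lands on an edge gateway or in the cloud (again assuming paho-mqtt, 1.x client API).

```python
# A generic protocol-adapter sketch: read from a field protocol (the Modbus
# read is a stub) and stream values to whichever broker the adapter is pointed
# at, edge gateway or cloud. This is an illustration, not a vendor API.
import json
import os
import random
import time
import paho.mqtt.client as mqtt

BROKER_HOST = os.environ.get("BROKER_HOST", "localhost")  # edge gateway or cloud

def read_modbus_register() -> float:
    """Stand-in for a real Modbus read; returns a fake register value."""
    return round(random.uniform(20.0, 25.0), 2)

client = mqtt.Client()
client.connect(BROKER_HOST, 1883)
client.loop_start()

while True:
    payload = json.dumps({"temperature": read_modbus_register()})
    client.publish("plant/boiler/temperature", payload)
    time.sleep(5)
```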
