Evolving API Management into a Harmonized API-led Interoperability Framework

Tame Complexity with Harmonized REST, Event, HATEOAS, Data-as-a-Product and Security Frameworks

TRGoodwill
API Central
11 min read · Sep 20, 2022


In a large organisation a primary concern of an interoperability framework will be controlling complexity and ensuring the maintainability and evolvability (and thus survivability) of the platform over time, and in the face of inevitable changes to budget and strategic priorities. The framework and the platform must find a balance between simplicity and sophistication.

The following discussion covers integrated models, platforms and lifecycle management incorporating:

  • Business Events,
  • HATEOAS and Affordances,
  • Distributed Data Mesh,
  • Security Platform Integration and
  • Integrated DevOps.

It builds on the Model Driven Design approach discussed here.

One Model, One Framework, One Platform

Complex multi-modal integrations are inherently difficult and expensive to comprehend, change, secure, manage and scale. Coordinating data-as-a-product, REST API, affordance, HATEOAS and business event models around the same authoritative domain business resource model simplifies development, specification, discovery and comprehension, security and management.

In a harmonized interoperability framework the business resource model captures not only operational entities and value objects, but also the state lifecycle and temporal analytical models. The state lifecycle becomes a first-class business resource description document, alongside the REST and analytical models. From the state-lifecycle definition are derived business events and triggers, state transition affordances and HATEOAS links.

The Business Resource Model: Captures the REST data model, affordances and business events.
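As a concrete illustration, below is a minimal sketch (in Python, for a hypothetical invoice resource; all names are illustrative assumptions) of a machine-readable state-lifecycle definition from which business events, transition affordances and HATEOAS links can be mechanically derived:

```python
# A minimal sketch of a state-lifecycle definition for a hypothetical
# 'invoice' business resource. Transitions map to affordance operations
# and HATEOAS links; successful transitions map to published business events.
INVOICE_LIFECYCLE = {
    "resource": "invoice",
    "states": ["draft", "issued", "paid", "cancelled"],
    "transitions": [
        {"from": "draft",  "affordance": "issue",  "to": "issued",    "event": "invoice.issued"},
        {"from": "issued", "affordance": "pay",    "to": "paid",      "event": "invoice.paid"},
        {"from": "issued", "affordance": "cancel", "to": "cancelled", "event": "invoice.cancelled"},
    ],
}

def affordances_for(state: str) -> list[dict]:
    """Derive the affordance operations (and hence HATEOAS links) valid in a given state."""
    return [t for t in INVOICE_LIFECYCLE["transitions"] if t["from"] == state]
```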

At an operational level, an Interoperability Framework will seize opportunities to streamline and align protocols, interfaces, analytics, access control and workflow management across heterogeneous API management, event messaging and security platforms in multi-cloud and hybrid-cloud environments.

Harmonized Business Events

“Across boundaries, handle updates asynchronously” Evans, E 2003, Domain-Driven Design.

The documented business resource state-lifecycle describes the states and transitions that constitute publishable business events. Every business System-of-Record will publish business events in a predictable way to the enterprise event messaging platform in real-time.

System-of-Record business resources emit real-time update and state transition event messages. Event messages are an abstraction that does not expose internal database schemas across domains.

“Events that originate from a bounded context must be explicitly defined and then they can be consumed by the bounded contexts of others” The Open Group, Open Agile Architecture

In a harmonized framework, the business event is intrinsically bound to the REST model. An event message may carry the entire current representation of a business resource, or a delta to it, as per the W3C WebSub protocol or a REST-aligned AsyncAPI schema; or it may simply be a notification that the resource has changed, with a link back to the REST API as the source of truth for the current state of the resource, as per the secure ‘claim-check’ pattern.

Reasons to consider a light-ping / claim-check pattern

The ‘light-ping’ or ‘claim-check’ event messaging pattern is inherently secure, efficient and simple to comprehend and manage. This pattern inherits the real-time security rigour of managed REST APIs, mitigates right-to-be-forgotten complexity, imposes relatively predictable capacity demands, and alleviates event topic versioning headaches. There is a uniform schema for all enterprise events, and no requirement for field encryption or fine-grained encryption key management.

The event message would ideally include the transition event and resultant state. In many cases subscribing systems will be interested to know only that a certain state transition of interest has occurred for a resource: for example, that an invoice has been paid, or an application has been approved.

An event message envelope (such as CloudEvents) will contain metadata about the event, such as a date-timestamp, unique event Id, the transition event, resultant state, system actors, the resource Id and REST link. Trace headers should link chains of API interactions and business events. An ETag value might anchor an event to a REST representation, and API response document Link headers might route a client to a subscription hub.
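A minimal sketch of what a light-ping event message might look like, using CloudEvents envelope attributes (the field names under data are illustrative assumptions, not part of the CloudEvents specification):

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# A 'light-ping' event: the envelope carries event metadata, while the payload
# holds only the transition event, resultant state and a link back to the
# REST API as the source of truth for the current resource representation.
event = {
    "specversion": "1.0",
    "type": "com.example.invoice.paid",        # the transition event
    "source": "/finance/invoices",             # the emitting System-of-Record
    "id": str(uuid4()),                        # unique event Id
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {
        "resourceId": "inv-0042",
        "state": "paid",                       # resultant state
        "href": "https://api.example.com/finance/invoices/inv-0042",
        "etag": "\"33a64df5\"",                # anchors the event to a REST representation
    },
}
print(json.dumps(event, indent=2))
```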

A harmonized interoperability platform: discoverability and self-service subscription extend to business events

Self-service interfaces to the event message platform should be easy to discover and comprehend, and offered via the API developer portal: discoverable, secure APIs backed by a subscription portal/hub, opening the platform to mechanisms that support dynamic subscription to (and unsubscription from) events at resource-instance granularity.

Platforming the Subscription Hub

The W3C WebSub protocol provides a solid template for secure programmatic event subscription, and it can be applied regardless of payload pattern. Organisations that maintain an in-house, platform-agnostic API management portal have a unique opportunity to integrate with an event hub service to consolidate API and event discovery and subscription into a single platform.
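For illustration, a minimal sketch of a programmatic WebSub subscription request in Python (hub and topic URLs are hypothetical; the hub.* form fields are those defined by the W3C recommendation):

```python
from urllib import parse, request

hub_url = "https://events.example.com/hub"  # hypothetical subscription hub
form = parse.urlencode({
    "hub.mode": "subscribe",
    "hub.topic": "https://api.example.com/finance/invoices/inv-0042",
    "hub.callback": "https://subscriber.example.com/callbacks/invoices",
    "hub.secret": "shared-secret-for-signed-deliveries",  # enables HMAC-signed deliveries
}).encode()
req = request.Request(hub_url, data=form,
                      headers={"Content-Type": "application/x-www-form-urlencoded"})
# The hub typically responds 202 Accepted, then verifies intent with a GET
# to the callback containing hub.challenge before activating the subscription.
with request.urlopen(req) as resp:
    print(resp.status)
```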

Event streaming platforms (such as Kafka) can enable both business event messaging and log-streaming of technical events to enterprise logging, SIEM and analytics subscribers. Technical event streams should however be platformed on instances or tenancies separate from business events — they will have a different QoS profile and are less predictable in terms of size and volume.
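As a sketch of this separation (assuming the confluent-kafka Python client; broker addresses and topics are hypothetical), business and technical streams target distinct clusters with different delivery tuning:

```python
from confluent_kafka import Producer

# Business events: separate cluster, tuned for delivery guarantees.
business = Producer({"bootstrap.servers": "events.example.com:9092",
                     "enable.idempotence": True})
# Technical/log events: separate cluster, tuned for throughput.
technical = Producer({"bootstrap.servers": "logs.example.com:9092",
                      "linger.ms": 100})

business.produce("invoice-events", value=b'{"type": "invoice.paid"}')
technical.produce("gateway-access-logs", value=b'{"status": 200}')
business.flush()
technical.flush()
```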

HATEOAS and Affordances

Alignment with the State-Lifecycle Model

A focused, minimum-viable pattern for HATEOAS will help identify valid use-cases and enable broader application. In a minimum-viable implementation as part of a harmonized framework, HATEOAS links are not a medium of discovery, but are employed to indicate the subset of state-lifecycle transition affordances applicable to a resource in its current state, and in the current invocation context.

The flow is predictable and tightly aligned with the domain state-lifecycle model:

  1. The resource response document will represent the current state and present valid links for transitions to adjacent states.
  2. Each HATEOAS link is keyed to a documented state-lifecycle transition affordance operation.
  3. Successful invocation of the affordance would result in a state transition and a published business event.
  4. Where valid state-transitions change, HATEOAS links returned with the REST resource will change accordingly.

Alignment of HATEOAS, State Transition Affordances and Business Events
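A minimal sketch of steps 1, 2 and 4 for the same hypothetical invoice resource: the representation presents only those links that are valid in its current state, each keyed to a documented lifecycle transition:

```python
# State-keyed transition affordances (rel, method, URI template); illustrative only.
TRANSITIONS = {
    "draft":  [("issue",  "POST", "/invoices/{id}/issue")],
    "issued": [("pay",    "POST", "/invoices/{id}/payment"),
               ("cancel", "POST", "/invoices/{id}/cancellation")],
    "paid":   [],  # terminal state: no transition affordances remain
}

def representation(invoice_id: str, state: str) -> dict:
    """Build a resource representation whose links reflect the current state."""
    links = {"self": {"href": f"/invoices/{invoice_id}"}}
    for rel, method, template in TRANSITIONS[state]:
        links[rel] = {"href": template.format(id=invoice_id), "method": method}
    return {"id": invoice_id, "state": state, "_links": links}

print(representation("inv-0042", "issued"))  # presents 'pay' and 'cancel' links
```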

HATEOAS and affordances are covered in more detail in The Engine of Application State: Aligning HATEOAS, Affordances and… (TRGoodwill, Aug 2022, Medium).

Federated Data Platform and Distributed Data Mesh

Federated Data Platform

System-of-Record REST models that clearly express and expose composable, filterable aggregates within the domain, and an enterprise event message log of temporal business events, together provide the foundations of a discoverable, decoupled, secure and scalable federated data platform.

“Source domain datasets represent closely the raw data at the point of creation and are not fitted or modelled for a particular consumer. Data consumers can always go back to the business facts and create new aggregations or projections.” — The Open Group 2019, Open Agile Architecture.

Aggregated temporal views are constructed from source-of-truth representations and event metadata. Analytics platforms are downstream, non-blocking consumers of business data.

Data Mesh principles as articulated by Zhamak Dehghani (Thoughtworks) are an extension of the above concepts. The enterprise may elect to evolve a Federated Data Platform into a Distributed Data Mesh as the agile organisation and the platform mature.

Data-as-a-Product and an API-enabled Distributed Data Mesh

Following data-as-a-product principles, product teams…

“… are not only responsible for providing business capabilities but also responsible for providing the truths of their business domain as source domain datasets” Dehghani, Z 2019, Data Monolith to Mesh.

These ‘business truths’ include operational data, historical data, and immutable aggregated temporal views of events, entities and actors. So for example, it is not only possible to extract the contents of a customer’s open or historical order, but also to discover how many times they have ordered a particular item in the last year.
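A toy sketch of such a temporal view, computed from an immutable log of business events (the event structure is an illustrative assumption):

```python
from datetime import datetime, timedelta, timezone

events = [
    {"type": "order.placed", "customerId": "c-1", "itemId": "sku-9",
     "time": datetime(2022, 3, 1, tzinfo=timezone.utc)},
    {"type": "order.placed", "customerId": "c-1", "itemId": "sku-9",
     "time": datetime(2022, 8, 15, tzinfo=timezone.utc)},
]

def orders_in_last_year(events, customer_id, item_id, now):
    """Count how many times a customer ordered an item in the year before 'now'."""
    cutoff = now - timedelta(days=365)
    return sum(1 for e in events
               if e["type"] == "order.placed"
               and e["customerId"] == customer_id
               and e["itemId"] == item_id
               and e["time"] >= cutoff)

print(orders_in_last_year(events, "c-1", "sku-9",
                          now=datetime(2022, 12, 1, tzinfo=timezone.utc)))  # -> 2
```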

The modelling of domain-relevant temporal views is in the hands of domain experts, in close consultation with consumers of analytical data. Temporal data models, business APIs and events are all predicated upon the canonical business data model — the need for a unified practice of model design and governance should be obvious.

A data-as-a-product approach will leverage (and potentially extend) existing API and event management platforms and pipelines to publish and secure data products. Enterprise distributed query and data/analytics platforms are downstream consumers of business data products and are responsible for the experience/discovery layer and for the composition of esoteric and cross-domain temporal views.

Security Platform Integration

Below is a brief overview of a more detailed discussion here: Securing APIs with an Integrated Security Framework

Security By Design and Continuous Security Testing

Security posture is a product of Information Rights Management (IRM) aligned security/regulatory controls governed, enabled and enforced by a framework of patterns, processes, platforms & automation.

In a model driven design process, security is embedded into every stage. Data is classified and regulatory controls are identified during the domain modelling phase. DevOps processes validate API management security controls against a pre-configured set of security policies and schemes. API management and API gateways are closely integrated with security platforms and enforce security policies at run-time.

Security by Design: A fundamental element of Model Driven Design

Penetration testing targeting known risks and vulnerabilities (incl. the OWASP Top 10) is overseen by cyber-security specialists and is routinely reviewed and maintained. Continuous testing of the security controls applicable to each business API is a delivery team responsibility and is built into integration and delivery pipelines and regression tests.

Zero-Trust, OAuth 2.x and OIDC

A zero-trust posture and mandatory OAuth2/OIDC token requirement for API invocation provides opportunities to layer defence-in-depth, to harmonise API management (APIM) with Identity and Access Management (IAM) workflows and to provide levers to align both API and event access management with an organisation’s security posture and policies.

System identity is assured by the OAuth2 ‘client_id’ claim, and OIDC end-user identity by the ‘sub’ claim. Knowledge of the target API resource and the use of API scopes provide security platforms with a mechanism for authorization and for analytics-driven (SIEM) threat protection based on client, user or invocation context.

Indicative OAuth 2.0 Token Exchange flow
Patterns for API scope authorization are expanded on here
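For orientation, a minimal sketch of the token exchange request itself, per RFC 8693 (the STS endpoint, client credentials and scope names are hypothetical):

```python
from urllib import parse, request

# A gateway or intermediary exchanges an inbound token for one scoped to the
# downstream audience. Form fields are those defined by RFC 8693.
form = parse.urlencode({
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "eyJhbGciOi...",  # the inbound access token (truncated placeholder)
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://api.example.com/finance",
    "scope": "invoice:read invoice:pay",
}).encode()
req = request.Request(
    "https://sts.example.com/oauth2/token", data=form,
    headers={"Content-Type": "application/x-www-form-urlencoded",
             "Authorization": "Basic Y2xpZW50OnNlY3JldA=="},  # client authentication
)
with request.urlopen(req) as resp:
    print(resp.read())  # JSON containing the exchanged access_token
```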

Workflow and Profile Alignment

Organisations that manage sensitive data should provide a single portal and process for domain and external-organisation enrollment to both STS and API management platforms. Similarly, a single portal and process should be offered for simultaneous registration of a client system to the STS and API management, provisioning to event platforms, and for the management of client Id, profile, credentials and certificates.

Shared onboarding process to synchronize org and client profiles

A shared, single point of entry process for organisation and client system on-boarding to IAM and API Management platforms provides an optimal client experience, reduces duplication of effort and keeps the two platforms synchronised.

Security Platform Integration is discussed in more detail here.

Logging, Tracing and Security Incident and Event Management

API management and event message platform integration into analytics driven Security Incident and Event Management (SIEM) platforms is essential and will require API/Event platform and security teams to work together on event schemas and taxonomy.

The cyber security team (or specialist proxy) actively maintains solutions to monitor and protect resources. These solutions may leverage increasingly sophisticated AI tools to detect, investigate and respond to abnormal invocation patterns and user behavior.

Dynamic threat protection may include denying resource access at the authorization server based on specific contextual data, such as user, client, geography, time, identity assurance level or other relevant factors. At the API Gateway, individual source IPs, clients, users or access tokens can be blacklisted and denied service.

Distributed tracing can provide detailed incident data and valuable insights into interconnected systems. Native cloud tracing is easily employed but difficult to stitch together across platforms. Consider a non-proprietary, cross-platform standard such as OpenTelemetry for hybrid and multi-cloud environments.
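A minimal sketch of OpenTelemetry trace propagation in Python (requires the opentelemetry-api and opentelemetry-sdk packages, with a TracerProvider configured; span and attribute names are illustrative):

```python
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer("checkout-service")

# The span wraps an outbound interaction; injecting into a header carrier adds
# a W3C 'traceparent' entry so downstream platforms can stitch the chain of
# API calls and business events into a single distributed trace.
with tracer.start_as_current_span("pay-invoice") as span:
    span.set_attribute("invoice.id", "inv-0042")
    headers: dict[str, str] = {}
    inject(headers)
    print(headers)  # these headers accompany the REST call or event publish
```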

Model Driven Development and Integrated DevOps

A well-defined process and complementary tooling to enable model driven development and vertically integrated DevOps will reduce the (often considerable) friction between development and maintenance of the domain model, and implementation and delivery of the model.

Model Driven Development

A domain modelling platform must enable all stakeholders (business owners, enterprise and domain architects, security architects, platform SMEs, tech leads and developers) to interact with, and contribute to, the same domain data model in context, and to be notified of changes that interest them.

Ideally, modelling tools will generate OpenAPI specifications directly from the domain model, support source control, and manage semantic versioning across the model and its derivative artefacts — features that are helpful in maintaining the currency and traceability of published APIs.

Model Driven Development: A collaborative modelling platform (e.g. Jargon) may generate interface specifications directly from the model
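To illustrate the principle only (not the behaviour of any particular platform), the sketch below derives an OpenAPI path skeleton from the hypothetical INVOICE_LIFECYCLE structure defined in the earlier state-lifecycle sketch:

```python
import json

def paths_from_lifecycle(lifecycle: dict) -> dict:
    """Derive one affordance operation path per documented lifecycle transition."""
    paths = {}
    resource = lifecycle["resource"]
    for t in lifecycle["transitions"]:
        paths[f"/{resource}s/{{id}}/{t['affordance']}"] = {
            "post": {
                "operationId": f"{t['affordance']}{resource.title()}",
                "summary": f"{t['from']} -> {t['to']}; emits {t['event']}",
                "responses": {"200": {"description": "state transition applied"}},
            }
        }
    return paths

print(json.dumps({"openapi": "3.0.3",
                  "info": {"title": "Invoice API", "version": "1.0.0"},
                  "paths": paths_from_lifecycle(INVOICE_LIFECYCLE)}, indent=2))
```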

Vertically Integrated DevOps

Microservices architectures are “built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services” — Microservices (martinfowler.com).

As managed APIs are the interface to deployed microservices, DevOps tooling must ensure that new and updated interfaces generated by domain modelling tooling and implemented by a business service are published to relevant API management (and potentially event, security and distributed graph) platforms simultaneously with the deployment of the business service. Deployment of APIs must be automated, domain-autonomous and as frictionless as possible.

Vertically Integrated DevOps: API specifications and event config are produced by a collaborative modelling platform, validated and tested by integration and deployment automation, and deployed simultaneously with the business service

Compliance with design standards and security policy is automated wherever possible. DevOps pipelines will apply Policy-as-Code controls, enforce mandatory standards with specification document linting, and will validate, or even inject, API security schemes as appropriate for a given context.
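A minimal sketch of such a pipeline gate: a Policy-as-Code style check that fails the build when an OpenAPI 3.x document (assumed here to be JSON) declares no security schemes or leaves an operation without a security requirement. The rules are illustrative, not a complete linter:

```python
import json
import sys

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def check(doc: dict) -> list[str]:
    """Return policy violations for missing security schemes/requirements."""
    errors = []
    if not doc.get("components", {}).get("securitySchemes"):
        errors.append("no securitySchemes declared")
    global_security = doc.get("security")
    for path, item in doc.get("paths", {}).items():
        for method, op in item.items():
            if method in HTTP_METHODS and not (op.get("security") or global_security):
                errors.append(f"{method.upper()} {path}: no security requirement")
    return errors

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        violations = check(json.load(f))
    for v in violations:
        print("POLICY VIOLATION:", v)
    sys.exit(1 if violations else 0)
```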

These measures serve to ensure robust, secure APIs of a consistent quality, provide early feedback on document and configuration issues, and minimize blocking engagements with a centralized API management team.

API lifecycle management and DevOps are discussed in more detail here.

Summary

Coordinating REST API, affordance, HATEOAS, business event and data-as-a-product models around the same authoritative domain business information resource model simplifies development, specification, discovery and comprehension.

A harmonized interoperability framework provides opportunities to streamline and align protocols, interfaces, analytics, access control and workflow management across heterogeneous API management, event messaging and security services, enabling the enterprise to manage interoperability as a consistent, contiguous platform.

The foundations and evolution of API maturity are discussed in the related article API Interoperability Maturity Model (TRGoodwill, Oct 2022, Medium).


TRGoodwill
API Central

Tim has several years’ experience in the delivery and evolution of interoperability frameworks and platforms, and currently works out of Berlin for Accenture ASG.