API Interoperability Maturity (Archived)

A Model for REST Enabled Interoperability

TRGoodwill
18 min read · Sep 24, 2022

This article has been retired in favor of a briefer version here: API Interoperability Maturity Model | by TRGoodwill | Oct, 2022 | Medium

Introduction

As domain autonomy increases, so do the integration and dependency management challenges. Left unmitigated, these challenges will become a drag on agility. An interoperability strategy can reduce coupling and friction between domains, and improve the quality, security and accessibility of business data.

Before we can propose an API-led interoperability framework, we need to define what mature, interoperable APIs look like, and how we can facilitate and measure progress toward maturity.

This article introduces an API interoperability maturity model specifically tailored to an enterprise-managed API capability context. The intention is not to be prescriptive, but rather to identify challenges and opportunities, spark discussion and hopefully inspire the adoption of an agreed and contextually fit-for-purpose model of API interoperability maturity.

API Maturity Models

The Richardson Maturity Model and Web API Design (Amundsen) Maturity Model are highly regarded models of API maturity. The most mature tier of each of these models enshrines Fielding’s original vision, in which application state transitions are driven purely by contextual affordances provided via hypermedia links. Moreover, services have unfettered control over their own namespace, and instruct clients on how to address relevant operations in real time — only the root URI need be known to a client.

The vision is indeed compelling; however we must temper the dream with practical considerations:

  • There is a level of misalignment between HATEOAS and a specification-first approach to API management that must be addressed.
  • HATEOAS and dedicated affordances are architectural ‘features’ that come with development and maintenance costs. Implementation should be business driven and employ an evolvable, minimum-viable design.
  • APIs do not exist in a vacuum. Implementation must be decoupled, but also consistent both across the enterprise and within a multi-modal interoperability framework.

An API may tick all of the Richardson and Amundsen API maturity boxes and yet be barely usable and/or costly to maintain. When taking an organisational view of REST enabled interoperability there are many highly impactful considerations that might be embraced by a maturity model — such as the consistency of APIs, provenance and quality of the model, security, interoperability with asynchronous events, contribution to a federated data platform and frictionless API management.

An exploration of the Richardson and Amundsen maturity models, and tactics to improve compatibility with specification-first imperatives and URL versioning, is covered in the related article The Engine of Application State | by TRGoodwill | Aug, 2022 | Medium

A Managed API Interoperability Maturity Model

We have already seen that different measures and models can sharpen our thinking about the problem space. To this small collection we might add another — a model that is a little more contextual; concerned with the unique challenges of a large enterprise spanning numerous domains, and in which is embedded the common contemporary constraint of an enterprise managed API capability based on an API specification standard (such as OpenAPI). We might call this an API Interoperability Maturity Model.

The focus of this maturity model is the confluence of the domain model and API design standards as expressed in the API specification document.

API Specification: Derived from the REST model and API design standards

Maturity is a measure of the domain alignment and interoperability qualities of resource APIs, and thus is also concerned with strategic guidance.

The trend in diagramming such things appears to be four levels starting from zero, and arrows indicating progress from bottom to top. Let’s go with that.

A ‘Managed API Maturity Model’

Level 0

No Domain Model and/or No API Development Standards

Level zero represents the lowest barrier to entry to REST-like APIs for a large enterprise, and might take one of two forms: reactive and project driven, or centralized control. Each is deeply problematic.

Reactive and Project Driven

It may be that individual pilot projects have been running autonomously outside of legacy change management and integration constraints in order to deliver specific capabilities fast. The need to exchange data via REST APIs ahead of any coherent enterprise strategy will typically result in project focused, highly coupled APIs with little potential for reuse. APIs may have been delivered across the organisation with little consistency or predictability and may be hosted without an API management/portal service, or across multiple API management platforms. There is likely no self-service integration and perhaps inconsistent, incompatible security architectures. Integration will often require direct human-to-human engagement and additional development to cater for a new client and their specific needs.

Centralized Control

A centralized REST API competency center is for some organisations the first evolution from a SOA/ESB or other centralized integration capability. A competency center may grow out of a proof of technology / pilot approach to implementing an API management capability. Typically the competency center will be responsible for REST models, API lifecycles and dependency management, and responsibilities may even extend to supporting platforms.

Although the approach may deliver some of the benefits of level 1 maturity, such as consistent models and interfaces, it does so at the cost of a blocking dependency on a central team. Agility is limited. CI/CD aligned with product streams is unlikely to be supported.

Level 1

Minimal Domain Model and API Development Standards

Domain Autonomy

Level 1 is a significant step up from level 0. Business sub-domains are identified, and provided with registered domain namespaces. SMEs still review REST models, however model development, implementation and deployment are under the control of responsible domains. Domain decoupling is facilitated by an API Management platform which segregates API lifecycle and workflow roles and responsibilities by domain. The platform may expose management APIs to DevOps tooling. API version management and deprecation schedules are employed to provide stability and reduce coupling.

Discoverable, Self-Service APIs

While API management platforms should differentiate domain ownership of APIs, the platform must support a single portal and a single enterprise catalog for API discovery. The client portal will allow client developers to onboard client systems, manage their service registration, and to discover, test and subscribe to business resource APIs wherever they are hosted.

Data Stewardship and Oversight

The “enterprise data model” exists only as a federated model, an aggregate of the contributions of relevant subject matter and domain experts, each working within their allocated functional space. Guidance and active engagement by the responsible enterprise office will ensure consistency between domain models, appropriate security classification and transparent management of dependencies.

A shared enterprise vocabulary will minimize unnecessary semantic interoperability issues with common data elements (e.g. date-time formats). Where similar data models recur across capability domains, for example address data, responsibility for the resource archetype is delegated to a single domain.

Enterprise API Standards

API Standards are a focused collection of imperatives, conventions and guidance, and are intended to improve the consistency, stability, generality, predictability and usability of business resource APIs. They may articulate enterprise, industry and regulatory policies and frameworks. They may offer best-practice recommendations and provide a basis for quality assessment.

Governed, opinionated standards and patterns will be required to enable seamless interoperability between independent, decoupled domains. Balancing the benefits to development teams of an enterprise landscape of rich, composable, self-service business data against the impost on implementation flexibility is a difficult line to tread — standards, just like models, must learn from implementation and improve through iteration.

API design standards might be self-contained or reference and augment external standards, and would typically cover naming and addressing (path) conventions, versioning, security patterns, caching strategies and logging/tracing conventions and mechanisms. They would standardize payload, header and query parameter (incl. pagination and filtering) conventions, and cover error responses and HTTP status codes.
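As a concrete illustration of one such convention, a standards document might mandate a uniform error response shape. The sketch below assumes the RFC 7807 “problem details” format; the helper function and the example values are illustrative, not taken from any particular enterprise standard.

```python
# A minimal sketch of a standardized error response convention based on
# RFC 7807 "problem details". Every API returning this shape gives client
# developers one predictable error model to handle.

def problem_detail(status: int, title: str, detail: str,
                   type_uri: str = "about:blank") -> dict:
    """Build an RFC 7807-style error body shared by all enterprise APIs."""
    return {
        "type": type_uri,    # URI identifying the problem type
        "title": title,      # short, human-readable summary
        "status": status,    # mirrors the HTTP status code
        "detail": detail,    # occurrence-specific explanation
    }

body = problem_detail(404, "Not Found", "Subscription 12B34C does not exist")
```

A linting rule in the delivery pipeline can then verify that every documented error response references this schema.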

Experience APIs

UI/Experience APIs will tend to be tailored to specific applications (SPA/PWA/Mobile) or use-case requirements. Enterprise data requirements are mediated to and from enterprise resource-oriented and functional APIs. In this model, level 1 would be the target maturity level for these APIs. Closer alignment to a Domain Driven Design derived domain model may not be applicable, or desirable.

Level 2

Strong Domain Model, Strong Interoperability Strategy and Standards

In a large and complex organisation, robust interoperability demands a consistent approach to synchronous APIs and asynchronous events, so that each business capability contributes to a dynamic enterprise catalog of discoverable, predictable, composable, coherent and subscribable business resources.

Domain Driven Design

Domain Driven Design (DDD) is the cornerstone of any strategy to deliver domain aligned, agile capabilities. Domain driven design “divides up a large system into Bounded Contexts, each of which can have a unified model” Fowler, M 2014, BoundedContext. The scope and context of the business information managed by a system of record is arrived at through a process of DDD.

In a large enterprise DDD tooling and practices (such as Event Storming) are defined and may be guided, however individual business domain owners own the process, as well as the domain models, capabilities and interfaces that are derived from it. The process must be open and multidisciplinary, iterative, and learn from the realities of implementation.

The Domain Model and Model Driven Design

DDD addresses both strategic and tactical design. Through a process of broad stakeholder consultation, distillation and frequent iteration, it will build a comprehensive model of the domain by identifying aggregate roots and boundaries, entities and value objects, vocabulary, business events and state-lifecycle, commands and assertions, context and dependencies. It may also encompass guiding concepts such as a business value proposition statement and organising principles. It will explore existing applicable formalisms such as industry models and frameworks.

Domain data model and State-lifecycle diagram. Image by permission, Jargon.sh

The model is rigorously maintained as the source-of-truth document for the capabilities and interfaces that implement it.

Model Driven Design is discussed in a little more detail here

Security By Design

In a process of model driven design the capability development pipeline is vertical, and security is embedded into every stage of the process.

Security by Design: A cornerstone of model driven design

Data is classified and regulatory controls identified during the domain modelling phase. Data handling and security controls are implemented accordingly by the business service responsible for the data. DevOps processes will validate API management security controls against a pre-configured set of security policies and schemes, and automated security testing is incorporated into CI/CD pipelines. Client onboarding and API subscription via API developer portals may be validated and authorized where required. API gateway platforms are a key component of the enterprise security architecture and will be integrated with enterprise security platforms such as token services and SIEM.

Logging, Tracing and Security Incident and Event Management

API management and event message platform integration into analytics driven Security Incident and Event Management (SIEM) platforms is essential and will require API/Event platform and security teams to work together on event schemas and taxonomy.

Event tracing will provide detailed incident data and valuable insights into interconnected systems. Native cloud tracing is easily employed but difficult to stitch together across platforms. Consider a non-proprietary, cross-platform standard such as W3C Trace Context for hybrid multi-cloud environments.
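A sketch of propagating the W3C Trace Context `traceparent` header across service hops follows. The header format (version-traceid-parentid-flags) is per the W3C specification; the helper functions are illustrative.

```python
# A sketch of W3C Trace Context propagation: the trace id survives across
# every hop so spans can be stitched together in a hybrid multi-cloud
# environment, while each service mints its own span (parent) id.
import secrets

def new_traceparent() -> str:
    """Start a new trace at the edge (e.g. the first API gateway)."""
    trace_id = secrets.token_hex(16)   # 16-byte trace id, hex encoded
    parent_id = secrets.token_hex(8)   # 8-byte span id
    return f"00-{trace_id}-{parent_id}-01"

def child_traceparent(incoming: str) -> str:
    """Keep the trace id, mint a new span id for the downstream call."""
    version, trace_id, _parent, flags = incoming.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

tp = new_traceparent()
child = child_traceparent(tp)   # header forwarded on the downstream request
```

In practice the header is emitted and consumed by tracing middleware (e.g. OpenTelemetry SDKs) rather than hand-rolled code, but the propagation contract is the same.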

The Canonical Business Resource

The bounded context and its aggregates (entity groups defined by consistency boundaries) will encompass one or more business capabilities — realisable as microservices. Business information managed by a core business service, modeled appropriately, constitutes a canonical business resource, which represents the business facts about a domain with which external systems can interact via standard RESTful operations and subscription to business events.

A Business Resource represents the nouns of a capability, such as ‘applications’ and ‘applicants’. The model expresses the distillation of the core domain and conceptual contours to achieve a balance of composability and cohesion. To maximise intelligibility and utility in the enterprise context, domain models are not something to abstract away, rather, their intelligible expression is a key interoperability enabler. The business resource model is a communication medium, a means of sharing core business concepts in the ubiquitous language of the domain with client developers.

In some cases, depending on the level of integration and trust, business information will need to be carefully abstracted for clients external to the organisation — these should be managed as mediated services, and should not impact the domain model.

Composability

Microservices architectures and the REST architectural style enable decoupling, self-service and re-use by moving the responsibility for choreography from the resource server to the client. This shift in responsibility allows business systems to build stable, genericised interfaces to their business resources and capabilities, without tight coupling to client systems, which in turn allows client systems to compose data via self-service integration without a blocking dependency on external teams and coordinated releases.
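The shift of choreography to the client can be sketched as a client-side join across two stable resource APIs. The fetch functions below are stand-ins for HTTP GETs against hypothetical /v1/subscriptions and /v1/subscribers endpoints; all names and identifiers are illustrative.

```python
# A sketch of client-side composition: with stable, generic resource APIs,
# the client stitches related resources together itself, with no blocking
# dependency on provider teams or coordinated releases.

def get_subscription(sub_id: str, api: dict) -> dict:
    # stand-in for GET /v1/subscriptions/{id}
    return api["subscriptions"][sub_id]

def get_subscriber(person_id: str, api: dict) -> dict:
    # stand-in for GET /v1/subscribers/{id}
    return api["subscribers"][person_id]

def compose_view(sub_id: str, api: dict) -> dict:
    """Client-side join of a subscription with its subscriber."""
    sub = get_subscription(sub_id, api)
    return {**sub, "subscriber": get_subscriber(sub["subscriberId"], api)}

api = {
    "subscriptions": {"12B34C": {"subscriptionId": "12B34C",
                                 "subscriberId": "98D76C"}},
    "subscribers": {"98D76C": {"subscriberId": "98D76C", "name": "Jo"}},
}
view = compose_view("12B34C", api)
```

The provider exposes one generic interface; each client composes the view it needs.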

Composability and cohesion are discussed in more detail here

Anti-Corruption Layer

Systems operating across multiple domain boundaries “must be prepared for gradual and fragmented change, where old and new implementations co-exist without preventing the new implementations from making use of their extended capabilities… the architecture as a whole must be designed to ease the deployment of architectural elements in a partial, iterative fashion” — Fielding, R.T. 2000, Designing the Web Architecture: Problems and Insights.

From an enterprise self-service data point of view, it is particularly important to cater for this scenario as it is likely that numbered among legacy or proprietary systems will be core business capabilities managing data of critical business value. The application of an anti-corruption layer will enable a legacy or proprietary capability to expose discoverable, domain aligned, enterprise consistent interfaces to business data, and to consume APIs and events published by other business capabilities. An anti-corruption layer might be as simple and tactical as a mediating façade, or as comprehensive and strategic as decomposed component services managing replicated data.
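At the simple, tactical end of that spectrum, a mediating façade can be sketched as a translation layer between a legacy record layout and the domain-aligned resource model. The field names and code mappings below are invented for illustration.

```python
# A minimal anti-corruption-layer sketch: a mediating facade translates a
# legacy record layout into the domain-aligned resource model, so the
# legacy schema never leaks across the domain boundary.

LEGACY_TO_DOMAIN = {"CUST_NO": "customerId", "CUST_NM": "name",
                    "STS_CD": "status"}
STATUS_CODES = {"A": "active", "C": "closed"}   # legacy code -> domain term

def to_domain(legacy_row: dict) -> dict:
    """Map legacy column names and codes to the ubiquitous language."""
    resource = {LEGACY_TO_DOMAIN[k]: v for k, v in legacy_row.items()
                if k in LEGACY_TO_DOMAIN}
    resource["status"] = STATUS_CODES.get(resource["status"], "unknown")
    return resource

resource = to_domain({"CUST_NO": "98D76C", "CUST_NM": "Jo", "STS_CD": "A"})
```

The comprehensive, strategic end of the spectrum replaces this in-process mapping with decomposed component services managing replicated data.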

Affordances

Significant state-lifecycle transitions not naturally driven by external, asynchronous business events may warrant a dedicated affordance API. Affordance operations are always verbs that describe the action in the context of a resource (or resource collection). The intent of the affordance and its context are then clear.

For example:

“checkout” : “/users/98D76C/cart/checkout”
“cancelSubscription” : “/subscriptions/12B34C/cancel”

Not every client system will be interested in all of the information provided by a business resource API — even if it is modelled around core business data and streamlined for composability. A predictably implemented parameter driven ‘read’ affordance model or consistent HTTP QUERY method scheme that includes field and collection filters can reduce over-fetching and enhance composability — without risking a proliferation of client-coupled response document models.

For example:

GET /v1/subscriptions?subscriberId=12B34C&fields=subscriptionId,name
QUERY /v1/subscriptions  {...}

Note: the relatively new cache-able, idempotent HTTP QUERY method remains unsupported by OpenAPI 3 at the time of writing.
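The server-side effect of a consistent fields filter can be sketched as below. The parameter name and resource fields are illustrative; the point is that one predictable mechanism serves every client, instead of a per-client response model.

```python
# A sketch of a parameter-driven 'read' affordance: a consistent
# fields=... filter trims the response document server-side, reducing
# over-fetching without a proliferation of client-coupled models.

def apply_fields_filter(resources: list, fields_param: str) -> list:
    """Project each resource down to the requested field names."""
    wanted = [f.strip() for f in fields_param.split(",") if f.strip()]
    return [{k: r[k] for k in wanted if k in r} for r in resources]

subs = [{"subscriptionId": "12B34C", "name": "Gold plan",
         "internalRef": "x1"}]
# corresponds to GET /v1/subscriptions?fields=subscriptionId,name
trimmed = apply_fields_filter(subs, "subscriptionId,name")
```

Unknown field names are silently dropped here; a stricter implementation might instead reject them with a 400 response.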

Business Events

Domain business events are the asynchronous complement to synchronous REST interfaces, and align with the state-lifecycle events identified during the Domain Driven Design process. Each System-of-Record business resource will emit real-time update and state-lifecycle transition event messages that constitute the definitive log of temporal events.

System-of-Record business resources emit real-time update and state-lifecycle transition event messages

Like a resource API, event messages are an abstraction that do not expose internal database schemas across domains. For the sake of intelligibility and manageability, business events and APIs would ideally be rooted in a common abstract resource model.
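Rooting events and APIs in a common abstract resource model can be sketched as an event envelope that reuses the API's resource representation and adds only event metadata. The envelope fields loosely follow CloudEvents attribute names, but all values here are illustrative.

```python
# A sketch of an event message rooted in the same abstract resource model
# as the REST API: the payload is the resource representation itself, so
# consumers of either channel share one mental model.

def state_transition_event(resource: dict, transition: str) -> dict:
    """Wrap the API's resource representation in event metadata."""
    return {
        "type": f"subscription.{transition}",  # e.g. subscription.cancelled
        "source": "/v1/subscriptions",         # the emitting resource API
        "subject": resource["subscriptionId"],
        "data": resource,                      # same model the REST API serves
    }

event = state_transition_event(
    {"subscriptionId": "12B34C", "status": "cancelled"}, "cancelled")
```

No internal database schema appears in either the API response or the event payload; both expose only the abstract resource model.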

Federated Data Platform and Downstream Analytics

A distributed data platform “is founded in decentralization and distribution of responsibility to people who are closest to the data in order to support continuous change and scalability” — Dehghani, Z 2020, Data Mesh Principles and Logical Architecture.

System-of-Record REST models that clearly express and expose composable, filterable aggregates within the domain, and an enterprise log of temporal business events provide the foundations of a discoverable, decoupled, secure and scalable federated data platform. “Source domain datasets represent closely the raw data at the point of creation and are not fitted or modelled for a particular consumer. Data consumers can always go back to the business facts and create new aggregations or projections.” — The Open Group 2019, Open Agile Architecture.

Aggregated temporal views are constructed from source-of-truth representations and event metadata. Analytics platforms are downstream, non-blocking consumers of business data.

Level 3

Domain Model Driven DevOps and a Harmonized Interoperability Framework

While realization of consistent, discoverable, domain aligned business resource APIs is without a doubt a centerpiece of this API maturity model, a managed API capability strategy poses some challenges that will impede agility if not mitigated.

HATEOAS, the highest level of API maturity in the Richardson model, is in part about giving a business service control over, and flexibility within, its own namespace. When our API management strategy dictates any or all of published, self-contained API specifications, application on-boarding and API subscription, then we have inevitably lost some of the flexibility and control promised by the hypermedia vision. In which case, we need to find a way to recover that agility without compromising the strengths inherent in an API management capability.

Model Driven Development

A well-defined process and complementary tooling to enable model driven development and vertically integrated DevOps is required to reduce (the often considerable) friction between development and maintenance of the domain model and implementations of the model.

Perhaps the most important feature of a domain modelling platform is the ability of all participants, from business owners, enterprise and domain architects, data modelers, security architects, REST and EDA SMEs, tech leads and developers, to be able to interact with (and contribute to) the same domain data model in context, and to be notified of changes that interest them. In this way, the domain model “acts as a Ubiquitous Language to help communication between software developers and domain experts” Fowler, M 2014, BoundedContext, maximising collaboration, providing the tightest possible feedback loop, and ensuring that the domain model remains the definitive source-of-truth.

Some tools will generate OpenAPI/Swagger API specifications directly from the domain model, support source control, and manage semantic versioning across the model and its derivative artefacts — features that are helpful in maintaining the currency and traceability of published APIs.
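The generation step can be sketched as a transform from a domain model description to a minimal OpenAPI document. Real platforms (e.g. Jargon) do far more; the model structure, entity and path names below are invented purely for illustration.

```python
# A toy sketch of model-driven generation: deriving a minimal OpenAPI 3
# document from a domain model, so the published specification always
# traces back to the source-of-truth model.

def to_openapi(model: dict) -> dict:
    """Derive collection paths and operations from the model's entities."""
    paths = {}
    for entity in model["entities"]:
        collection = f"/{entity.lower()}s"
        paths[collection] = {"get": {
            "operationId": f"list{entity}s",
            "responses": {"200": {
                "description": f"A list of {entity} resources"}},
        }}
    return {
        "openapi": "3.0.3",
        "info": {"title": model["domain"], "version": model["version"]},
        "paths": paths,
    }

spec = to_openapi({"domain": "Subscriptions", "version": "1.0.0",
                   "entities": {"Subscription": {}}})
```

Because the model carries the semantic version, regenerating the specification keeps the published API version in lock-step with the model.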

Model Driven Development: A collaborative modelling platform (e.g. Jargon) may generate interface specifications directly from the model

Other domain modeling platform features strongly supportive of model driven development include:

  • Model validation
  • Support for meaningful documentation of state-lifecycles, e.g. state-lifecycle diagrams
  • Mapping and management of external dependencies, including notification management
  • Model discoverability, sharing and re-use

Vertically Integrated DevOps

Microservices architectures are “built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services” — Microservices (martinfowler.com). As managed APIs are the interface to deployed microservices, DevOps tooling must ensure that new and updated REST interfaces generated by domain modelling tooling and implemented by a business service are published to API management and event platforms simultaneously with the deployment of business services. Deployment of APIs must be automated, domain-autonomous and as frictionless as possible.

Vertically Integrated DevOps: API specifications and event config are produced by a collaborative modelling platform, validated and tested by integration and deployment automation, and deployed simultaneously with the business service

‘Code-first’ automation is a significant component of a mature API management capability, and is enabled by management APIs exposed by API management platforms.

Compliance with design standards and security policy is automated wherever possible. DevOps pipelines will apply Policy-as-Code controls, enforce mandatory standards with specification document linting, and will validate, or even inject, API security schemes as appropriate for a given context.
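A pipeline lint gate can be sketched as a function returning findings that fail the build. The two rules below (a mandatory security scheme and kebab-case paths) are illustrative; real pipelines typically drive a dedicated linter such as Spectral from a versioned ruleset.

```python
# A sketch of policy-as-code specification linting in a DevOps pipeline:
# a non-empty findings list fails the build before the API is published.

def lint(spec: dict) -> list:
    """Check an OpenAPI document against two illustrative mandatory rules."""
    findings = []
    if not spec.get("components", {}).get("securitySchemes"):
        findings.append("missing components.securitySchemes")
    for path in spec.get("paths", {}):
        if any(c.isupper() or c == "_" for c in path):
            findings.append(f"path not kebab-case: {path}")
    return findings

# This document violates both rules, so two findings are reported:
findings = lint({"paths": {"/Subscriptions": {}}})
```

Running the same ruleset locally and in CI gives delivery teams early feedback without a blocking engagement with a central team.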

Global API security testing targeting known risks and vulnerabilities (incl. OWASP top 10) is overseen by Cyber-Security specialists and is routinely reviewed and maintained. Continuous testing of security controls applicable to each business API is a delivery team responsibility and is built into integration and delivery pipelines and regression tests.

These measures serve to ensure robust, secure APIs of a consistent quality, provide early feedback on document and configuration issues, and minimize blocking engagements with a centralized API management team.

A Harmonized Interoperability Framework

REST API, affordance, HATEOAS, business event and data-as-a-product models are coordinated around the same authoritative domain business information resource model.

The business resource model captures not only operational entities and value objects, but also the state lifecycle and temporal analytical models. From the state-lifecycle definition are derived business events and triggers, state transition affordances and HATEOAS links.

The Business Resource Model: Captures the REST data model, affordances and business events

At an operational level, an Interoperability Framework will seize opportunities to streamline and align protocols, interfaces, analytics, access control and workflow management across heterogeneous API management, event messaging and security platforms.

A harmonized interoperability platform: coherent, secure and streamlined

HATEOAS and Affordances

In a minimum-viable implementation as part of a harmonized framework, every HATEOAS link is keyed to a documented state-lifecycle transition affordance operation, and successful invocation of the affordance would result in a state transition and a published business event.

HATEOAS links should employ the most efficient link format for the use-case. The name of every HATEOAS link will correspond to a documented operation with an explicitly defined request and response document, referring to an operationId or to an OAS 3 link name.
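Keying links to documented affordances can be sketched as a lookup from the resource's current state to the transitions valid in that state. The states, transition names and paths below are illustrative, reusing the earlier cancel-subscription example.

```python
# A sketch of minimum-viable HATEOAS: each link name corresponds to a
# documented state-transition affordance, and only transitions valid in
# the resource's current state are offered.

TRANSITIONS = {  # state -> affordances documented for that state
    "active": {"cancel": "/v1/subscriptions/{id}/cancel"},
    "cancelled": {},  # terminal state: no further transitions
}

def with_links(resource: dict) -> dict:
    """Attach _links for the affordances valid in the current state."""
    links = {name: {"href": tmpl.format(id=resource["subscriptionId"])}
             for name, tmpl in TRANSITIONS[resource["status"]].items()}
    return {**resource, "_links": links}

rep = with_links({"subscriptionId": "12B34C", "status": "active"})
```

Invoking the cancel affordance would drive the state transition and, in the harmonized model, publish the corresponding business event.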

HATEOAS, affordance and event alignment are explored here.

Harmonized Business Events

Business events are intrinsically bound to the REST model. An event message may be the entirety of, or a delta to, the current representation of a business resource as per the W3C WebSub protocol, or it may simply be a notification that the resource has changed, with a link back to the REST API as the source of the current state of the resource, as per the secure ‘claim-check’ pattern.
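The claim-check variant can be sketched as below: the event carries no business payload, only a link back to the REST API, so sensitive data stays behind the API's access controls. The field names and hostname are illustrative.

```python
# A sketch of the 'claim-check' notification pattern: consumers are told
# that the resource changed, then fetch the current state from the REST
# API using their own credentials.

def claim_check_notification(resource_id: str) -> dict:
    """Build a payload-free change notification with a claim-check link."""
    return {
        "type": "subscription.changed",
        "subject": resource_id,
        # No "data" attribute: the API remains the source of current state.
        "link": f"https://api.example.com/v1/subscriptions/{resource_id}",
    }

note = claim_check_notification("12B34C")
```

Because the fetch is authorized per consumer, the event channel itself never needs to enforce fine-grained data access rules.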

Business events are intrinsically bound to the REST model

Easy-to-discover, easy-to-comprehend self-service event message platform APIs should be offered via the API developer portal. A subscription portal/hub will support programmatic and dynamic subscription to events at a resource instance granularity.

Harmonized Business Events are discussed in more detail here.

API Enabled Data-as-a-Product

Data Mesh principles as articulated by Zhamak Dehghani (Thoughtworks) extend and evolve the Federated Data Platform concept, placing responsibility for analytical data with the operational system and its team. Data products will include operational data, historical data, and immutable aggregated temporal views of events, entities and actors. The modelling of domain-relevant temporal views is in the hands of domain experts, in close consultation with consumers of analytical data.

An API enabled, business resource predicated data-as-a-product approach will leverage (and potentially extend) existing API management platforms and pipelines to publish and secure data APIs. Enterprise distributed query and analytics platforms are downstream consumers of business data products and are responsible for the experience layer and for the composition of esoteric and cross-domain temporal views. GraphQL is a good fit for this use-case.

A distributed graph of domain business data

Security Platform Integration

A zero-trust posture and mandatory OAuth2/OIDC token requirement by API gateways and resource services can provide defense-in-depth, harmonize API management (APIM) with Identity and Access Management (IAM) workflows and ensure alignment of API and event access management with an organization’s security posture and policies.

System identity is assured by an OAuth2 ‘client_id’ claim, and end-user identity by the ‘sub’ claim. Knowledge of the target API resource and the use of API scopes provides security platforms with a mechanism for authorization and analytics driven (SIEM) threat protection based on client, user or invocation context.
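A gateway-side authorization check driven by these claims can be sketched as below. The claim values and scope names are illustrative; real gateways would also validate the token's signature, issuer, audience and expiry before inspecting claims.

```python
# A sketch of claim-based authorization at an API gateway: client_id
# asserts the calling system, sub the end user, and OAuth2 scopes gate
# the requested operation.

def authorize(claims: dict, required_scope: str) -> bool:
    """Allow the call only if the token carries the operation's scope."""
    return required_scope in claims.get("scope", "").split()

claims = {  # claims extracted from a validated OAuth2/OIDC access token
    "client_id": "portal-app",      # system identity
    "sub": "user-98D76C",           # end-user identity
    "scope": "subscriptions:read subscriptions:write",
}
allowed = authorize(claims, "subscriptions:read")
```

The same claims feed SIEM analytics, enabling threat detection keyed to client, user or invocation context.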

Indicative OAuth 2.0 Token Exchange flow

A shared, single point of entry process for organisation and client system on-boarding for IAM and API Management platforms provides an optimal client experience, reduces duplication of effort and ensures synchronicity between platforms.

Shared onboarding process to synchronize org and client profiles

End-to-end tracing and integration into Security Incident and Event Management (SIEM) platforms is assumed at L2 of this maturity model. In a process of active and continuous engagement with API, event and IAM platform SMEs, the cyber security team maintains and evolves solutions leveraging increasingly sophisticated AI tools to detect, investigate and respond to abnormal invocation patterns and user behavior.

Security Platform Integration is discussed in more detail here.

Summary

Agreement on an intentional API maturity model can underpin coherent and evolvable API and interoperability standards, identify interim and target states, and minimize technical debt.

The Richardson Maturity Model and Web API Design (Amundsen) Maturity Model focus on HATEOAS and dedicated use-case affordances; architectural features that may conflict with contemporary API management practices. In an enterprise environment in which an API Management capability is a given, reliance on these maturity models alone, as worthy as they are, can lead to inconsistent guidance, impede discoverability, and leave important challenges unaddressed.

We have outlined an API Interoperability Maturity Model, adjunct to, and derived from the Richardson and Amundsen models, concerned primarily with the interoperability qualities of APIs, and in which is embedded the common contemporary constraint of an enterprise managed API capability based on an API specification standard. The model provides guidance on the importance of standards and patterns, DDD derived business resources, security by design, composability, business events, model driven development, integrated platforms and vertically integrated DevOps, in achieving domain alignment, decoupling and agility.


TRGoodwill

Tim has several years’ experience in the delivery and evolution of interoperability frameworks and platforms, and currently works out of Berlin for Accenture ASG