API DESIGN AND IMPLEMENTATION: OUT OF SYNC? MUSINGS FROM OUR CHIEF ARCHITECT.
One of the questions I am often asked is “What happens when the design gets out of sync with what’s actually running?”
It’s a good question, and important to address. But it’s a bit like when a tourist asks for directions and a local responds “Well, I wouldn’t start from here!”
The question rests on a few assumptions.
Assumption #1: There are two things: API design and implementation.
It’s a fair assumption, but even describing the design and implementation as such implies they are disconnected: two discrete things. Sure, the implementation starts from a design, but when something is missed in the design, or requirements have changed, then updating the implementation is critical. Updating the design is a nice idea, but one that’s never followed through in practice.
There are good reasons for this disconnectedness. Good as in, “I get it,” but not good as in best practice.
I see this all the time: customers have teams doing detailed designs in spreadsheets. Different teams specify and build; there are handovers and divided responsibilities. Modern software development practices help by encouraging multi-disciplinary teams and bringing the business and technical focuses together, but even within those teams you see this disconnectedness. What is the team’s goal? What counts as the MVP: a running API, or a running API with accurate design and documentation?
And tooling focus is important, too: how you build the APIs. Almost all the tools customers use to build are code-focused. Okay, the “code” may be configuration files to link caches, define routing lookups, or set up logging rather than traditional languages like Java, Go, or C#, but the thinking is still very much a procedural programming paradigm.
“Not starting from here” means changing the way we think of design and implementation: keeping them connected, and describing the implementation as flowing from the design, so that a change in the design “flows” into the implementation.
Assumption #2: The implementation always has requirements overlooked by the design.
I often say “No design is complete until you’ve actually built it,” meaning that there’s always something you discover as you build which impacts the design. Perhaps it’s some logic that needed a new variable? Perhaps a target system is different from what was expected, or some new requirement is discovered half-way through a project?
If these things are simply reality, then the question really is: where do I make the change? It’s probably tempting to go for the “simplest” option of just changing the code… after all, it’s all in an SCM, and we’re good because we put a comment in describing the change before we generated documentation from the code… so what’s the harm?
The problem with “simplest” is that it’s rarely quite that simple, because changes have side-effects. Are existing tests updated and new code paths verified? When a cache is added to a called API, how does a user know the information might be stale? When an orchestration API switches to a different API because it’s more convenient, how does anyone know the API lineage? And don’t get me started on using runtime log files to work out lineage!
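To make the staleness point concrete: when a hand-rolled cache is added, part of the “side-effect” work is telling consumers the data may be old. Here’s a minimal Python sketch (the class and field names are illustrative, not from any particular framework) of a cached response that surfaces the standard HTTP freshness headers rather than hiding the cache in the code:

```python
import time


class CachedResponse:
    """A cached API response plus the metadata a consumer needs to judge staleness."""

    def __init__(self, body, ttl_seconds):
        self.body = body
        self.ttl_seconds = ttl_seconds
        self.fetched_at = time.monotonic()  # when this copy was taken

    def headers(self):
        # Surface staleness to the caller instead of burying it in the code:
        # Cache-Control says how long the data is considered fresh,
        # Age says how old this cached copy already is.
        age = int(time.monotonic() - self.fetched_at)
        return {
            "Cache-Control": f"max-age={self.ttl_seconds}",
            "Age": str(age),
        }


# A five-minute cache of a customer lookup:
cached = CachedResponse({"customer": "A-123"}, ttl_seconds=300)
print(cached.headers())
```

Without something like this, the only way anyone discovers the data can be five minutes old is by reading the implementation.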
Simplest often just means quickest, and quickest for me, so I can get the ticket resolved. And it’s quicker precisely because the implementation doesn’t flow from the design: keeping both in sync means making the change in two places!
“Not starting from here” means updating the design first and then regenerating the implementation from it.
Assumption #3: The implementation is always more complex than the design.
This is a fair point. If the design is only high-level then sure, it’s not going to have enough detail to code from, but it does show the intent and what the API is aiming to do.
I remember when I first used Maven and discovered the concept of “convention over configuration”: if I followed the naming patterns, I didn’t need to set things up. Later I came across the idea of “opinionated” frameworks… forcing me down certain routes but promising great “freebies” such as metrics, logging, and simplified security.
I guess some developers feel constrained while others feel relieved at less boilerplate code.
But when an organization is developing hundreds or thousands of APIs — what’s wrong with being opinionated? What’s wrong with having constraints, if it means that everything can work together?
Take an example of an API to pull back customer information… it’s used a lot, but the data doesn’t change all that often, so we can cache the result.
- Approach 1. The developer adds caching to the API themselves: they could add ETag checks with 304 (Not Modified) responses, they could set up a Redis cache, or, if their API management system has the option, add a response cache to the proxy. All viable. Each has some differences in performance and scope. The developer weighs up the choices and makes a decision.
- Approach 2. Have the design say “This call can have a 5 minute TTL” and let the “flow” determine how that gets implemented. The developer doesn’t have to make a decision. The developer doesn’t need to write any code or configure the management system.
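As a sketch of Approach 2, the design might carry a single declarative line that the “flow” turns into whichever caching mechanism fits. The fragment below uses a hypothetical OpenAPI-style extension (`x-cache-ttl` is illustrative, not a standard field):

```yaml
# Hypothetical design fragment: the TTL is stated once, in the design.
# The "flow" decides whether it becomes an ETag/304 check, a Redis cache,
# or a response cache on the proxy.
paths:
  /customers/{id}:
    get:
      summary: Pull back customer information
      x-cache-ttl: 300   # seconds — “this call can have a 5 minute TTL”
```

The developer states the intent; the tooling owns the mechanism.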
If all you’re building is a single API, it probably doesn’t matter all that much. But when you need this to be repeatable and done at scale — the developer adding things manually hits some problems:
- It assumes all developers are equally capable and have considered all the relevant factors to make the right choice.
- A variety of implementations, or of ways of configuring caches, makes the estate harder to maintain.
- You need to be able to read the code (or have access to the runtime) to know what’s actually happening.
Often there’s the argument “I need to optimise this”, and there are some valid cases. But the vast majority of the time, isn’t horizontal scaling a better option?
Of all the APIs I see, well over 95% are “simple”: adaptors, embellishment patterns, CRUD operations, simple orchestrations, or routing APIs… all of which play well with the convention-based, opinionated approach of design flowing into implementation.
“Not starting from here” means choosing to give up some flexibility to gain consistency and reliable results.
Assumption #4: We always need to tweak the API runtime.
There’s always a case to monitor and react to how the API runtime is actually performing — but that’s not what I’m thinking of here.
It’s that nagging feeling of not being in control: the sense that I need to be able to jump in and react whenever a problem occurs.
I understand that feeling — and it’s important.
But it’s also important to realize it should be the exception rather than the norm. If it’s the norm, then it probably highlights a deeper problem of poor software quality, rapidly changing requirements, poor design, or poor requirements.
So, where should I start from?
Given the background of the above assumptions, the first question is “Why is my runtime out of sync with the design?”, then “How frequently does this happen?”, which raises the further question “How do I know this has happened?”
Reconciling these is important — but the approach is vastly different if it’s an exception rather than the normal API lifecycle flow.
ignite provides a DevOps dashboard, which shows the running policies by dynamically querying the runtime. These can be checked against expected policies for an API, and an exception report can be made.
You can then either (1) update the design in ignite to reflect the changes made (as long as the changes are good ones), or (2) import a new version of the API from the runtime. Importing new versions isn’t best practice: the runtime often holds little or no metadata beyond what is critical to run, so information such as business taxonomies and classifications is lost. But if out-of-sync runtimes are the exception, this shouldn’t happen often.
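The ignite specifics aside, the reconciliation idea itself can be sketched generically. The function below is illustrative (it is not ignite’s API): it compares the policies the design expects against what the runtime reports, and produces the kind of exception report a DevOps dashboard might show.

```python
def policy_drift(expected: dict, running: dict) -> dict:
    """Compare the policies the design expects with what the runtime reports.

    Returns an exception report: policies missing from the runtime, policies
    added out-of-band, and policies whose settings differ.
    """
    missing = {k: expected[k] for k in expected.keys() - running.keys()}
    unexpected = {k: running[k] for k in running.keys() - expected.keys()}
    changed = {
        k: {"expected": expected[k], "running": running[k]}
        for k in expected.keys() & running.keys()
        if expected[k] != running[k]
    }
    return {"missing": missing, "unexpected": unexpected, "changed": changed}


# Design says: cache for five minutes, verify JWTs. Runtime says: someone
# tweaked the TTL by hand and bolted on a quota policy.
expected = {"cache": {"ttl": 300}, "auth": {"type": "jwt"}}
running = {"cache": {"ttl": 60}, "auth": {"type": "jwt"}, "quota": {"per_min": 100}}

report = policy_drift(expected, running)
```

Whether a given drift entry means “fix the runtime” or “update the design” is exactly the judgment call described above; the report just makes the drift visible instead of leaving it buried in log files.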