Are you prioritising faster delivery with your Agile adoption? Don’t.
Go slower to go faster. Or you will end up going slower.
“Continuous attention to technical excellence and good design enhances agility” — the Agile Manifesto
“No matter how it looks at first, it’s always a people problem” — Gerald Weinberg, The Secrets Of Consulting
“The only constant is change” — Heraclitus
In 1938 Marjorie Courtenay-Latimer, a curator at the East London museum in South Africa, discovered a 127-pound fish that was, until that point, thought to have been extinct for 66 million years, having died out during the Cretaceous period.
The coelacanth has been described as a “living fossil” — a living part of the evolutionary record.
Evolutionary theorists were, for a long period of recent academic history, separated into two schools: those who believed that evolution proceeded gradually over time (figure 1), and those who believed instead that evolutionary change happens through “punctuated equilibria”: long periods of time with no change, followed by a significant geological event causing new species to rapidly evolve (figure 2).
Many contemporary evolutionary academics have reconciled these two schools: it’s now believed that evolution happens on both gradual scales, together with occasional significant step changes (figure 3).
Enterprise software has much in common with biological evolution.
In many older companies, living fossils are still evolving: in some, COBOL skills are being taught to new graduates to carry on developing mainframe software initially written in the 1960s and 70s; in many others, massive monolithic databases built for the client-server architectures of the 1990s and 2000s, along with thick-client user interfaces, still stand at the heart of critical processing.
And the landscape of applications — in typical medium and large enterprises numbering in the hundreds or even thousands — evolves over months and years at a gradual pace through new projects modifying existing components, as well as undergoing occasional significant step changes with transformation projects. We’ve seen some of this in previous chapters with the 2-dimensional project and product model.
In this chapter we are going to look at how enterprise software can evolve in a more guided way to meet the goals of Better Value Sooner Safer Happier in this environment, and see how to radically improve software agility, meeting the increasing demand for change while maintaining high quality.
We’ll see how a continuous focus on excellence in architecture and design (at both enterprise and application scale), as well as in coding, build, deployment and testing, will allow both.
And we’ll see how a focus on the human aspects of software development is at the heart of any agile or DevOps initiative.
Antipattern 1: as teams become feature factories, neglected aspects of the underlying technology slow them down — or fail to speed them up
In many organisations we have seen, IT leaders are focussing on “agile” for work management, with a separate focus — sometimes a different department — looking at “rolling out” DevOps (in some limited senses) — for modernising technical ways of working.
The “agile” transformation initiatives typically focus on a combination of trying to implement more efficient, predictable and transparent delivery of business features through work management practices: often using a combination of SAFe or SAFe-like outcome or — more likely — output decomposition and planning practices at portfolio level; and Scrum-like practices at team level. Often they’re focused on implementing or making best use of tools like Jira, Rally, Version One or Microsoft Team Foundation Server.
The “DevOps” initiatives typically focus on introducing or consolidating more tooling, and we’ll look at that in a later antipattern.
But often with the exception of looking at work management tools like those mentioned above, the agile and DevOps initiatives are separate and distinct — and the agile initiative has little or no focus on actual software.
The split between business and technology departments in enterprises often leads to a deep misunderstanding of the nature of software development work: that the only valuable work is that spent directly on implementing business features. Portfolio and team boards get filled with business features — usually decomposed into work items or product backlog items or — most commonly — “user stories” which often don’t have clear users.
Occasionally there is a break in the clouds, and the technology department is able to convince their business sponsors to spend money on a big technology-only project to clean up old out-of-favour technologies or use the opportunity of new ones: a re-platforming off years-old (or sometimes decades-old) technologies, or on to a favoured vendor platform; a move to the cloud, or ‘API-enablement’. Or more recently, implementing ‘DevOps tools’.
As the principle from the Agile Manifesto quoted above states, this shouldn’t be an occasional concern: there needs to be continuous attention to technical excellence and good design. And continuous attention often requires work and time.
Gojko Adzic writes about the analogy of dental hygiene: you don’t schedule time to brush your teeth. There may be many aspects of technical excellence and high quality design that simply happen as part of high quality teams building high quality software, where the time taken on top of delivering features is so small that it is not worth mentioning, let alone separately tracking work or booking time against.
This is particularly applicable when building greenfield applications: teams should not have to plan for the time to use good practices such as test-driven development and continuous integration separately from the actual work of delivering business features.
However, in a typical enterprise brownfield application estate this is more challenging: significant automated test suites might not exist as a safety net for rapid frequent change and automated pipelines might only build and deploy part of the application (often database changes are handled out of band, for example).
The fastest companies, those in the high or elite performers in the State of DevOps Report, realise that Continuous Means Continuous: that a portion of all software development effort, week by week and even hour by hour, must explicitly be dedicated to improving the quality of the software written.
And some of these require more effort than simply brushing your teeth: they require work that must be scheduled — and not deferred.
We have frequently seen teams hide technical improvement work from business sponsors or product owners: rather than being transparent about the need to do improvement work, instead they are opaque about spending time and effort planning and executing on ‘version 2’ of an application or framework, or moving it to modern infrastructure, or some other technical improvement. They are hiding the work because they fear that their business sponsors or product owner will insist that it is continually deferred because of high priority business features (typically ones that are funded in a project business case).
Technical excellence in a brownfield site
Another confounding factor with paying attention to technical excellence is that much of the advice, many of the books, articles and techniques related to software design quality and technical excellence are targeted at greenfield code, rather than the state of most large enterprises’ software inventory. This inventory is typically a sizeable amount of legacy code spread across hundreds or thousands of line-of-business applications which get continual requests for new features.
Many of these books were written in the 2000s when most of the software being written in enterprises was greenfield: automating processes that were manual; developing new channels to market (the internet!); or, before the Great Financial Crisis, investing significant amounts in rewriting old systems from scratch.
The state of the enterprise application landscape today, in most medium or large enterprises, is such that a significant amount of the processes that could be automated have been; modern channels to market have all been built; and in tighter financial circumstances, old systems need to be exploited rather than replaced wholesale in a single planned effort.
An exceptional book here is Michael Feathers’ Working Effectively With Legacy Code: Feathers defines legacy code as code without [automated] tests: the point being code without tests is hard (read costly) to modify. Conversely with a safety harness of automated tests it becomes cheaper and easier both to add new features and to refactor or more radically re-engineer to a better design.
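Feathers’ safety-harness idea can be sketched concretely. The snippet below (the `legacy_discount` function and its behaviour are entirely hypothetical, invented for illustration) shows characterisation tests: before refactoring untested legacy code, you pin down the behaviour it has today, whatever that behaviour is, so that subsequent changes can be made cheaply and safely.

```python
# A hypothetical legacy function: no tests, unclear intent,
# but its current behaviour is what production depends on.
def legacy_discount(order_total, customer_years):
    d = 0
    if order_total > 100:
        d = 10
    if customer_years > 5:
        d = d + 5
    return order_total - (order_total * d / 100)

# Characterisation tests: they record what the code *does*,
# not what anyone believes it *should* do. Once they pass,
# they form the safety harness for refactoring.
def test_small_order_no_discount():
    assert legacy_discount(50, 1) == 50

def test_large_order_gets_10_percent():
    assert legacy_discount(200, 1) == 180.0

def test_loyal_customer_adds_5_percent():
    assert legacy_discount(200, 10) == 170.0
```

With these tests in place, the conditional logic can be re-engineered towards a better design with immediate feedback on any behavioural change.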
Recognising technical excellence as a distinct form of work
“The Third Ideal is all about enabling improvement and innovation. This is the basis of the dynamics of a learning organisation where learning is a part of daily work and where everyone prioritises improvement of daily work over daily work itself. Why? Because winning in the marketplace means outlearning the competition” — Gene Kim, The Unicorn Project
Many of the recent books on enterprise agility, DevOps, and faster software delivery, make clear that there are multiple types of work that teams should distinguish, be aware of, balance the priorities of, and, where appropriate, measure. Examples are:
- Gene Kim et al — The Phoenix Project — 4 types of work: Business Projects, IT Operations Projects, Changes, Unplanned Work
- Philippe Kruchten — 4 quadrants — feature / architecture / defect / technical debt
- Mik Kersten — Project to Product — 4 types of work — feature / defect / risk / debt
- Dan Terhorst-North — Software Faster — 3 types of work — feature / discovery / kaizen
Technical debt has become the catch-all term for the characteristics of a stock of software that slows down future development. Ward Cunningham originated this term to describe decisions that teams made that involved short term compromises in design quality which might slow down long term development of software, but which would speed up things in the short term in order to deliver software early or on time.
Over time the term has evolved. Martin Fowler wrote a blog entry about the technical debt quadrant where the two dimensions of technical debt are described as reckless vs prudent, and deliberate vs inadvertent. Where Ward Cunningham’s original definition applied only to prudent and deliberate decisions, Fowler’s quadrant expanded the meaning to a broader set of design decisions.
Even this definition of technical debt, however, is not sufficient to cover the balance of work item types that The Unicorn Project refers to with “elevate the job of making the work faster” in its third ideal, the enablement of improvement and innovation. Technical debt looks only at past decisions, but the work to be done to make delivering value faster may involve new opportunities too: new tools or techniques that weren’t available when the software was originally written.
Dan Terhorst-North’s split of work item types in Software Faster takes this into account: instead of labelling improvement work ‘technical debt’, it is labelled kaizen — continuous improvement. Regardless of whether this work involves reflecting on and improving prior design decisions, or taking advantage of new possibilities for improvement, balancing these work types is key.

Antipattern 2: agile product management not agile software development — at team or enterprise level
A significant proportion of the agile or lean transformations we have seen concentrate on predictable feature work management rather than iterative, incremental software construction, or, worse, try to combine upfront traditional architecture and design with Scrum, Kanban or other methodologies.
As Dave Farley points out, the Scrum guide does not mention the word software once. The strength of Scrum is its approach to iterative, incremental product delivery; it says nothing about how software is engineered. The downside is that, often, agile transformations are run from project management or portfolio management departments.
Neglecting the technical nature of software delivery: team level
Two of the original agile manifesto signatories have written separately about the dangers of this at team level: Ron Jeffries with Dark Scrum, and Martin Fowler with Flaccid Scrum, covering both concerns with command-and-control approaches to work management in general, and a lack of understanding of the nature of engineering software in an iterative and incremental world in particular.
In software development there are two primary dimensions of uncertainty: what you are developing — typically called the “requirements” or “capabilities” or “features” — and how you are going to develop it: the “architecture” or “design”.
Traditional waterfall or heavyweight methodologies try to deal with this uncertainty by performing paper exercises before coding starts: first defining the what (e.g. in Business Requirement Documents) and then the how (combined with a more detailed what in Functional Specification Documents, or High Level Designs followed by Detailed Level Designs), and then putting these documents under strict change control.
The lightweight waterfall
A characteristic of many modern enterprises we have observed is that they are following methodologies that could be characterised as ‘lightweight waterfall’.
Very few modern software development projects follow a formal heavyweight process, or one that would be recognised as such by the original authors of the Agile Manifesto. At the time — the early 2000s — there was a growing reaction to formal methods that had been developed over the previous two decades such as SSADM (from the same UK Government department that gave you PRINCE2), Shlaer-Mellor, OMT, and the attempt to unify several of them, the Rational Unified Process.
The genesis of the agile software development movement was a reaction away from these heavyweight formal software construction methodologies which contained detailed step-by-step descriptions of templates to fill out and roles to play during phases of software construction — usually many pieces of paper or their electronic equivalent, prior to beginning actually writing code.
Prior to the 2001 Snowbird conference, the alternative methodologies — such as XP, Scrum, DSDM — were collectively known as lightweight, however this term wasn’t considered to cover the depth and discipline of these methods. At a session at the conference facilitated by Alistair Cockburn, after ruling out a number of alternatives one by one such as ‘Conversational Development’, ‘Agile’ was chosen (Cockburn would have preferred to have stuck with both of the final two names left on the board, ‘Adaptive & Agile’).
However, typical modern enterprise software development processes are now a faint shadow of the heavyweight methodologies, whilst still retaining their sequential structure. Document templates might exist — sometimes specific to a small department — but formal processes have either been lost entirely, or are not enforced.
This ‘lightweight waterfall’ is typically sustained in enterprises, not to attempt to ensure quality of software delivery — typically the nod to software quality is to have a ‘test strategy’ or ‘test plan’ artefact — but to provide an illusion of predictability. Because it retains the concept of sequential phases from the heavyweight methodologies, it aligns with PRINCE2 or PMI typical project phases — or if not directly with the phases, then at least with a phased approach, with milestones and interim artefacts.
Even when agile work management practices are nominally adopted — through Scrum or SAFe, typically — they are combined with sequential aspects of the lightweight waterfall: water-scrum-fall or water-scrum at least — with at the very least Big Requirements Upfront (BRUF) and likely — as discussed later, possibly imposed by the enterprise architecture and organisational structure — Big Design Upfront (BDUF) too.
However, the notions of software quality and software construction practices are left to teams — it’s not part of the formal process or practices any more. There’s an assumption that magic happens in translating “requirements” into working software.
Embracing change and uncertainty
Kent Beck’s subtitle for eXtreme Programming Explained was embrace change. And Mary Poppendieck notes that the ability to absorb a late change in requirements “is a competitive advantage”. More recently, the approaches to product development described in The Lean Startup and broadened to enterprises in The Startup Way make it clear that a modern enterprise whose functionality reaches customers via the web and smartphones had better be extremely rapid at reacting to changing or novel customer demands — and ideally anticipate them — if it wants to stay in business.
These demands of business and customers can’t be satisfied with a process where concept-to-cash or concept-to-learning takes 18 months. Many of the demands can’t be satisfied if the process takes 6 months, and in some markets and for some features the turnaround time needs to be weeks, or even days.
So agile software development methods allow rapid change — but what about changes we know are complex and are going to take time?
The Cynefin analytical framework is a good way to demonstrate that both what and how are, for many software development initiatives, hard to pin down upfront without exploration.
An essential read here is Liz Keogh’s blogpost Estimating Complexity — here Liz introduces a scale of uncertainty:
1. Just about everyone in the world has done this.
2. Lots of people have done this, including someone on our team.
3. Someone in our company has done this, or we have access to expertise.
4. Someone in the world did this, but not in our organization (and probably at a competitor).
5. Nobody in the world has ever done this before.
(Liz colours this list green to red)
This uncertainty can act on multiple dimensions (e.g. working with specific stakeholders), but when you apply it to the what in particular, more uncertain business requirements are likely to have an uncertain how as well (at 5 on the scale they are not “requirements” at all, but hypotheses that some business feature or capability might have value). The most valuable business features tend to be at numbers 4 and 5 — and often, if not always, it is not obvious how these features should be architected or designed most efficiently, both to deliver value soon and, ideally, to not slow down future delivery.
Simon Wardley’s mapping techniques suggest (amongst other things) that for business or technical capabilities lower down the scale — in particular at 1 and 2 — there is little value in developing new custom software: buy or rent the capability from someone who has done it already, and focus your scarce software development resources on building differentiating features using agile methods.
Neglecting the technical nature of software delivery: enterprise level
Above the level of teams and projects, the enterprise architecture processes we have seen are typically not lean or agile. Many organisations we have seen which have attempted to install or support agile processes at team level, and agile work management processes — even agile business outcome driven ones as described in Chapter 4 — have still not considered enterprise architecture processes and enterprise architects as being key to the transformation.
In most companies we have seen, the enterprise architects are technically oriented, rarely looking at the architecture of the business (the top-to-bottom enterprise architecture seen in frameworks such as Zachman and TOGAF). Typically, enterprise architecture functions are concerned with some form of technology governance, covering technical standards and software and infrastructure development policy, and are only involved in software development initiatives in the early stages: reviewing design documents for adherence to policies or standards.

Even when teams are taking months rather than minutes to deliver features, early design reviews in the context of the lightweight waterfall are close to futile: application delivery teams discover complexities during actual software construction that mean designs that were presented for review by some kind of Design Authority or Architecture Board rarely resemble the final delivered software.
Antipattern 3: enterprise agile organisational patterns are focusing on organisational structure and forgetting about architecture
A common enterprise architecture pattern: optimising for skillsets or costs
The enterprise architecture and organisational design of a number of enterprises we have seen is often optimised for skill or cost rather than flow: component teams and even component departments rather than feature teams. Craig Larman has written about the challenges with component teams: in particular self-organisation around both the what (stakeholder value) and how (architecture and design) is challenging to say the least for these teams.
In particular, a significant amount of the design for valuable features needs to be performed outside the teams — typically by end-to-end or enterprise solution architects, across these components, before the teams begin work on them. The teams lose autonomy, and, to a significant degree are pressured to both estimate and commit to timeframes in order to support synchronised delivery.
As mentioned above, the technical practices that form part of eXtreme Programming, as well as those detailed in The DevOps Handbook, and which are correlated with high performance by the DORA State of DevOps Reports and Accelerate, lose significant effectiveness when a component architecture is in place — in particular for improving flow.

A number of large consultancies have begun to sell business agility transformations based on an operating model that was briefly used by Spotify and written up by Henrik Kniberg using the terms tribes, squads, chapters and guilds.
The positive side to this type of transformation is a focus on customer-oriented value streams, and aligning teams around delivering value.
The challenge, however, is that a naive reorganisation of people and teams, without a focus on how the underlying organisation is able to rapidly, independently deliver value based on the existing and evolving technology landscape — the enterprise application architecture — will lead to suboptimal improvements at best.
At worst it adds another layer of coordination and dependency management. The new value stream squads working with an application architecture where features require changes across a number of components end up having to coordinate with other value streams.
Thierry De Pauw puts it this way on Twitter:
Where all hell really breaks loose is when management decides to reorganise things so the organisation is not compatible anymore with the architecture […] Both sides gonna push on each other. If your software is around long enough, I would bet on the software.
Antipattern 4: focusing on installing tools not developing and rewarding people
“No matter how it looks, it’s always a people problem” — Gerald Weinberg
DevOps transformations are only focusing on tooling or automation
The acronym CALMS has been used as a reminder of the key elements of DevOps that an organisation should focus on to get transformative value from adopting DevOps ways of working: Culture, Automation, Lean, Metrics and Sharing. Patrick Debois, the founder of the DevOps movement has stated that “DevOps is a human problem”. This echoes Gerry Weinberg’s much older quote above.
Modern tooling — for example build and release automation, source code control, binary artefact repositories — provides a foundation for much of the collaboration and flow efficiency that underlies the common efforts of DevOps and agile and lean software development.
But many technology executives believe that paying consultancies to install a modern tool chain is the end, rather than just the foundation, of the DevOps story.
As an example, efforts are often focused on the tools supporting automated build and deployment, often mislabelled as “CI/CD” when most teams using them are practising neither continuous integration nor continuous delivery, and certainly not continuous deployment.
This “little DevOps” focus on individual tools and integrated tool chains — as with many other antipatterns we have seen — can only hope to offer incremental rather than radical gains in efficiency.
The foundations of DevOps, Continuous Integration and Continuous Delivery (or even Continuous Deployment) are based on skills and a culture focused on cooperation and collaboration.
Tools can only hope to support the skills and culture, and at best to encourage them, certainly not replace them.
The breadth of “big DevOps” — for example the three ways elaborated by Gene Kim et al across The Phoenix Project and The DevOps Handbook — for rapid collaboration and feedback from business to customers through software development and operations, requires an organisation-wide focus on collaborative practices. For typical waterfall and siloed technology departments, moving to this requires a significant cultural shift and skill uplift.
No senior positions for hands-on software developers — “when I grow up I want to be an architect”
Many enterprises are failing to recognise individual contributors at senior levels. To progress in seniority — in job title, in grade, and in reward and recognition — in many large companies software developers have had to stop coding and take up either line management or PowerPoint.
The first track — progressing to manage people whose job you previously did — is a well-worn track to seniority, and is an aspiration for some. For those with a software engineering mindset in particular, the transition from hands-on engineer to managing engineers can be a challenge, but there are a number of high quality resources to help, for example Michael Lopp (who blogs as Rands in Repose) has written Managing Humans.
However, management isn’t for everyone and it shouldn’t be the only choice for engineers wanting to further their career in both reward and recognition.
The second track — moving ‘upwards’ from hands on coding to ‘design’ or ‘architecture’ roles where the output is paper or its electronic equivalent — is also something we continue to observe in many organisations.
But as we’ve seen earlier in the chapter, in modern software development there isn’t anything more advanced about paper exercises in ‘design’ or ‘architecture’ before iteratively beginning to write software.
Given the continuing rapid advances in technology there’s often a significant danger in having those who don’t develop or operate software, or haven’t for a number of years, giving command-and-control instructions — in the form of architecture and design artefacts — to those who do.
Testers are replaced by robots
One of the key practices of modern software development is automated testing. It is foundational: automated functional tests that protect against new functionality causing defects in existing features (“regressions”), and that can run repeatedly as small changes are introduced to software, are essential to a move to small-batch work and continuous delivery. None of the practices around agile software development, DevOps and Continuous Delivery have a fraction of their effect without comprehensive automated functional testing.
And Test-Driven Design — otherwise termed Behaviour-Driven Design or Example-Guided Design — is a core practice for driving agile, simple, maintainable software design that can grow over time. Michael Feathers has made it clear that testing and design are complementary: “[i]f it’s difficult to write a test for a code change, your code could be more modular, and the modules should be relatively small. […] Bugs are a symptom of misunderstanding. With modularity, quality follows”.
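The test-first dynamic can be sketched in a few lines. In the snippet below (the `normalise_iban` function name and its behaviour are illustrative, not from any particular codebase), the tests are written first as executable examples of the desired behaviour, and the implementation is then the simplest code that makes them pass.

```python
# Test-first: these examples were written before the implementation,
# specifying behaviour concretely rather than in a requirements document.
def test_iban_spaces_are_ignored():
    assert normalise_iban("GB82 WEST 1234 5698 7654 32") == "GB82WEST12345698765432"

def test_iban_is_uppercased():
    assert normalise_iban("gb82west12345698765432") == "GB82WEST12345698765432"

# The simplest implementation that makes the examples pass:
def normalise_iban(raw):
    return raw.replace(" ", "").upper()
```

Because each example is small and concrete, the implementation stays small and modular; when a test becomes hard to write, that is the design feedback Feathers describes.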
However a significant people-oriented antipattern we have seen is where organisations see this as an excuse to replace testers with software: to assume that because software developers can write code that executes tests, then testers — who, the assumption is, only exist to execute tests — become surplus to requirements.
Michael Bolton and James Bach, proponents of the Context-Driven school of testing, use precise language to warn against the dangers of this approach.
They distinguish testing — which they describe as an essentially human activity in the same way that programming is — from checking: the potentially repetitive task of setting up, acting and asserting on behaviour, which is often performed by “manual” testers and which can be automated in software.
Bolton and Bach assert that the human element of testing is exploring a built product or feature in operating context, in order to assess and assert its quality through experimentation — the unplanned activity often termed exploratory testing, though Bolton notes that the adjective here is unnecessary, like “vegetarian cauliflower”.
They use Herb Simon’s term satisfice — a portmanteau term combining satisfy and suffice — to describe the activity to make it clear that testing activities can never completely prove that a product or feature is entirely defect free; they can only be used to communicate to people what exploration and assessment has been performed, and based on that assessment to advise about the risk of releasing the feature to users.
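The testing/checking distinction can be made concrete. Below is a sketch (the `Account` and `transfer` names are hypothetical, invented for illustration) of a check: a scripted set-up, action and assertion that a machine can repeat on every build. The exploration that surrounds it stays human.

```python
# A hypothetical unit under test.
class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer(src, dst, amount):
    if amount <= 0 or amount > src.balance:
        raise ValueError("invalid transfer amount")
    src.balance -= amount
    dst.balance += amount

# A "check" in Bolton and Bach's sense: set up, act, assert.
# A machine can run this unchanged on every commit.
def check_transfer_moves_money():
    a, b = Account(100), Account(0)
    transfer(a, b, 40)
    assert (a.balance, b.balance) == (60, 40)

# What cannot be scripted in advance is the question a tester asks
# next: what about a transfer to the same account? concurrent
# transfers? a balance of None? That exploration is testing.
```

The check only re-confirms what was already anticipated; the tester’s job is to find what wasn’t, and then turn those discoveries into new checks.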
Liz Keogh puts it bluntly: good testers have the ability to wear an Evil Hat, and to try to break (and usually succeed in breaking) new features that have been developed.
Antipattern summary
The common thread across these antipatterns is a widespread lack of executive comprehension of the complex nature of software development: one of the most complex types of knowledge work, and one which has intrinsic importance to the entire world economy.
In 2011 Marc Andreessen noted that “software is eating the world”. Elements of human activity which previously operated using purely human, or mechanical, or electronically supported processes — or ones where software had been incidental or secondary — were being or had been taken over by software, and usually networked software: television, telecommunications, newspapers, retail, automotive, and so many more.
For enterprises — older ones in particular which have built up technology departments over decades, sometimes evolved from “data processing” departments — the growing omnipresence of software in everything they do, the dawning of the fact that their business is technology — has rarely coincided with a significant change in the relationship of “technology” departments with “business”. In many, the Chief Information Officer or Chief Technology Officer does not even report directly to the Chief Executive Officer, but instead is seen as part of “operations”.
So the nature of technical excellence remains devalued in two main ways:
- The value of, and time needed for, continuous technical work and outcomes (which at the very least ensure a sustainable pace of delivery, and at best radically improve speed to market) is missed: from daily practices, through explicitly planned chunks of roadmaps, to strategy and architectural outcomes
- The value of experienced, skilled and often highly talented technologists is missed — from assuming that tools can replace smart people, through recruitment and development practices, to misunderstanding the value of seniority in hands-on engineering roles
For companies that grew up as software was beginning to rapidly eat the world — the dividing line is somewhere in the mid-2000s — technology is the business. Those companies — often sharing markets with those older enterprises, and often started by engineers — are more likely to understand the crucial nature of technical excellence and of the human aspects of engineering.
A number of patterns have emerged to counter this — to leverage many decades worth of work on how the most efficient and effective software development management differs from pure project management, and even from modern product development, by deeply understanding the nature of software development — design, construction and testing at the level of individual applications and the broader enterprise application landscape — and the nature of the people who are talented at doing it.
Pattern 1: disciplined, balanced, continuous focus on technical excellence alongside feature work
Balancing improvements with delivery
The key message of Mik Kersten’s book Project To Product is a focus on long-term flow of value through a combination of distinguishing and then deliberately balancing types of work.
As a bare minimum, organisations — not just teams — should hold themselves to a proportion of technical improvement work (described in more detail below) — positive technical excellence as well as paying off prior technical debt — of at least 20 per cent. The balance should be committed to long term, and can and probably should be measured over weeks or a small number of months, so that if one iteration or month requires a higher focus on business features, the proportion evens out over the next one or two periods.
Project to Product describes this metric of differing work types as flow distribution. Mik Kersten describes in Chapter 4 of Project to Product:
…our target debt work for each release was 20% of flow distribution, a number that we based on our own historical flow metrics as well as, and similar to, best practices reported by others.
The balance should form part of performance indicators for senior managers in the organisation. Incentives matter, and if managers are solely incentivised to meet feature deliveries, the longer term stewardship of the enterprise’s application estate will fall by the wayside. If there is a conscious trade-off to be made (as Mik describes again in Chapter 4 of Project to Product) it should be transparent that this will affect future productivity.
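To make this concrete, here is a minimal sketch of measuring flow distribution and checking the technical-improvement floor over a rolling window of iterations, so that one feature-heavy sprint can be evened out by the next one or two. The work-item shape and type names are assumptions for illustration, not from any specific tool.

```python
from collections import Counter

def flow_distribution(items):
    """Proportion of completed work items by type (e.g. feature, debt, risk, defect)."""
    counts = Counter(item["type"] for item in items)
    total = sum(counts.values())
    return {work_type: count / total for work_type, count in counts.items()}

def meets_debt_floor(iterations, floor=0.20, window=3):
    """Check the technical-improvement proportion over the last `window` iterations,
    rather than demanding that every single iteration hits the floor."""
    recent = [item for iteration in iterations[-window:] for item in iteration]
    return flow_distribution(recent).get("debt", 0.0) >= floor

# Three iterations: the first is feature-heavy; the later ones even it out.
iterations = [
    [{"type": "feature"}] * 9 + [{"type": "debt"}] * 1,
    [{"type": "feature"}] * 7 + [{"type": "debt"}] * 3,
    [{"type": "feature"}] * 7 + [{"type": "debt"}] * 3,
]
print(meets_debt_floor(iterations))  # True: 7/30 ≈ 23% over the window
```

The point of the rolling window is exactly the transparency described above: a single iteration may legitimately dip below 20 per cent, but the trend should not.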
Dominica de Grandis’ book Making Work Visible is an essential read to support the transparency required to manage this balance.
This approach requires a strong relationship between technology and business. It will be a difficult long term commitment if there is a legacy ‘order taking’ relationship between business and technology. This style of relationship is still widespread in many enterprises.
This relationship matters at team level too: typically what we have seen as a high functioning team pattern is that a good product owner with focus on the stakeholder value works alongside a strong tech lead (what Disciplined Agile calls the Architecture Owner, or in eXtreme Programming the XP Coach) to understand how technical work and feature work need to balance.
In addition, the source of funding for the improvement work must be agreed with the company's finance department. In particular it must be agreed whether the work tracked as technical improvement is capitalisable expenditure (CapEx) or operational expenditure (OpEx) — a distinction which is significant in a number of organisations. There is no clear-cut answer: companies can either see technical improvement as a tax on the cost of doing business and convince their finance departments to treat it as part of capitalisable software development work, or account for it distinctly as operational expenditure.
Once the 20 per cent (at least) time is agreed, there are a number of effective patterns to use the time:
- ‘DevOps’ days
- Hackathons
- Improvement rota (what James Shore calls the ‘Batman’ role, after the military role not the DC Comics character)
- Generalising specialist lead
Teams should be empowered to pick the best pattern for their context. Agile coaches can help here.
The technical practices recommended in Accelerate and the State of DevOps Report are a good place to start.
Your robot friend
“Linting” rules are available in a plethora of open source and commercial tools and should be a baseline for any enterprise software component — we have seen that most of the time they aren’t.
These tools — examples include ESLint for JavaScript/ECMAScript; FxCop for C#; SpotBugs and Checkstyle for Java — help maintainability by at the very least enforcing coding conventions, some of which can be seen as design quality conventions — for example, restricting the number of parameters in method calls.
They can and should replace written coding standards or style guides, and can to a degree replace some aspects of formal code reviews.
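As an illustration, a minimal ESLint configuration fragment enforcing the kind of design-quality convention mentioned above. The rule names are ESLint built-ins; the thresholds are illustrative and should be tuned to the team's context.

```json
{
  "rules": {
    "max-params": ["error", 4],
    "complexity": ["warn", 10],
    "eqeqeq": "error"
  }
}
```

Checked into the repository alongside the code, a configuration like this becomes the executable replacement for a written style guide.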
Experimenting with continuous improvements
In chapter 2 we talked about slow and steady agile adoptions versus large scale transformations: improvement through many small J-curves (or Kübler-Ross curves) versus one big transformation. The same approach applies to taking on practices around technical excellence and good design within application delivery teams.
In particular, while good practices exist, there are few if any best practices: most of the practices which embody technical excellence and good design at the very least require tuning to context. And most of them are best adopted by teams with the help of software development professionals with experience in using them.
Coaching success
Most good practices for highly maintainable software — such as Test-Driven Design/Development (or Behaviour-Driven, or Example-Guided); Domain-Driven Design; SOLID principles (or not); Refactoring (a kaizen practice — anything bigger and it’s restructuring); Trunk Based Development / Continuous Integration; Continuous Delivery; Observability; Testing in Production — are not ones which can be implemented by tooling (though some tooling can help).

Here teams and organisations should either take advantage of internal expertise and experience, or bring in external specialised agile software development coaching: unfortunately coaches with these skills are considerably rarer than agile process coaches — again the split between agile-as-work-management (or at best agile-as-product-development) and agile-as-software-development is evident.
The pattern of design coaching used by eXtreme Programming (XP) is to have the team continuously coach itself on appropriate technical practices with “promiscuous” pair programming. As previously mentioned, XP has an explicit role of Coach which effectively seeds the team with experience — a collaborative rather than command-and-control approach to design quality.
As mentioned in the antipattern description, much of the literature around these technical practices assumes greenfield development. There are a number of specialist coaches, and a few books, with experience of applying these techniques to existing code: for books, as referenced above, Michael Feathers’ Working Effectively with Legacy Code is the best starting point — specifically, starting with surrounding the legacy code with sufficient automated test coverage to give the confidence to restructure.
In addition Ola Ellnestam and Daniel Brolund’s The Mikado Method is a synthesis of technical, visualisation, and planning techniques for evolving a legacy codebase towards technical and design excellence.
The Toyota Kata techniques discussed in a later essay in this series are highly applicable here — with teams learning by doing, with coaching support if needed.
Measuring success
One of the underlying themes of this book is the focus on sustainable flow: the Sooner of the title. The primary measurement of success for improvements in technical excellence and good design is therefore the sustained speed of delivery — if possible, cycle time.
The DORA State of DevOps Reports and Accelerate concentrate on a subsection of the concept-to-cash (or concept-to-learning) value stream: the time taken from code check-in to production, and it’s a good metric to focus on.
Many improvements we discuss in this chapter will help with that particular metric, but optimising this — getting this as fast as possible, including its component elements such as build time, where radically fast builds including automated test execution are possible — may simply expose further, much more material constraints.
We have talked earlier in the book about the paradox of urgency — but we will see later in this chapter that enterprise architecture history and decisions will have a significant impact on feature flow outside of the improvements that individual teams can make on the check-in to production time.
We have not seen any out-of-the box tools to measure the DORA Service Delivery and Operation metrics, but we have seen organisations build these metrics with more or less ease, depending on the complexity of their development and operations tooling estate.
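For organisations building these metrics themselves, the core lead-time-for-changes calculation is small. A minimal sketch, assuming check-in and deployment timestamps can be extracted from version control and deployment tooling (the record format here is an assumption for illustration):

```python
from datetime import datetime
from statistics import median

def lead_times_hours(records):
    """Lead time for changes: hours from code check-in to running in production.
    Each record is a (commit_timestamp, deploy_timestamp) pair in ISO 8601."""
    return [
        (datetime.fromisoformat(deployed) - datetime.fromisoformat(committed)).total_seconds() / 3600
        for committed, deployed in records
    ]

records = [
    ("2023-05-01T09:00:00", "2023-05-01T15:00:00"),  # 6 hours
    ("2023-05-02T10:00:00", "2023-05-03T10:00:00"),  # 24 hours
    ("2023-05-03T08:00:00", "2023-05-03T20:00:00"),  # 12 hours
]
print(median(lead_times_hours(records)))  # 12.0
```

The hard part in practice is not the arithmetic but reliably joining commits to production deployments across a heterogeneous tooling estate — which is why organisations find this easier or harder depending on that estate's complexity.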
The other metric to look at — one which will have a secondary impact on Sooner — is quality, or Better. Gerry Weinberg defined quality as “value to some person”; Michael Bolton and James Bach augmented this to “value to some person who matters”.
With design quality, the people who matter are the team modifying and extending the existing software. Maintainability is key: understanding code from individual lines and functions up through the nested modules of the design.
Tools such as SonarQube allow a high quality data visualisation of at least some aspects of code level design quality, in addition to spotting security vulnerabilities, potential bugs, and measuring code coverage. We have seen significant success in implementing these in a phased approach: start by visualising code quality and surfacing it to teams, and then have teams hold themselves to a quality bar. SonarQube, for example, helps to do this by distinguishing stock and flow of code, and allows teams to set a quality bar on the flow of code (i.e. new or modified code) so that continuous improvement is supported.
Pattern 2: understand the history of agile and go beyond Scrum in your agile fluency — and consider how to make enterprise architecture lean and agile
Diana Larsen and James Shore introduced the Agile Fluency model in 2012 as a way of tracking and planning an evolutionary approach for teams and organisations to adopt layers of agile approaches.
Fluency level 1 is solely concerned with agile work management: “agile fundamentals” as Larsen and Shore describe it; “it’s a great way to demonstrate success and create buy-in for further investment”.
Team level software development excellence
Agile fluency level 2 is where significant benefits accrue from going beyond this. Larsen and Shore talk explicitly about how the complementary approaches of Extreme Programming technical practices and some of the technical aspects of DevOps go beyond the initial “focusing” benefits of pure work management (e.g. Scrum approaches) and into much more benefits of “low defects and high productivity”.
Building on our initial pattern on balancing technical excellence, this pattern asks us not to treat technical work and feature work in software delivery as two entirely separate concerns. Extreme Programming and its advocates understand that to deliver software agility, you need to start by folding lightweight software practices into all of your process. The cutting edge of DevOps — characterised by mouthful extensions like BizDevProdOps — is the same.
Many of the practices we mentioned above — in particular Behaviour Driven Development, but also Continuous Delivery, as well as techniques that apply more to product management such as User Story Mapping — need to be applied in combination continuously — from high level definitions of a Concept to delivering value (Cash or Learning).
The practices are not just about quality (Better) but, when practiced in combination, can support radically faster flow (Sooner) as well as providing some of the foundations for higher job satisfaction (Happier) — typically through supporting Dan Pink’s triplet of autonomy, purpose and mastery.
These technical practices — which also form a crucial part of a Continuous Delivery approach as described in Jez Humble and Dave Farley’s book, and are statistically correlated with high performance in the DORA State of DevOps reports and Accelerate — are largely focused at team and individual application level. But we need to look as well at the interaction of the application estate.
Enterprise level architecture development excellence
As we saw above, existing enterprise architecture patterns in organisations — particularly heavily layered architectures and their corresponding organisational structure — are significant impediments to flow, and can significantly impact the effectiveness of team level technical practices.
Modern enterprise architecture practices should be analogous to the business outcome driven approaches described in Chapter 4.
In particular — and following Dan Terhorst-North’s exhortation in Scaling Without A Religious Methodology — enterprise architecture practices should focus on faster flow as a primary strategic driver. This is contrary to many if not all of the enterprise architecture strategies we have observed in the past: the stated or implicit primary goal of traditional enterprise architecture functions has been reduction of duplication — with a typically qualitative, rather than measured, impact of reducing total cost of ownership for the enterprise.
A pivot of enterprise architecture strategy — relegating reduction of duplication to a secondary concern behind architecting for flow and autonomy, not just in individual applications but in the interplay of the enterprise application estate — is counterintuitive to many traditional architects.
These architecture outcomes should go onto the portfolio roadmap in the same way that business outcomes do.
Kaizen — continuous improvement — outcomes, ones that gradually evolve the architecture, should be ideally attached to business outcomes: e.g. if we deliver this set of features this quarter, we’ll do them the right way and evolve the enterprise architecture landscape towards an architecture that better supports team autonomy. We will see examples of how to do that in the next pattern.
Kaikaku — step changes in architecture, large rewrites — will need a separate business case in most organisations, and will need to live as a stand-alone outcome. Sometimes these are necessary, e.g. when old technologies have gone entirely out of support or have become unmaintainable.
The mission of enterprise architecture
Brenda Michelson recently proposed a modern mission for an Enterprise Architecture practice:
“Ensuring the underlying health of the enterprise technology ecosystem, including systems, processes, services, platforms, products and people”.
If we see Design Authorities and Architecture Boards reviewing project designs as an antipattern — an artefact of both the heavyweight and lightweight waterfalls, and a futile exercise as we move towards Continuous Delivery — then what is the alternative?
Continuous Enterprise Architecture as a complement to Continuous Delivery may sound like either a contradiction or a nightmare, but the practice of ensuring the underlying health of enterprise technology involves a twin approach of strategy and tactics. Gregor Hohpe’s Architect Elevator provides a framework for us to fit this into: we’ll talk about that more in Pattern 4.
Pattern 3: Understand the implications of Conway’s law, team topologies and evolutionary architecture patterns like strangler pattern and restructuring from monoliths to microservices (and microfrontends)
Mel Conway wrote in a subsequently widely quoted article in 1967 that
Organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.
A corollary of that — aligned with organisational inertia, and related to the topic we have been emphasising around long-lived enterprise application landscapes — is that organisational structures themselves then become constrained by the architectures that they designed many years — or sometimes decades — previously.
Conway spotted this when he wrote in 1967 — a much less quoted section of the same article, reposted and emphasised by Ruth Malan:
Because the design which occurs first is almost never the best possible, the prevailing system concept may need to change. Therefore, flexibility of organisation is important to effective design.
As noted above in Antipattern 3, attempts by organisations to reorganise rapidly around value streams or ‘tribes’ have a sound intention — to reduce handoffs and inefficiencies. But these attempts are rooted in the same misconception that has run through this chapter: neglecting the essential technical nature of software work.
Instead the implications of Conway’s law and its reverse need to be taken into account.
The significant reverse implication of Conway’s law in long-lived enterprises with large legacy technology estates is that the architecture of that technology estate — which originally was driven by an organisational structure at a point in the past — will continue to constrain the most efficient structure of the organisation, unless deliberate efforts are made to change both.
Team Topologies and the move to microservices and event-driven architectures
Matthew Skelton and Manuel Pais have written about the intersection of architecture and organisation with reference to Conway’s law in Team Topologies. This is an essential read to understand the patterns required for a target organisational structure and architecture. In essence their message is that independently deployable software components should be about the size that a single Scrum-sized team can develop and operate and that the majority of those components should be delivering independent business value (“streams”) with a minority of specialised components (e.g. tax calculation engines), and a thin layer of infrastructure (“platforms”).
Skelton and Pais make it clear that this does not necessarily imply a microservices architecture. But the nature of a well designed landscape of modern microservices fits well with these patterns, in particular:
- Not too small, not too big (the “micro” in microservices is misleading)
- Owns its own data — a single logical data store or database (whether SQL or NoSQL) that only this service can access directly
- Owns its own business logic
- Potentially based around a bounded context (though not necessarily)
- Possibly owns its own frontend following a microfrontend pattern
- Loosely coupled to other services via events or messages — limiting reliance on synchronous RESTful calls
- Independently deployable and scalable
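The loose-coupling point above can be sketched in a few lines. This is an in-process stand-in for a real message broker (Kafka, RabbitMQ, or similar) and only illustrates the shape of the coupling: the publishing service knows nothing about its consumers, so either side can be deployed independently.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process stand-in for a message broker, for illustration only."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

# The orders service publishes an event; the shipping service subscribes to it.
# Neither holds a reference to, or makes a synchronous call into, the other.
bus = EventBus()
shipments = []
bus.subscribe("OrderPlaced", lambda order: shipments.append(order["order_id"]))

bus.publish("OrderPlaced", {"order_id": "A-123", "total": 42.0})
print(shipments)  # ['A-123']
```

Contrast this with a synchronous RESTful call from orders to shipping: there, a shipping outage or redeployment directly blocks order placement.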
Some have noted that this is simply “SOA done right” (without the overhead of WS-* specifications and Enterprise Service Buses). And a microservice fitting this description can be close to indistinguishable from a well-designed monolithic application cooperating with other applications through a well defined lightweight service or message or event interface.
The key practices were described well over a decade ago — the bounded context in Eric Evans’ Domain-Driven Design, and the database isolation, business logic cohesion, and asynchronous coupling in the Fortress Model — latterly Snowman Architecture — in Roger Sessions’ ObjectWatch newsletters (no longer available online but collected in Software Fortresses: Modeling Enterprise Architectures).
The size aspect in particular is key — some teams have moved towards more of a “nanoservices” architecture — many smaller independently deployable components per team and business function — which is likely to be too small, and to incur too much of a management overhead.
Here we will echo the Team Topologies advice — the target architecture for a slice of business value — a “stream”, is a piece of “software that fits in the team’s head”.
Amazon and the autonomous two-pizza team
Mary Poppendieck notes that this style of organisational model and architecture is how Amazon are organised: not just as technology teams but as business product teams, the “two pizza teams” (so called because they are small enough to be fed by two pizzas) that are the equivalent of small franchises in the Amazon ecosystem.
Martin Fowler, while joking that the “two pizzas” are American sized pizzas so that the teams might be bigger than you think, puts it this way:
Each team is focused on some aspect of the customer’s experience, some aspect of what Kathy Sierra calls making the customer kick ass at what they do. This alters our notion of what that small team does because if that small team is focused on some piece of customer experience, some way of making the customer do what they do better, then that tells us how we should draw lines between our small teams. Now, it’s not always easy to do this, but it should, I think be the driving notion.
Evolving towards autonomous teams and loosely coupled architectures
As we noted in Antipattern 3, a typical architecture in long-lived enterprises requires multiple application or component teams to collaborate on a high proportion of business features.
While some organisations have managed to fund a huge rewrite of their core platforms to an architecture that supports autonomous delivery — famously Amazon themselves split up their Obidos monolith — most enterprises will be unable or unwilling to do this as a significant (“kaikaku”) effort whilst still delivering valuable change to their existing platforms.
Instead they should set a realistic timeframe for moving key functions into a “big microservices” architecture — one where an autonomous Scrum-sized team can independently deliver, deploy and operate (“you build it, you run it”) business features and ideally entire outcomes.
This timeframe might be years long even for key functions — especially taking the kaizen rather than kaikaku approach. A significant risk here is that the half-life of a CIO tends towards the one-to-two-year mark; but enterprise architects can set a vision, and work hands-on with teams to demonstrate early successes.
Patterns and guides are available: we’ve mentioned The Mikado Method earlier; the kaizen approach to architecture transformation, the strangler pattern, was first described by Martin Fowler in 2004, named after the Australian strangler fig.
Fowler says:
An alternative route [to huge rewrites] is to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled. Doing this sounds hard, but increasingly I think it’s one of those things that isn’t tried enough.
Elsewhere this has been described as “changing the wheels while the car is driving” or “upgrading the plane whilst in flight”.
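The mechanism at the heart of the strangler pattern is a routing facade in front of the legacy system. A minimal sketch — the path names and backend labels are hypothetical: requests for functionality that has already been migrated go to the new service; everything else falls through to the legacy monolith, and the migrated set grows over months or years.

```python
# Paths whose functionality has been carved out of the monolith so far.
# Growing this set, one business capability at a time, is the migration.
MIGRATED_PREFIXES = {"/payments", "/quotes"}

def route(path):
    """Return which backend should handle this request path."""
    if any(path == p or path.startswith(p + "/") for p in MIGRATED_PREFIXES):
        return "new-service"
    return "legacy-monolith"

print(route("/payments/refund"))   # new-service
print(route("/accounts/balance"))  # legacy-monolith
```

In practice this routing lives in a reverse proxy or API gateway rather than application code, but the principle is the same: the cutover happens one capability at a time, with a trivial rollback (remove the prefix) if a migrated slice misbehaves.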
Paul Hammant followed Martin Fowler’s article with a number of enterprise case studies of successful legacy application strangulations. Dave Nicolette reviewed these and pointed out the long timeframes and relatively small scope of the success stories: as we noted above, the nature of kaizen is that these are going to be long-term, multi-year efforts. Nicolette gives some heuristics for choosing focus areas based on bang for buck — basically those where investing effort in radically improving speed to market will be worth the investment.
Most recently, Sam Newman — author of the key book on microservices in a greenfield environment, Building Microservices — has released a further book much more applicable to large enterprises with legacy application estates: Monolith To Microservices. This is an essential text for the approach detailed in this pattern.
Even for the kaikaku approach — a distinctly funded, deliberate step change to rebuilding the business functionality from one or more existing applications — parts of these approaches should at least be considered: big bang cutovers, data migrations and in particular customer migrations are fraught with high degrees of risk: the April 2018 TSB migration from Lloyds Bank systems to Banco Sabadell is only one recent example of the dangers of this approach.
Breaking dependencies rather than managing them
These approaches allow us to move towards a more loosely coupled, high cohesion enterprise architecture. In particular, this allows many of the work management aspects of many enterprise agile frameworks like SAFe, LeSS and Scrum-at-Scale — to fall away or at least become de-emphasised for most business value items.
There’s no need for a release train and the significant overheads involved with planning, testing and managing it, if teams have autonomy to release customer value independently to production.
Most of the ‘scaling’ practices of these frameworks are about the scaled management of hard dependencies across teams. And many, most or all of these practices are required because enterprises have previously optimised their architectures either implicitly or explicitly for factors other than sustained flow.
In these cases, starting with some practices from one of the scaling frameworks can be useful, but it shouldn’t be seen as the ending point of an agile adoption — and again will give an incremental improvement in efficiency as opposed to the radical, exponential improvements possible with autonomous teams and architecture.
Beyond SAFe, beyond the water-scrum-fall — Kniberg’s river crossing
At this point, the autonomous team has ownership over their technical architecture — as with Amazon, they should have control over their business outcomes too — at the very least as to the design of how the outcomes could be best achieved: what product capabilities and features to build to achieve the “river crossing”, and how; and at best full Amazon two-pizza-team ownership of originating the outcomes themselves.
Henrik Kniberg has written a number of presentations and articles about alignment and autonomy, and uses a stark quadrant model to show both the challenges and the promise of aligned business and technical autonomy.
Here — familiar to many working in enterprise software development — in the top-left quadrant the non-autonomous boss dictates both the outcome and the output.
In the top right — the nirvana of aligned autonomy — the team is given an outcome that aligns with business strategy (we assume), and can decide the most effective method to achieve that outcome.
Without the technical autonomy that we’ve described, it’s less possible — at best, significantly less efficient — for teams to have business autonomy. With full technical autonomy, rapid business autonomy follows.
Pattern 4: smart people and smart teams with robot friends
i. start small changes in technical excellence and DevOps to change culture through behaviour
Adopting “big DevOps”, technical excellence and good design practices requires cultural change: tools can help but use cultural change patterns — unlearning and relearning, safe-to-learn experiments, pioneers & more — for any chance of success.
As noted above: use metrics and improvement and coaching katas to support learning. Metrics for targets — like velocity, lines of code, or attempts to measure individual developer productivity — are actively harmful: we’ll see more of this in Chapter 9.
Barry O’Reilly in Unlearn references John Shook’s model of cultural change: you don’t change culture or mindset before behaviour, you change behaviour first, typically in a small way — which in turn changes mindset. As we’ve seen throughout this book, this is typically a series of J-curves where learning — and culture change — happens over time.
Breaking down barriers between Dev and Ops, between Dev and Test, between Business Analysis and Dev, between customers and Business Analysis — doesn’t happen overnight in a large enterprise.
Often changes to formal process can catalyse changes in behaviour. The authors have seen in a number of organisations how updating official procedures around change or release management — allowing high degrees of automation to substitute for multiple manual handoffs, or redefining Segregation of Duties from having distinct departments develop and deploy applications to automated four-eyes checks through Git pull requests — allows teams to start to behave more like they actually own their applications.
The drive to team autonomy radically increases quality through ownership — this is the fundamental promise of DevOps, embodied in the first ideal of The Unicorn Project: locality and simplicity. A virtuous circle results: the more ownership of business and technical outcomes a team is given, the more it takes.
Develop technical careers: value engineering and engineers, and enterprise architects
As detailed in Antipattern 3, the two main routes to progression for software engineers have both typically been to become hands-off in delivery: either managing engineers, or producing PowerPoint documents. Those who have managed to progress while retaining hands-on development roles have typically been the exception.
A number of organisations have started to formalise a third way as part of their HR processes — or at least to refocus those who might have been promoted to PowerPoint back on actual software delivery.
Distinguished Engineer programmes
The role of Distinguished Engineer has existed (or did exist) for many years at technology firms such as IBM and Sun Microsystems. More recently, more enterprises — in particular in the financial services industry — have initiated Distinguished Engineer programmes to recognise and reward outstanding individual contributors.
A standalone recognition for a few distinguished individuals — typically the Distinguished Engineer population for large enterprises with tens of thousands of technologists numbers in the low tens — is worthwhile but not sufficient to motivate, retain and reward skilled developers as they progress through their career.
A pyramid of reward and recognition — for example with a feeder programme of more junior but highly recognised Expert or Enterprise Engineers below the Distinguished Engineer track — can help.
But the career structure of an organisation should clearly mark out a path to promotion and recognition for software engineers and other software development professionals who want to remain hands-on.
The UK Government Digital Service technology career framework
The best example we have seen of a framework for career development in technology — one which has been an inspiration for a number of other enterprises — comes from the UK Government Digital Service.
Here a framework of dozens of roles is described, with a set of skills and expected skill levels at differing levels of seniority — for example the Software Developer role goes through 6 levels from apprentice to principal developer, and specifically has two tracks with differing skill expectations: technical specialist and management. The former is a hands-on role up to the highest level of seniority.
The architecture elevator
Gregor Hohpe has written about the metaphor of a skyscraper’s elevator to frame the concerns of architects in the context of modern ways of working. Here he complements Brenda Michelson’s recommendation for a mission for an architecture practice:
Many large organizations see their IT engine separated by many floors from the executive penthouse, which also separates business and digital strategy from the vital work of carrying it out. The primary role of an architect is to ride the elevators between the penthouse and engine room, stopping wherever is needed to support these digital efforts: automating software manufacturing, minimizing up-front decision making, and influencing the organization alongside technology evolution.
Migrate towards modern testing and testers
As we saw in the antipattern, the value in testers lies in wearing the Evil Hat.
A modern testing approach involves shifting quality left, automating as many repetitive checks as possible, but using the distinctive testing mindset and skillset where it needs human interaction:
- to spot unconsidered defects or non-happy paths prior to software construction — in analysis, in user story acceptance criteria
- during construction as developers build small valuable slices in minutes or hours, working closely with the developers at their desktops or over chat rather than waiting for formal handoffs
- after construction when a feature is close to release, spotting final unpredictable defects in user experience and business functionality
- as well as helping track down the nature of defects in production for true DevOps teams
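To make "automating repetitive checks" concrete, here is a minimal sketch: a hypothetical pricing rule expressed as executable checks, so that a human tester's time goes to exploratory, Evil-Hat work rather than re-verifying the same behaviour before every release. The business rule, function, and edge case are illustrative assumptions, not examples from any team or tool discussed here.

```python
# A repetitive check worth automating: a pricing rule that would
# otherwise be re-verified by hand before every release.
# The rule and names are hypothetical, for illustration only.

def discounted_price(list_price: float, loyalty_years: int) -> float:
    """Apply a 5% discount per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(list_price * (1 - discount), 2)

def test_no_discount_for_new_customers():
    assert discounted_price(100.0, 0) == 100.0

def test_discount_is_capped():
    # The Evil-Hat question a tester might ask: what about 40 years?
    assert discounted_price(100.0, 40) == 75.0
```

The second check is exactly the kind of defect-spotting question a tester raises once, during analysis or construction; encoding it as an automated check means it is then asked on every build for free.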
Here the trick for organisations is training and nurturing talent for Bolton and Bach’s vision of a testing role within an application delivery team — a real knowledge worker using their creative mind, rather than performing repetitive tasks that a robot could perform.
Consider internal consultancy
One micro-antipattern we have seen over and over in enterprises: experienced technologists complain that the advice finally being followed, after shiny PowerPoint presentations by expensive consultants, is the same advice they had been trying, unsuccessfully, to give internally for months or years.
Henry Mintzberg’s book Managers not MBAs was instrumental in giving me second thoughts after I’d been accepted onto London Business School’s MBA programme. I ended up rejecting the offer, and concentrating on developing my existing career and doing the best within my organisation.
Mintzberg points out that most management challenges — and in the context of this chapter, most deep technical challenges — are best understood by the people with the deepest understanding of the context that the challenges have arisen in, rather than those parachuted in with generic consulting skills.
Internal consultancy can work but it needs fostering: the consultancy mindset and skillset — in particular the ability to distil and present complex ideas in a coherent and convincing narrative with a commercial eye, to convince multiple stakeholders from dyed-in-the-wool technologists to CEOs — doesn’t come naturally.
And occasionally the experience of having implemented new technologies or techniques in other organisations can come in useful.
But the value of exploiting people’s experience and understanding of an organisation, as well as their social capital within the enterprise (their network of contacts and goodwill), is under-used in most enterprises we have seen.
And, as we have seen earlier in the chapter, most of the significant transformational techniques take many months at the very least, if not years, to effect. External consultancy engagements are rarely cost-effective over these timeframes.
Centres of Excellence in development or testing are antipatterns for sustained flow and autonomy. But Centres of Excellence in enablement — the Team Topologies book also talks about a similar pattern — can be hugely valuable.
Pattern summary
Using agile and lean work management practices divorced from the realities of writing software will achieve at best incremental gains.
Likewise, implementing integrated modern tooling without cultural and organisational change — “little DevOps”, a tools-only focus — will achieve only incremental gains.
Radically, exponentially better, sooner, safer, happier — continuous — delivery of value has to come from taking into account technical excellence in software development processes and designs at team and enterprise level, and excellence in the people and teams who develop software.
And it won’t happen overnight: it has to happen by taking a mostly evolutionary approach to this revolutionary change.
Measure, manage, and incentivise technical excellence effort
- Have a continuous improvement budget of at least 20% — work out how you fund this, whether through CapEx as a “tax”, or OpEx
- Use this to pay off technical debt and fund continuous improvement
- Make this work (or at least effort) visible so it can be managed
- For brand new greenfield software built to deliver at speed this is about sustainability
- For legacy software — more likely to make up the software estate of large enterprises — it’s about continuous improvement (kaizen) with occasional step changes or rewrites (kaikaku)
- Measure success — use Accelerate as a guide; look at code quality tooling as well — but use metrics for learning not targets
- Train and coach novel techniques — don’t expect teams to pick these up on their own. Experiment with how these techniques work in context.
- Your delivery managers must be incentivised, through KPIs or otherwise, to balance continuous technical excellence with the delivery of business features
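As one concrete illustration of “use Accelerate as a guide”, the sketch below computes one DORA-style metric, lead time for changes, as the median time from commit to deploy. The timestamps and function name are assumptions for the example; a real implementation would pull these from your version control and deployment tooling.

```python
from datetime import datetime
from statistics import median

# Illustrative (commit, deploy) timestamp pairs; in practice these
# would come from your VCS and deployment pipeline, not be hard-coded.
changes = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 11, 30)),
    (datetime(2023, 5, 2, 14, 0), datetime(2023, 5, 3, 9, 0)),
    (datetime(2023, 5, 4, 10, 0), datetime(2023, 5, 4, 10, 45)),
]

def lead_time_hours(commits_and_deploys):
    """Median commit-to-deploy time in hours (an Accelerate/DORA metric)."""
    return median(
        (deploy - commit).total_seconds() / 3600
        for commit, deploy in commits_and_deploys
    )
```

Tracking the trend of a number like this over time supports learning; turning it into a target invites gaming, which is exactly the “metrics for learning not targets” point above.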
Go beyond agile work management practices for radically faster speed to market
- Continuous delivery means continuous software engineering not water-scrum-fall
- Moving to this is difficult with component-oriented architectures and organisations
- But the payoff is significant in terms of speed of delivery
- Breaking dependencies, rather than just managing them, is the number one contributor to speed-to-market
- This requires a change in focus of your enterprise architecture function from removal of duplication or reduction of cost to enabling radically faster flow and product team autonomy
Instead of starting and ending your transformation with an organisational pivot (the “Spotify Model”), plot the path to autonomous teams, including autonomy over their technology
- Team Topologies is an essential starting point
- Evolve your enterprise architecture to a system of loosely coupled applications, and your organisation to a set of autonomous teams each delivering an element of business value — this is a long-term, intertwined journey
- Consider using the “strangler pattern” to evolve to something looking like “big” microservices integrated with events; look at Monolith to Microservices for inspiration
- This allows the constraining structures of scaling frameworks such as SAFe to fall away
- Heavy coverage of automated functional testing is key to a sustainable evolution of your architecture
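A minimal sketch of the routing facade at the heart of the strangler pattern, assuming a simple path-prefix scheme: requests for already-migrated capabilities go to new services, and everything else falls through to the legacy monolith. Service and route names are hypothetical.

```python
# Strangler-pattern routing facade (a sketch): the facade sits in front
# of the monolith and diverts migrated routes to new services.
# All backend names and path prefixes here are illustrative.

LEGACY = "legacy-monolith"
MIGRATED = {
    "/orders": "orders-service",    # capability already extracted
    "/billing": "billing-service",  # capability already extracted
}

def route(path: str) -> str:
    """Return the backend that should serve this path.

    Longest-prefix match against migrated routes; fall back to the
    monolith for anything not yet extracted.
    """
    for prefix in sorted(MIGRATED, key=len, reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return MIGRATED[prefix]
    return LEGACY
```

As each capability is extracted into its own service, its prefix moves into the migrated table; when the table covers everything, the monolith behind the facade can be retired. Monolith to Microservices covers this and related migration patterns in depth.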
Technical excellence requires skill and experience: focus on people
- In particular experienced senior software engineers
- But also modern testers — test automation is key to agility but it doesn’t replace the requirement for testing skills (aka the Evil Hat)
- And enterprise architects steeped in the principles and practicalities of DevOps and agile technical practices to tend to the health of your system of systems
- Allow hands-on individual contributors — not people managers — to rise to the top of your reward and seniority structure
- Some of these people can coach the technical excellence improvements above
- Follow patterns such as Distinguished Engineer programmes
- Consider internal consultancy capabilities
Software development work is a combination of knowledge work and scientific-method-based engineering. This means it needs a style of management that suits these two inherent characteristics.
This management needs to focus on optimising for flow: optimising work management, technology design within an application and between applications, and organisational structure, all in combination to get the fastest possible flow of delivery of value.
The optimisation effort needs to be a continuous one: firstly because it’s impossible to design the optimal organisation, technology, and work management setup upfront given the complexity and individual context of any organisation, and secondly because the context itself — the market context, the technology context, and that of the stakeholders in the operation of the business — is constantly changing.
Tools — DevOps tools in particular, a “little DevOps” implementation — can help skilled people, but they won’t replace critical knowledge-working skills; they only remove repetitive low-skilled tasks from the critical path of highly skilled people.
Challenges with software delivery speed (and therefore cost) come from inherent complexity — which grows worse unless actively attended to on an ongoing basis by experienced and skilled software development and operations professionals: simplicity in software construction and operation is hard.
There is an old programmers’ joke: a junior developer writes code; a senior developer deletes code; an expert developer avoids the code being written in the first place.
Simplicity at an application level is hard enough; simplicity at enterprise level requires deeper attention and organisational flexibility as well as technical flexibility: using Conway’s law and its corollary to the organisation’s advantage.
Technology leaders must understand the false economies that they are typically incentivised to optimise for: cheap junior developers; deferring investment in automation; replacing testers with developers; replacing senior developers with tools. And instead invest in the broad and deep skills required for simplicity, quality and fast flow.
The rewards are significant, even in long-lived estates of hundreds of applications (and one or two living fossils): with investment and time, you’ll move, as Dan Terhorst-North puts it, to delivering software in minutes rather than months.
