‘Intentional Naivety First’ Bounded Context Modelling

Model the wrong boundaries in your system and disaster is just around the corner, waiting to tease your sanity. An excess of dependencies between modules will result in changes rippling across a fragile codebase at compile time, and that same web of dependencies means even a small runtime error in one module has the potential to bring the entire system crashing down.

Modelling the wrong boundaries will test the patience of your teams as well. Every major piece of work will require collaboration with other teams who all have different priorities and sometimes political agendas. We’ve all been there, proudly waiting to go live with a killer new feature, but helplessly blocked by another team.

You know all of this, so don’t let me patronise you further.

But one thing a lot of people aren’t so familiar with is how to find the right boundaries — aka bounded contexts — for their needs. And understandably so — in complex domains, there are hundreds of modelling choices, and no perfect model.

I want to show you one technique for getting started with modelling and leading workshops, even if you have zero Domain-Driven Design experience. A technique I frequently use myself.


Visualising the Domain First

As I discussed in my previous post, we need to uncover the essential concepts and relationships in our domain before we can start modelling the boundaries. Sorry, no shortcuts to understanding your domain.

If we try to model our boundaries without understanding the domain, this is what I call Pure Naivety First Bounded Context Modelling. We pick prominent nouns in the domain and assume they will be good boundaries (hint: they’re often not. See The Entity Service Antipattern).

This is where Event Storming is the answer. Grab a mix of people who understand different parts of the domain, and then collaboratively model the domain at a high level of detail to create shared understanding.

Now we are in a position to start identifying the optimal boundaries.

Intentional Naivety First Modelling

Where do you start? Where do you begin trying to identify those bounded contexts? My approach is to apply a heuristic I use for solving all kinds of problems — start with the simplest possible solution and vigorously reject additional complexity until justified.

This is the premise of Test Driven Development (TDD). We write the simplest possible test, and the simplest possible solution in order to achieve our current target. But how does this look in a context modelling scenario?

Think about the simplest possible design of a system — one single context. Ok, we can usually disprove this quickly: the sheer size of the system makes it too big to be understood by a single person, and it must be worked on in parallel by multiple development teams.

So what’s next? What’s the next simplest possible solution? A sequence of linear steps that are highly decoupled.

During workshops, this is the approach I take. I try to flatten the domain into a sequence of linear steps, and resist calls to explore more complex options until someone can provide a compelling argument against the simple linear process.
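To make the naive linear model concrete, here is a minimal Python sketch. The step and field names are entirely hypothetical placeholders, not a prescription; the point is only that each step depends on the output of the step before it and on nothing else.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Concept:
    """A piece of domain data flowing through the linear process."""
    concept_id: str
    stage: str

# Each step depends only on the output of the previous step.
# No step reaches sideways into another step's internals.
def step_one(concept: Concept) -> Concept:
    return replace(concept, stage="step-one-complete")

def step_two(concept: Concept) -> Concept:
    return replace(concept, stage="step-two-complete")

PIPELINE = [step_one, step_two]

def run(concept: Concept) -> Concept:
    """Run the concept through the linear sequence of steps."""
    for step in PIPELINE:
        concept = step(concept)
    return concept
```

If someone can show that a step genuinely needs to reach outside this chain, that is exactly the compelling argument against the linear model we are waiting for.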

I’m not saying we never explore other alternatives; in fact, we should always explore alternative models. Understand that it’s the insights we gain from this first naive model that engage everyone and unlock a world of strategic modelling possibilities.

Finding the Naive Linear Model

Here is a guideline process for finding the naive linear model:

1. Identify a concept or piece of “data” on the timeline

2. Follow the concept along the timeline and highlight every transition in its lifecycle

3. Draw a boundary around each transition

4. Visualise all of the dependencies each boundary needs to do its job

5. Explore every possibility to move those dependencies inside the boundary so it becomes 100% autonomous

6. Combine boundaries that appear inseparable

7. Ruthlessly slaughter the model — find every possible disadvantage and trade-off

Time for an example. Here’s an anonymised scenario based on a real client engagement:

We are Wonder Beans, a business that provides complete magic bean solutions. We provide customised magic bean packages (for growing all kinds of magic trees), we arrange a magic bean farmer to grow your beans, and we track the development of your magic beans, providing real-time and historical insights.

1. Identify a Concept Early in its Life

Priority one is to identify the first step in the lifecycle of a concept. In a technical sense, where do we first receive or create some piece of data? In this case, Wonder Beans has an army of suppliers who regularly deliver batches of magic beans to their warehouse, so this is where we start.

We have an Inbound Bean Delivery Accepted event. A new batch of magic beans is now owned by the business and its lifecycle (within the Wonder Beans business process) has begun.

A tiny extract of the Wonder Beans event storm, showing the start of the magic beans’ lifecycle

A variety of modelling patterns could indicate an initial lifecycle phase. One example is the supplier-triggered lifecycle pattern — an external system (pink) triggering an event (orange) that initiates the lifecycle of some concept in the domain being modelled (i.e. any events involving the concept prior to your domain are irrelevant).
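In code, the supplier-triggered lifecycle pattern boils down to a handler that brings the concept into existence only when the external event arrives. A minimal sketch, with invented field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InboundBeanDeliveryAccepted:
    """The orange event raised when the pink external supplier system
    delivers a batch of beans. Field names are invented for illustration."""
    batch_id: str
    supplier: str

def on_delivery_accepted(event: InboundBeanDeliveryAccepted) -> dict:
    # As far as our model is concerned, the batch starts to exist here;
    # whatever happened at the supplier beforehand is out of scope.
    return {"batch_id": event.batch_id, "phase": "lifecycle-started"}
```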

2. Highlight Lifecycle Transitions

Now our aim is to identify every transaction (a real-world transaction, not a database transaction) or event involving the magic beans (or whatever they evolve into). These transitions represent phases in the beans’ lifecycle where rules, policies, and capabilities will become active or inactive.

Those are the essential insights we need in order to identify cohesion and find good boundaries.

The lifecycle of magic beans is as follows:

  1. Check the quality of the beans
  2. Store the beans in the magic refrigerators
  3. Create bean potions to suit customer orders — combinations of beans that produce specific types of magic trees (there are hundreds of types of beans and billions of possible combinations)
  4. Allocate magic bean farmers (the farmers must have expertise relevant to the bean potion and satisfy criteria related to the client — e.g. operates in their region)
  5. Monitor the beans (use feedback to update customers, guide farming and improve future potions)

3. Delineate and Name Each Transition

On our event storm, we can now draw boundaries around each of these transitions and give them a name. This makes it easy for us to point to them and refer to them, but there’s also another reason — to elicit business terminology from domain experts.

Facilitator: “Let’s call this ‘bean QA’ for now”

Domain expert: “No, that set of actions is referred to as the magic scrub”

An illustration of how you might identify steps on an event storm. This is an example illustration only. Expect lots more stickies on a real event storm.

At this stage, you may want to move to the flip chart and draw your contexts as a domain use diagram so you can show all of your contexts at a glance. You will be moving back and forth between the flip chart and the event storm as you zoom in and out of detail.

4. Accentuate Dependencies

Before you can identify boundaries, you need to build up a picture of dependencies: dependencies between the tasks themselves, dependencies on other parts of the business, dependencies on external services, and so on.

Understanding dependencies shows you which tasks have a stronger coupling than others — are more cohesive — and likely belong together. Understanding dependencies will also challenge the naive linear model.

In the magic beans domain, our event storm has shown us that creating bean potions and allocating magic bean farmers both depend on “potion rules”. Potion rules are rules that affect how potions are created and how bean farmers are chosen.

For example, a rule might be “do not mix immature blue beans with mature dexo beans” or “find a farmer who has experience growing voodoo bean potions in the southern hemisphere”.
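The two example rules translate naturally into predicates, which also makes it obvious later that they are consumed by different steps. A rough sketch, with invented type and field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bean:
    variety: str
    mature: bool

@dataclass(frozen=True)
class Farmer:
    name: str
    hemispheres: frozenset
    potion_experience: frozenset

def may_mix(a: Bean, b: Bean) -> bool:
    """Rule: do not mix immature blue beans with mature dexo beans."""
    pair = {(a.variety, a.mature), (b.variety, b.mature)}
    return not {("blue", False), ("dexo", True)} <= pair

def suitable(farmer: Farmer, potion: str, hemisphere: str) -> bool:
    """Rule: the farmer must have experience growing this potion
    in this hemisphere."""
    return potion in farmer.potion_experience and hemisphere in farmer.hemispheres
```

Note that `may_mix` only ever sees beans and `suitable` only ever sees farmers — a hint, picked up again below, that these are two different kinds of rule.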

Accentuate dependencies or your model will be lacking key details

Some bean rules have existed for years and are unlikely to change; others are dynamic, based on customer needs and real-world feedback from bean growing cycles.

It looks like the naive linear model is impossible due to the dependency on this apparent “potion rules” context, especially when everyone in the modelling session is convinced it is a context because there is a potion rules department. However, don’t let them win easily.

5. Challenge Dependencies — Strive for 100% Autonomy

In your modelling session, people will be convinced that some context must exist because it is relied on by multiple other contexts, because there is an organisation department with that name, or they will find some other reason. They might be right, but you have to challenge them.

What changes could you possibly imagine that would make the naive linear model work? Even if it sounds crazy, go along with it. Think of any crazy solution to make your contexts 100% autonomous. Some people will get excited and try to help you, others will resist and try to prove you wrong — this is healthy and exactly the type of debate you are looking for.

There are a number of patterns to look for in this situation. One pattern is isolate distributed coupling — extracting the sub-components of existing modules that depend on each other into a new module in order to decouple the original modules.

When two modules are coupled, identify the dependent sub-modules and extract them into a new module
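A rough Python sketch of the pattern, with entirely hypothetical module and function names: imagine an order-taking context that previously imported tax logic buried inside an invoicing context. After extraction, both depend on the new module but no longer on each other.

```python
# The extracted sub-component now lives in its own module.
def calculate_tax(net: float, rate: float = 0.2) -> float:
    """Shared logic that previously lived inside Invoicing."""
    return round(net * rate, 2)

class OrderTaking:
    """Depends on the extracted module, no longer on Invoicing."""
    def quote(self, net: float) -> float:
        return net + calculate_tax(net)

class Invoicing:
    """Also depends only on the extracted module."""
    def invoice_total(self, net: float) -> float:
        return net + calculate_tax(net)
```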

Another pattern is decompose and distribute, where an existing module is broken down into multiple smaller pieces and distributed among existing modules. And this is the scenario that applies to Wonder Beans.

Wonder Beans naively modelled the “Rules” context. Everyone talks about rules, so it must be a bounded context. There is a specific UI in the product for configuring all kinds of rules, so it must be a bounded context. No, sorry.

Detailed analysis of rules (more event storming) shows that there are different kinds of rules. Some apply only to potions, some apply only to farmer allocation, and none apply to both. So the rules context can be decomposed and distributed.

When multiple modules all depend on another module, maybe it’s a faux module and can be decomposed to enable higher autonomy
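The analysis that justified decomposing the shared “Rules” context can itself be sketched in a few lines of Python. The rule names and their consuming contexts are invented for illustration; the important property is that every rule has exactly one consumer, so the faux module splits cleanly.

```python
# Each rule mapped to the set of contexts that actually consume it.
SHARED_RULES = {
    "no_immature_blue_with_mature_dexo": {"potion_creation"},
    "max_beans_per_potion": {"potion_creation"},
    "farmer_region_must_match_client": {"farmer_allocation"},
    "farmer_needs_potion_experience": {"farmer_allocation"},
}

def distribute(rules: dict) -> dict:
    """Group each rule under its single consuming context.

    Raises if any rule is used by more than one context, which would
    mean the module is genuinely shared and cannot be cleanly decomposed.
    """
    by_context: dict = {}
    for rule, consumers in rules.items():
        if len(consumers) != 1:
            raise ValueError(f"{rule} is used by multiple contexts")
        (context,) = consumers
        by_context.setdefault(context, []).append(rule)
    return by_context
```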

6. Combine Cohesive Steps

If you’ve modelled your event storm in high detail, you’ll likely have highly granular boundaries. Some of these boundaries will be more cohesive with each other than others.

We need to group cohesive boundaries to provide additional clarity, so we can talk about specific parts of the domain collectively, and avoid being overwhelmed with lots of small pieces.

Here are a few of the tips I’ve picked up over the years for analysing cohesion at the context level of granularity:

  1. Does the business have a name that collectively describes both steps?
  2. Are the same people involved in both steps?
  3. Is the domain expert the same for both steps?

For example, internally, all Wonder Beans staff are familiar with the phrase “bean prep”. Bean prep collectively refers to checking the quality of magic beans and storing them in the fridges. Both steps are also managed by the same employees in the same department.
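In code terms, combining the two cohesive steps means a single “bean prep” context whose internal workflow is no longer visible to the rest of the system. A minimal sketch; the internal method names and fridge identifier are invented for illustration:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class BeanBatch:
    batch_id: str
    quality_checked: bool = False
    fridge: Optional[str] = None

class BeanPrep:
    """One combined context for the two steps staff collectively
    call "bean prep": quality checking and fridge storage."""

    def prepare(self, batch: BeanBatch) -> BeanBatch:
        # The two formerly separate steps become an internal detail.
        return self._store(self._check_quality(batch))

    def _check_quality(self, batch: BeanBatch) -> BeanBatch:
        return replace(batch, quality_checked=True)

    def _store(self, batch: BeanBatch) -> BeanBatch:
        return replace(batch, fridge="magic-fridge-1")
```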

After finding low level domain autonomy, explore higher levels of cohesion

Of course, there are many caveats to be aware of:

  1. Don’t blindly model the organisation structure in code
  2. Boundaries are emergent — they will change over time

At this level, we have to tread extremely carefully. These tips can be useful, but they can also lead us in the wrong direction.

7. Slaughter the Model

After going blue in the face defending our naive linear model, we now turn the tables and go red with aggression trying to rip it to pieces. We want to find every possible reason why it won’t work.

Combined with the insights gained from defending the naive model, the insights we gain from tearing it to pieces will flood our brain with a million modelling possibilities. With so many potential avenues to explore, the modelling session will be in full swing and all attendees fully engaged.

When slaughtering your model, attack it from all angles:

  • Business reasons — “we should split this context in half because we want to isolate a core business value proposition”
  • Organisational reasons — “we should remodel the boundary between these two contexts to accommodate the skill sets in each team”
  • Technical reasons — “for now, there is so much legacy in here it cannot be broken down in this way”
  • Delivery reasons — “there is a bottleneck between these two contexts, let’s make a bigger context to avoid a handover between teams so we can deliver faster”
  • Flexibility / uncertainty reasons — “I feel extremely uncertain about these boundaries. Let’s keep them as one big context for now until we gain further insights”

Next Steps

If you want to identify better boundaries but feel unsure where to start or how to facilitate modelling sessions, I fully recommend this approach. It will break the ice and give you a starting point to build on (admittedly, in some domains more than others).

As you gain more modelling experience you’ll encounter many other patterns and heuristics, and you’ll realise the true power of strategic modelling. If you’d like to fast track your modelling skills then check out some of the resources below, keep an eye out for my upcoming posts, or enquire about some of my workshops.

Additional Resources

Books
- Designing Autonomous Teams and Services
- Patterns, Principles, and Practices of Domain-Driven Design

Talks
- Aligning Organisational and Technical Boundaries for Team Autonomy
- Great Technical Architects Must be Great Organisation Architects
- The Art of Discovering Bounded Contexts

Articles
- Confusing Process Stages with Bounded Contexts
- The Continuous Organisation Design Playbook