[Draft] Rewilding Software Engineering
A good enough pathway to reasonable decisions.
By @girba (Feenk & Glamorous Toolkit) and @swardley (Wardley Maps)
December 2024.
Chapter 1: Introduction
Whilst the lessons in this book are widely applicable to decision makers and software engineers who need to ask and answer questions, this book was mainly written for those poor folk dealing with legacy systems. We say “poor” because in today’s world of digital technology and AI, there is little love given to legacy environments or to those who have to manage the migration away from them. This book seeks to address that imbalance.
Why legacy? According to Forbes, more than 60% of corporations rely on legacy systems to “power customer-facing applications”. Mechanical Orchard claims that maintaining legacy systems is a $1.14 trillion market, and a Micro Focus survey (2022) argues that the world depends upon 800 billion lines of legacy COBOL code. McKinsey highlights that technical debt due to legacy accounts for between 20% and 40% of the entire technology estate, and in 2018 the US Federal Government spent 78% of its IT budget on operations and maintenance, i.e. keeping the lights on. Acropolium (a self-described specialist in legacy migration) estimates the average time for a typical migration project is 4 years, and according to the 2020 Mainframe Modernization Business Barometer by Advanced, 74% of organisations have failed to complete their migration projects, with over 28% of organisations claiming their legacy systems are between 20 and 30 years old.
In a world where nearly 90% of businesses are expecting GenAI integration to free up 75% of their workload, is it any wonder that many CIOs are hoping that AI will solve this legacy problem for them?
Most of the statements above are derived from surveys, as we do not have a means of interrogating the true legacy estate. Hence the quotations are statements of belief; none of these reports should be considered rigorous by any measure. They are a painting of the landscape, not the landscape itself. That said, taken as a whole they do indicate a global chorus singing a common refrain: legacy is expensive, it is hard, and it needs to change.
In this book, we argue that simply hoping that AI will solve this problem is not the solution, and that a major cause of all our headaches is the very method by which we ask and answer questions. To examine this, we need to start our journey by first looking at how we make decisions.
How we make decisions
Why should we talk about decision making in software engineering? Developers spend more than half of their time trying to figure out systems well enough to determine what to do next. They might explore the impact of a change, hunt for the root cause of a bug, or investigate how to migrate a component, all in order to decide whether to make that change, apply that fix, or carry out that migration.
The further we move away from editing and committing code, the more the work becomes purely about decision making, such as the architectural choices that we make on a whiteboard. Software engineering can be viewed as primarily a decision making activity about a continuously changing system and its surrounding environment. But what does that decision making process look like? This is rarely a question we ask ourselves.
Instead of questioning the process of decision making, we tend to focus on the outcomes of the decision made i.e. could we have made a “better” decision that is more aligned with the business (the outcome) rather than could the decision have been made in a “better” way (the process). Ideally, we need to do both but to begin with, we will concentrate on the process itself.
In general, the steps for how we go about making decisions can be described as:
- We assess the problem.
- To assess the problem, we need to explore the systems we are looking at.
- To explore the system, we need to have a conversation with either a person or the system.
- Having a conversation requires information that we can share or that can be shared with us.
- Information does not just appear; it must be synthesized in some way.
- To synthesize information, we need some form of development experience to interact with the systems.
These steps are shown in figure 1.
The steps have been drawn in a Wardley map using the evolution stages of concept/genesis, emerging/custom built, converging/product and accepted/commodity. The terms concept, emerging, converging and accepted are used to describe how much agreement exists on the process of information use (the red line). For example, exploration is mostly considered an emerging practice as there is little consistency between engineers over how they explore beyond it being “ad-hoc”. The terms genesis, custom built, product and commodity describe how industrialized the practices and tools for information generation are (the blue line). For example, there exist numerous competing products for development experience.
Many of the ideas in this book can feel as abstract in description as they are powerful in practice. Hence, to help the reader overcome this, we will explain each of these steps using examples grounded in practice, in a question and answer format.
Let us start with the question of “How do we go about making decisions in practice today?”. To answer this, let us run through the steps in figure 1 in reverse order.
a) We can only experience a system through tools that expose that system through some form of experience: a development experience. Today, our development experience consists of monolithic tools that are not designed for our specific context but for a general one. For example, the tools we use to investigate a hospital system are usually the same generic tools that we use to investigate an online gambling site. So pervasive are these standard tools that you can probably describe them for yourself without even looking. They will have a navigation pane on the left, a basic search function on top and some general window which displays information such as the code of the object you’ve selected in the navigation pane. You might even be able to right click on an object in the navigation pane to get some properties. How you view and interact with the “data” has been pre-defined, i.e. the tool constrains how you view the digital world. In the physical world, this is the equivalent of trying to build a Formula 1 racing car with the same set of tools used for digging a deep-shaft mine.
b) Using these tools, we manually inspect the system in order to synthesize the information we require. Examples include reading lots of code, investigating error traces or exploring log entries in order to find the information we think we need. On average, we spend more than 50% of our entire development time on reading code.
c) The information is then consolidated into views of the system that are manually pieced together. For example, we may use the information we have collected to create an architecture diagram, a domain model or a network diagram. We often have to leave our monolithic tool that we used to synthesize the information in order to present it in a different format. For example, using an editor to examine code and then a slideware tool to present an architectural diagram that represents the code that we found with the editor. Sometimes we don’t even use digital tools to create these views but instead rely on whiteboards, paper or even post-it notes. Occasionally automated tools are used, such as a graph of events over time, but again they are typically generic and highly constraining with pre-defined views. You will have experienced this if you’ve ever used one of these tools and found that the graph isn’t quite what you needed but had no way of changing it within the tool, often resorting to trying to export the data into a new tool such as Excel in order to create a “better” graph.
d) The views we create are considered to represent the data about the system, and our conversation is based upon those views, such as the number of events over time in the graph or the connections in our network diagram. We call this a “data centric” approach even when those views are little more than pen on paper or manually created, and we have no real idea how representative of the system they are. Ask yourself: how many architecture diagrams have you seen that are actually wrong, that are missing components which are discovered later, or that have changed since the time the diagram was created? A hand-drawn diagram about a system is not unlike a painting that is used as a way to document history. It documents the perspective of the author at that time more than the system itself.
This situation is exacerbated in a world of continuous deployment, where it has almost become fashionable to give up on architectural diagrams by keeping them “high level” with “just enough” information, to force a process of regular manual updates, or in some cases to rely on vendor tools that come with their own constraints and pre-defined views in order to help automate the generation. An example of this approach is summarised in an InfoQ article which says that “One of the biggest mistakes is to create detailed architectural diagrams for parts of the system with high volatility”. Even when vendor tools are used, it is often considered that “the automatic creation of diagrams is not a sufficient option. It needs to be complemented by manually modeled diagrams”. Architectural diagrams are seen as a craft.
e) These conversations are part of how we explore the system. In software exploration, the process is typically driven by a series of ad-hoc conversations and interactions with the system. This method is unsystematic, guided by individual curiosity, and often lacks a comprehensive strategy. In contrast, geographical exploration follows a more structured approach. It begins with a blank map and systematically models the landscape through movement, careful observation, and the use of specialized tools like theodolites. This results in a comprehensive, model-centric understanding of the area being explored e.g. a map.
f) We assess and make our decisions based upon this exploration. The result of the exploration might only partially describe the system and can hold hidden beliefs. Our decision is predominantly based upon whether we believe what we are being told by the software engineers and the views created, i.e. our gut feel as to whether this is right.
Our current practice can be summarised as gut feel assessment based upon ad-hoc exploration, using what are perceived to be “data-centric” conversations that are in fact built upon manual information, synthesized through a process of manual inspection, using a development experience that consists of monolithic tools. We quote the term “data-centric” because gut feel, ad-hoc exploration, manual information and manual inspection seem to be orthogonal to the idea of being data centric.
We’ve added the practice today to our Wardley map shown in figure 1 to create the new map shown in figure 2.
When creating this new map, we’ve introduced the idea of pipelines and considered how evolved the components of today’s practice are. Pipelines represent a common meaning, i.e. ad-hoc is a type of exploration, but there are other ways of achieving exploration, such as the more structured approach of geography. Whilst there might be significant disagreement between software engineers over how exploration is achieved, there is general agreement (an accepted consensus) that their approaches are ad-hoc, and there is little agreement over what a more structured method would look like. Hence, in the map, the notion of ad-hoc exploration is considered more evolved than the notion of exploration itself.
Using this logic, each of the components from ad-hoc exploration to monolithic tools was added. Manual inspection was described as an emerging practice rather than an industrialised one because the rigidity of our monolithic tools often forces us into other highly manual processes such as reading code. There exists little consensus over what manual inspection means, and topics such as reading code are rarely discussed. Whilst reading code does not scale with large systems containing millions of lines of code, it has the advantage that it can be adapted to any context and is hence used to circumvent the rigidity of today’s development experience. For example, when the tool doesn’t show connections between code objects, software engineers are forced to read the code to find out the connections for themselves.
While there are shortcomings to today’s approach, there are three takeaways worth noting:
- Any decision about a system, regardless of whether technical or business focused, requires information from the system.
- This information has to be synthesized somehow from the system, and the only way to do that is through a tool that provides an experience. A development experience. Thus, the development experience is paramount for any meaningful system development.
- Developers today write code for a fraction of their time. They spend most of their time reading because they want to understand what to do next. The largest single cost in software engineering is figuring out the existing systems, yet it is something we do not optimize our work for.
How did we get here?
In a world with an abundance of data, automation and systems, we have somehow ended up with decision making processes that amount to little more than ad-hoc choices based upon manually created views. The cause of this situation appears to be the use of generic tools, which certainly suits the tool vendors to whom we have handed over part of the process of understanding a system. We have accepted that building tools is hard (it takes money, it takes time), or at least that is what tool vendors have always told us whilst highlighting their world class solutions and beloved tools that promise lightning speed. “Software is a team sport”, with the tool vendor as the referee, pitch and equipment provider.
If you want to pick a villain, a potential candidate would be Apple, who took the concept of personal computing developed at Xerox PARC and encased it in a physical box and impenetrable apps with the Apple Macintosh. Another would be Microsoft, which then sold its own concept of personal computing to the masses by removing the physical box: any x86 architecture would do. In both cases, the operating environment became more of a black box, and what was sold was the convenience of doing tasks rather than an understanding of what was happening. Despite the glossy marketing brochures, the vendors did not enable or empower understanding in people; rather, they infantilised them into tightly constrained spaces dependent upon the tools they sold.
Our natural inclination is to fight against this. Children start to build computational systems even within highly constrained environments like Minecraft or Roblox. If we wish to enhance these natural inclinations, we need to remove the constraints of tools; we need to provide more freedom. This is the central idea, expressed as four freedoms, behind the counter-revolution to all of this control: the open source movement.
Computers were supposed to augment human intellect, not diminish it in the pursuit of convenience. The path we have taken has had such a devastating effect that most professional software engineers don’t even conceive of building their own tools. Even within development forums, you commonly see examples of problem solving that reach the limit of the tool, at which point the engineer gives up and suggests that any further exploration be handed over to the tool vendor. Worse than this, many have almost replaced exploration and understanding with searching the web or glorified forums for answers. This is exemplified by endless infomercials on the “best answer to all your coding questions”: Stack Overflow.
Example: Trying to optimize a data pipeline in the traditional way
To explore these ideas further, we turn to a real world example. A large corporation wanted to optimize the performance of a central data pipeline by an order of magnitude. The demand came from the business: in their case it was the main marketing pipeline from which offers were sent to millions of customers, and they wanted to be able to react much faster to changes in the market environment. The problem was visible all the way up to the C-level, and an initiative was started to reach the business goal of a quicker response to the market.
However, after a few years of effort, the data was still moving through the pipeline at the same speed as before. So, how can it be that all the effort made no difference in the end? People cared about the problem and they had spent many millions of dollars in pursuit of a solution.
To explain the environment, we will use their manually written high level architectural diagram (figure 3).
The pipeline consisted of an internal domain-specific language (DSL) based on Excel. Programmers would write elaborate queries in Excel spreadsheets and these were automatically converted to database transformations that were applied to the data which would end up in both a SQL database (Oracle) and a NoSQL database (Cassandra). Both of these databases were used as an input for a low code platform that also offered AI abilities and on which other programmers wrote scripts specific for various marketing campaigns.
The architecture was simple, but no amount of engineering or reading Stack Overflow was helping to optimize it. Or so they believed; they did not have any performance metrics. Despite this, their investigation led them to believe that there might be too much data generated along the pipeline that was not used in the end.
Their best guess was that the problem was “dark data”, the equivalent of “dark matter”: lots of stuff that we cannot see but that has an impact. The teams working with the DSL did not have any visibility into what was actually being used. The teams working on the low code scripts could only see the data from the databases, but they did not know what transformations affected that data.
To make matters more complicated, the low code platform was indeed believed to be fulfilling its promise of helping people to create code faster, but it offered no support for either performance measurement or tracing where a piece of data was used. Due to all these observations, they realized that, because they were working in silos, they could not optimize the overall pipeline. They understood that they needed accurate data lineage to verify their beliefs, but there were no tools for this and no Stack Overflow article to consult. In a system which contained tens of thousands of variables, they had been forced to spend person-weeks trying to manually trace how just one variable was used through the pipeline.
In summary, the method of assessing the situation included:
a) A development experience consisting of monolithic tools which, in the name of convenience, constrained what they were able to do.
b) They synthesized information through manual inspection. For example, they attempted to create traces of how some variables were passed through the low code scripts by manually reading through the code.
c) They consolidated information into manual views of the system. The high level architecture diagram in figure 3 was such an example.
d) Their conversations were based upon those manual views and a belief they were representative of the system. These conversations led them to further beliefs that there “might be too much data generated along the pipeline that was not used in the end”.
e) Their exploration of the system consisted of a series of ad-hoc conversations and interactions with the system. This method was unsystematic and mostly guided by the individual curiosity of the software developers and architects from different teams.
f) They assessed and made their decisions based upon their beliefs in this exploration. The result was millions of dollars spent on investment with no discernible impact.
This led to a situation where the business and engineering were in conflict and the pressure was mounting on engineering to find a solution. Engineering had responded by putting more resources into investigating the problem using the same path which had not yet yielded any results. The problem was considered by many to be insurmountable and they had practically given up.
A new path: Moldable Development
In a world of rigid tools, manual inspection, ad-hoc exploration and gut feel, a group of researchers (who later found a home at feenk) asked whether another path for decision making was possible. This required challenging the way we make decisions, the way we think about tools and the way we interact with systems. Step by step, over a period of 15 years, they developed a new path known as “Moldable Development”. To understand this new path, let us once again run through our steps to decision making in reverse order.
a) We abandon the monolithic tools which normally define our development experience. Instead, we choose to compose the experience out of micro tools, each of which is built to answer a specific question for a specific context.
b) Using our micro tools, we synthesize the information we require programmatically (a minimal sketch of what such a micro tool might look like is given after this list).
c) The information is then presented in generated views. These views feed directly from the information which we synthesize from our system through micro tools. For example, a graph (the generated view) of a collection of objects within the system and their relationships to other objects. All the information, wherever it is presented, can be considered both live and directly extracted from the system. It has not been manually created. It is not unlike a photograph replacing a painting as a means of documentation.
d) Our conversation about the system is now based directly on the system. A key part of the conversation is to codify that information into a model i.e. a representation of the live, running system itself.
e) These conversations are part of how we explore the system. We use the model of the system to ask new questions as we explore more of the system. To answer these additional questions, we constantly create new micro tools, synthesizing new information which we then codify into our model of the system. The more we explore, the more complete this model of the system becomes. This approach mimics geographical exploration. We follow a more structured approach starting with a blank model, and then systematically model the landscape through observation using our own theodolites (the micro tools).
f) We guide the assessment through explicit hypotheses and make our decisions based upon this exploration and the model of the system itself. The model is created directly from the system and contains no necessity for belief. We do not need to trust what we are being told as we can directly interrogate the model which itself is generated live from the system.
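To make this less abstract, here is a minimal sketch of what a micro tool might look like. It is written in Python purely for illustration (the toolkit discussed later, Glamorous Toolkit, takes a different form), and the log file name, the event format and the question being asked are all assumptions invented for the example. The point is the shape of the thing: a few lines of code, written against the system’s own output, that synthesize information programmatically and produce a generated view rather than a hand-drawn one.

```python
# A hypothetical micro tool: answer one narrow question about one system,
# directly from that system's own output rather than from a hand-drawn diagram.
# The log location, the event format and the question are assumptions for this sketch.
import json
from collections import Counter
from pathlib import Path


def load_events(log_path: Path) -> list[dict]:
    """Read one JSON event per line, as emitted by the (assumed) system."""
    return [json.loads(line) for line in log_path.read_text().splitlines() if line.strip()]


def calls_per_component(events: list[dict]) -> Counter:
    """Synthesize information programmatically: how often is each component invoked?"""
    return Counter(event["component"] for event in events)


def render_view(counts: Counter, top: int = 10) -> str:
    """A generated view: a small textual chart derived live from the data."""
    return "\n".join(
        f"{name:<30} {'#' * min(count, 60)} ({count})"
        for name, count in counts.most_common(top)
    )


if __name__ == "__main__":
    events = load_events(Path("service_events.log"))  # assumed log file
    print(render_view(calls_per_component(events)))
```

Nothing in this sketch is reusable as a generic product, and that is precisely the point: it is cheap enough to build that it can be thrown away once the question is answered, and the view it produces is always derived live from the system.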
The new practice has been added to Figure 2 to create Figure 4.
Starting at the bottom of the map above and moving upwards:
Our development experience changes from the use of a monolithic product from a tool vendor, designed to be generic, to a composable environment that you customize for your problem. Why is this composable environment considered to be more industrialised? If you use a toolkit (such as the open source Glamorous Toolkit), then many of the micro tools you will need to solve your problem have already been built by others and serve as examples. The process of manufacturing micro tools has also been industrialised within the toolkit. It is the digital equivalent of creating your own “customised” MINI Cooper by choosing from different but highly industrialised options such as roof color, chassis color, trim type and wheel options. The billions of different possible permutations give the illusion that the MINI is custom built, but in reality it is constructed from highly industrialised components that are chosen by you to fit your needs. This is the same design strategy behind composable micro tools.
Our synthesis changes from manual inspection of the system through monolithic tools to specific coding of tools that provide a direct, unmediated feed of the system. This synthesis requires that the cost of creating the tool is cheap enough that it has no material impact, e.g. built in minutes or hours rather than days, weeks or months. Decreasing the cost of creating a tool contributes to the industrialization of micro tools. Similar dynamics happened in the space of testing: when the cost of creating a single test became so small that it did not matter, automatic and industrialized testing became adopted at scale. We can regard a test as a form of micro tool that takes the execution of a system and transforms it into a red/yellow/green signal. The same idea can be extended to any tool, including visualizations or queries.
The use of small tests was behind an approach known as Test Driven Development (TDD), where we create small tests before we make code. Moldable Development is somewhat akin to TDD, but we now create small tools before we make decisions. Both approaches can be combined in something known as Example Driven Development, which we will discuss later in the book.
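The analogy between a test and a micro tool can be made concrete. Both run part of the system and reduce the result to an answer; they differ only in whether that answer is a pass/fail signal or a small view that informs a decision. The sketch below is illustrative only, and the function under study is invented for the example.

```python
# A test and a micro tool share the same shape: execute part of the system,
# reduce the result to an answer. Only the form of the answer differs.
# The function under study is invented for this sketch.


def apply_discount(price: float, percent: float) -> float:
    """Stand-in for a piece of the system under study."""
    return round(price * (1 - percent / 100), 2)


# 1. A test: reduces an execution to a red/green signal.
def test_discount_is_applied() -> None:
    assert apply_discount(100.0, 20) == 80.0


# 2. A micro tool: reduces executions to a small view that informs a decision,
#    e.g. "how much discount are we actually handing out for these inputs?"
def discount_report(orders: list[tuple[float, float]]) -> dict:
    discounts = [price - apply_discount(price, pct) for price, pct in orders]
    return {
        "orders": len(discounts),
        "total_discount": round(sum(discounts), 2),
        "max_discount": max(discounts, default=0.0),
    }


if __name__ == "__main__":
    test_discount_is_applied()  # green if it returns, red if it raises
    print(discount_report([(100.0, 20), (250.0, 10), (40.0, 5)]))
```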
Our information is no longer manual but generated live through micro tools. These micro tools in effect model the system.
Our conversation changes from a data centric approach, focused on information provided in a manual view (e.g. a graph composed in PowerPoint, a network diagram created in a CAD tool or a hand-drawn architectural diagram), to a model centric approach that is generated live from the system and can be directly interrogated.
Our exploration changes from being ad-hoc and highly dependent upon an engineer’s understanding of the system to a more formalized approach akin to geography. A skilled geographer with theodolites can map out any landscape, and they do not require a deep understanding of the landscape prior to creating a map. In the same manner, a skilled explorer can model any system without requiring a priori in-depth knowledge of that system.
Our assessment changes from gut feel (and the level of belief we have in what is being presented in the manual views) to one that is more hypothesis based, where questions can be directly interrogated against the model.
Example: Optimizing the data pipeline following the new path
To explore these ideas further, we return to our real world example. A large corporation wanted to optimize the performance of a central data pipeline by an order of magnitude. After years of effort, the data was moving through the pipeline at the same speed as before. This led to a situation where the business and engineering were in conflict and the pressure was mounting on engineering to find a solution.
When you find yourself spending effort that seems to have no effect on reality, it is plausible that your model of the system differs significantly from the actual system. This happens frequently when models are mental models informed by manually created diagrams. In such situations, you want to first improve your ability to see the actual system.
Our first hypothesis was that maybe their lack of results was because of an incomplete understanding of the system. Hence, our first question was “What does the system look like?”
In our case, we wanted a tool to give us an accurate perspective of the system using the data of the system itself. Alas, given the combination of technologies and the specificity of the business case (such as knowing which data points are being used in a specific marketing campaign), no such tool exists out of the box. We had to create it.
While building the model, we realized that the output from the first system did not match the data from the databases. This small observation led the team to discover a whole external 3rd party system that they were not aware of!
This is shown in figure 5.
This confirmed the hypothesis that their visibility into the system was partial and inaccurate, and that past decisions had been based upon this partial view. To create a high level aggregation, the tools had to start from the smallest scope, such as how a single property traverses the pipeline. These tools were built.
Figure 6 shows the generated view from one of the tools. A single input (the blue dot in the “Internal” system) impacted multiple other points of data (the red dots in the “Internal” system). These points of data then became inputs (blue dots) into the “External” system and impacted further data (the red dots in the “External”). Finally these points of data became inputs (blue dots) into their low code system.
As can be seen from figure 6, a single property (the blue dot in the “Internal” system) impacted multiple inputs (the blue dots in the “LowCode” system) due to the existence of the third party “External” system. Without an understanding of this flow, any attempt to optimize the entire system (from Internal to LowCode) was likely to be ineffective and prone to failure. This is exactly what they were experiencing.
Based on the property-level data lineage, we constructed tools to generate multiple pipeline overviews. One such overview quantified the amount of data produced along the pipeline and can be seen in Figure 7.
For each system, the visualization depicts groups of properties that are used or not used downstream in other systems. The red parts correspond to properties created but not used later on. In essence this was the “dark data” i.e. data that had no interaction with the rest of the system but added “mass” to the system.
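We cannot reproduce the client’s actual tools here, but the computation behind a view like figure 7 can be sketched. If, for each system in the pipeline, we know which properties it produces and which properties the downstream systems consume, then the “dark data” for that system is simply the set difference. The system names, property names and pipeline order below are invented for illustration, and real lineage extraction is of course far more involved than three hand-written sets.

```python
# Sketch of the computation behind a "dark data" overview in the style of figure 7:
# for each system, which of its produced properties are never consumed downstream?
# The systems, properties and pipeline order are invented for this illustration.

pipeline_order = ["Internal", "External", "LowCode"]

produced = {
    "Internal": {"customer_id", "segment", "score_a", "score_b", "legacy_flag"},
    "External": {"offer_id", "channel", "score_a_norm", "unused_ratio"},
    "LowCode":  {"campaign_id", "send_time"},
}

consumed = {
    "Internal": set(),  # nothing sits upstream of the first system
    "External": {"customer_id", "segment", "score_a"},
    "LowCode":  {"offer_id", "channel", "score_a_norm", "customer_id"},
}


def dark_data(system: str) -> set[str]:
    """Properties produced by `system` but never consumed by any downstream system."""
    downstream = pipeline_order[pipeline_order.index(system) + 1:]
    used_later = set().union(*(consumed[s] for s in downstream)) if downstream else set()
    return produced[system] - used_later


for system in pipeline_order[:-1]:  # the final system has no downstream consumers here
    unused = dark_data(system)
    share = len(unused) / len(produced[system])
    print(f"{system}: {len(unused)}/{len(produced[system])} properties unused downstream ({share:.0%})")
```

In the real case, sets like these would themselves be generated by other micro tools parsing the DSL, the database schemas and the low code scripts, rather than written by hand.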
These visualizations showed the team that they did indeed generate data they were not using, but more importantly, they made the team realize that the situation was not hopeless and that it was possible to get an overview of the system, even though it was made out of heterogeneous and old components. Based on this input, they decided to redo the pipeline, consolidating the information to eliminate the need for unnecessary transformations.
This new path of assessing the situation included:
a) A development experience consisting of custom tools built specifically for the system to answer an initial question of “Why are our services slow?”, which led to the question “Is there useless data generated?”, which in turn required the question “How is data traversing the pipeline?” to be answered.
b) The tools synthesized information from the data pipeline that existed within the entire system between the various components. This was continuous live data.
c) The tools consolidated information into generated views of the system such as figures 6 & 7. It should be noted that figures 6 & 7 aren’t manual drawings but are continuously generated live from the systems themselves.
d) The conversations were based upon the live information within the system rather than any erroneous belief as originally existed in figure 3 with the missing 3rd party component.
e) The exploration of the system consisted of a systematic approach of building a model of the system by asking questions and answering those questions through tools that used the system itself. To achieve this, it is essential to use some form of toolkit which enables people to rapidly build tools whilst the toolkit models the system under examination. In the above case, the tools behind figure 6 & 7 were built using Glamorous Toolkit.
In total, 54 distinct micro tools (i.e. at least 54 answers to questions) were needed to properly understand the system in order to answer the top level question of “Is there useless data generated?”. It took approximately 2 person-months of effort to solve the top level question and provide a model of the system, compared to the previous attempts which were measured in the region of a hundred person-years and many millions of dollars of direct costs (ignoring the opportunity cost of the time spent and the lack of a solution for the business problem). In person-months alone, this was a factor of 600x improvement, with an actual working solution to the problem rather than failure (a back-of-the-envelope check of this figure follows this list).
f) They assessed and made their decisions based upon accurate information about the system. This led to a situation where the business and engineering could come to an agreement based on a common understanding of the problem.
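For transparency, the 600x figure quoted above is simple person-month arithmetic, taking the earlier effort to be on the order of one hundred person-years as stated:

```latex
% Back-of-the-envelope check of the 600x figure (person-months only)
\[
  \frac{100 \ \text{person-years} \times 12 \ \text{months/year}}
       {2 \ \text{person-months}}
  = \frac{1200 \ \text{person-months}}{2 \ \text{person-months}}
  = 600
\]
```

It deliberately ignores the direct dollar costs and the opportunity cost mentioned above, both of which would push the comparison further in the same direction.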
How relevant is the example to you?
It’s tempting to look at the example and ask “How did they miss that 3rd party system?” or state that “We wouldn’t do something like that”. The team of engineers at the client was extremely capable and included people from a global leader in the systems integration industry. The engineers were all highly qualified, with degrees and industry qualifications. However, this system was large, with many pieces that had been built over many years. People had retired, and component systems had been forgotten about. The team was also siloed into different groups (an internal system team, a database team, a low code team and so on), and no overall picture existed.
Whilst this might appear to be an extreme case, we would argue, based upon experience, that highly skilled teams, often operating in silos, trying to manage complex and complicated legacy environments with a less than adequate model of the system is commonplace, if not the norm. It does not surprise us that 83% of data and legacy migration projects either fail or exceed their budgets or schedules. What surprises us is that 17% succeed. However, even if that 17% figure is correct, we would expect that the results could be achieved at a much faster speed and lower cost.
If you have any form of legacy estate, then we suspect the example is relevant to you. To test this, simply ask your engineering team to provide you with a model of the system. If the result is a manually drawn or PowerPoint diagram (such as figure 3), then ask for a comparison against the live system. If you cannot get one, then you are likely to be navigating blind.
Why not just use an AI to solve this?
AI is a broad field, and when most CIOs in 2024 talk about AI, they usually mean a specific set of transformer architectures such as LLMs (large language models) and LMMs (large multi-modal models) that are commonly found in code copilots. Whilst there is nothing wrong with using such models to assist in the effort (and the authors both do), as of December 2024 marketing research campaigns claim that copilots can help developers write code up to 1.55x (55%) faster. It should be remembered that this is marketing research, and it is often challenged by independent research which reports figures as low as 1.05x (5%).
Typically, using a model-centric approach as described here, we see improvements of orders of magnitude. For example, the case above showed 600x (60,000%). In this case we had something to measure the impact against (the previous time and costs) but often it is difficult to quantify the difference because there is no baseline to compare against. Even in this case, we don’t know how long they would have had to continue before they found an answer and we can only compare against the time they had taken before they gave up with the traditional approach. The main issues with the comparison are:
- It is rare to have a baseline to compare against.
- Often, even where data exists, we are not comparing like for like i.e. you have one group which has failed to find an answer or given up versus one where the answer is found.
- Small sample sizes. It’s not possible to discount the influence of luck, the skill of the engineers or the bias of the authors.
- Larger opportunity costs to the business due to delays are rarely factored into the equation.
As a result, we are not in a position to quantify the impact of a more moldable approach and instead must rely on our experience of orders of magnitude. However, if we can’t quantify the impact, then how do we know this path is right? The simple answer is that we don’t, but we do believe this is a better path. This is no different from one of the authors’ experience with Wardley Mapping back in 2005. There may be a better approach, but we’re not aware of it. All we can do is help others become aware of this moldable approach and ensure that the tools we use, as well as the knowledge, are open source.
That said, the two approaches (the use of AI models and Moldable Development) are not in opposition and can complement each other. We will discuss how later in this book.
Throughout this introduction we have discussed the role of questions and answers. In the next part of our journey we will explore this in more detail and why it matters.