Three of us from Redgate went to SC London 2019. Here are some key things we learnt.
What Michael learnt
I enjoyed the talk The Gordian Knot by Alberto Brandolini. Alberto’s talk linked the research from Accelerate with design choices and how those design choices impact our culture.
tl;dr Software design decisions around bounded contexts define interactions that shape our engineering culture. Each interaction has its own “winning behaviour”, and people act accordingly. The bounded context is the unit of consistency, purpose, responsibility and pride.
Alberto reminded us of a couple of key points from Accelerate — loose coupling was the main differentiating factor, not technology, and that we should split around behaviour instead of data.
Alberto spoke about Drive by Daniel H. Pink. Drive is made up of three components — autonomy, mastery and purpose. Dan Pink talked about these ideas at Business of Software 2012 (video). As a quick review:
- Autonomy — Having a sense of control over what you’re doing
- Mastery — Getting better at something that matters
- Purpose — Why you are doing something in the first place
Alberto applied these principles to software teams, where team members can vary from having a strong drive to considering their work as “just a job”. He said that if you have bounded contexts as units of clear responsibilities, then software teams have the autonomy to take responsibility for their own changes and can achieve a sense of mastery within that bounded context. Here bounded context means a logical boundary in a software system (as per DDD).
- Autonomy — Write loosely coupled software that has few external dependencies
- Mastery — Allow the team to decide on trade-offs
- Purpose — Diminished by the number of meetings required to agree on decisions
Alberto compared the time people typically stay in a software team with how long it takes to enjoy the benefits of improving the code, and found that systems do not improve if the feedback loop is too long.
People won’t improve a system if they won’t stay around long enough to see the benefits. — Have you repainted a hotel room?
Long feedback loops can end up being reward deprivation systems. For instance:
- Releasing after 15 months, tentatively hoping for positive feedback, while receiving random negative feedback in the meantime for things that aren’t your fault
- Having to work on legacy microservices that are coupled and guaranteed to fail
If we spend a long time working on a project without releasing, there is a risk of causing a reward deprivation system.
What Ben learnt
Trisha Gee presented an interesting talk on something that I’d previously not given much thought to — reading code is harder than writing it.
Programming languages are the only languages that we learn to write before we learn to read. Since we actually read code more frequently than we write it, we should place greater emphasis on the skill of reading.
Why do we hate reading other people’s code? There are a couple of theories that come into play here. First is the Mere-exposure effect, which explains that people develop a preference for things merely because they’re familiar with them. Second is the IKEA effect, which explains that people place a disproportionately high value on things that they’ve partially created.
Trisha gave several techniques and tips to help develop the skill of reading code. We should remember that we’re not reviewing the code. Don’t judge it; accept it for what it is. We can make notes about the code: questions, discoveries and assumptions. We can even draw diagrams. When we are simply trying to understand the code we shouldn’t change it. If you must write something, write tests. Code is meant to be run, so this is a good way to understand it. We should use the features of our IDE to help us, such as watches and expression evaluation.
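The “write tests to understand it” tip can be sketched as a set of learning tests. This is a hypothetical example of my own (the `parse_version` function is not from the talk): each assertion records an assumption or discovery about unfamiliar code, and running them checks our understanding without changing the code itself.

```python
def parse_version(s):
    """Imagine this is unfamiliar code we're trying to understand
    (a hypothetical example, not from the talk)."""
    parts = s.lstrip("v").split(".")
    return tuple(int(p) for p in parts)

# Learning tests: each assertion records something we believe about the
# code; running them confirms or corrects that belief.
assert parse_version("v1.2.3") == (1, 2, 3)   # assumption: leading "v" is stripped
assert parse_version("1.2") == (1, 2)         # discovery: two-part versions work too
print("assumptions confirmed")
```

If an assertion fails, that’s not a bug report — it’s a correction to our mental model, which is exactly the point of reading code this way.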
We should also be observing the shape and dialect of the code. Future readers will be thankful if our contributions follow the existing style.
What Mark learnt
Several talks mentioned interesting testing techniques, particularly TDD with Petri Nets by Aslak Hellesoy and Testing Microservices by Daniel Bryant.
A Petri net is a mathematical construct like a state machine, but with some extra complexity. Petri nets model “tokens” moving around a system, and certain transitions can be enabled or disabled depending on the positions of the tokens. Petri nets can be much better at modelling distributed or concurrent systems than standard state machines, since they can represent much more information in far fewer states. Aslak showed a demo with a couple of Petri nets modelling simple systems, and demonstrated how to implement some basic primitives like a critical section.
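To make the idea concrete, here is a minimal Petri net sketch of my own (not Aslak’s implementation): places hold tokens, a transition is enabled when all its input places have enough tokens, and firing it moves tokens from inputs to outputs. The example models a critical section guarded by a single mutex token, so two processes can never be in it at once.

```python
from collections import Counter

class PetriNet:
    """Minimal Petri net: places hold tokens, transitions move them."""
    def __init__(self, marking):
        self.marking = Counter(marking)   # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (Counter(inputs), Counter(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        self.marking -= inputs   # consume tokens from input places
        self.marking += outputs  # produce tokens in output places

# Two processes competing for one mutex token (a critical section).
net = PetriNet({"idle_a": 1, "idle_b": 1, "mutex": 1})
net.add_transition("enter_a", ["idle_a", "mutex"], ["crit_a"])
net.add_transition("exit_a",  ["crit_a"], ["idle_a", "mutex"])
net.add_transition("enter_b", ["idle_b", "mutex"], ["crit_b"])
net.add_transition("exit_b",  ["crit_b"], ["idle_b", "mutex"])

net.fire("enter_a")
print(net.enabled("enter_b"))  # False: the mutex token is taken
net.fire("exit_a")
print(net.enabled("enter_b"))  # True: the mutex token is back
```

For model-based testing, a test runner could walk the net, fire enabled transitions against the real application, and check that the application state matches the marking.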
Petri nets look like a useful tool for doing model-based testing, and I’m hoping to spend some time trying to implement some application tests based on the idea, but their relative obscurity compared to other tools might make adoption tricky.
Daniel Bryant talked about different testing techniques for microservices. While not directly applicable to the work we’re doing, the way we’re splitting more of our code into separate command-line tools (and potentially platform capabilities) means we’ll need to do similar thinking ourselves. Daniel started by talking about how the traditional test pyramid becomes a lot less relevant with microservices, and how integration testing and testing in production (monitoring, chaos engineering, etc) become a lot more important. Mocking out dependencies becomes necessary, but managing those mocks to ensure they’re accurate is even more critical. Tools like Hoverfly or Pact can help by recording network interactions, replaying them for tests and then sharing the generated simulation between teams for contract testing.
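The record-and-replay idea behind those tools can be sketched by hand. This is a simplified illustration of the concept, not the Hoverfly or Pact API; all names here (`StubProvider`, `UserClient`, the JSON shape) are made up. A provider’s response is “recorded” once, then replayed as a stub so the consumer can be tested without the real service running.

```python
import json

# A "recorded" interaction, as a record-and-replay tool might capture
# it from real traffic between consumer and provider.
RECORDED = {
    ("GET", "/users/42"): {"status": 200,
                           "body": json.dumps({"id": 42, "name": "Ada"})},
}

class StubProvider:
    """Replays recorded responses instead of calling the real service."""
    def request(self, method, path):
        default = {"status": 404, "body": ""}
        return RECORDED.get((method, path), default)

class UserClient:
    """The consumer under test; it only depends on the provider interface."""
    def __init__(self, provider):
        self.provider = provider

    def get_user_name(self, user_id):
        resp = self.provider.request("GET", f"/users/{user_id}")
        if resp["status"] != 200:
            return None
        return json.loads(resp["body"])["name"]

client = UserClient(StubProvider())
print(client.get_user_name(42))   # Ada
print(client.get_user_name(99))   # None: no recorded interaction
```

The “managing mocks” problem Daniel raised shows up immediately: if the real provider changes its response shape, the recording must be refreshed, which is why sharing and verifying the simulation between teams matters.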
The full talks are now available on YouTube if you want to see more.