111th Monthly Technical Session

Muhammad Furqan Habibi
Published in henngeblog
6 min read · Mar 18, 2024

Howdy, dear engineers and developers!

Here at HENNGE, we are passionate about sharing knowledge, especially about the latest and greatest in the world of technology. This time around, we have diverse topics ranging from frontend and backend to DevOps and developer experience in general. Some of these topics include React Server Components and React Actions, flaky tests, mocking in Python tests, and many others.

Parallel Universe of Frontend Development | Ray

In this session, Ray proposes a fascinating thought experiment in which frontend web applications written in JavaScript/TypeScript are developed with not only client-side code but also some server-side (backend) code. This is made possible by new techniques such as React Server Components and React Actions. The development process is similar in nature to that of early web applications back in the early 2000s, but uses modern JavaScript/React technologies instead.

Ray then goes on to give an example of how the HENNGE Access Control frontend could be developed with this process instead of the current implementation, which is a frontend-only Single Page Application. He noted that some complexities in the current implementation, especially around data fetching, could be solved more elegantly using React Server Components and React Actions. As an example, he showed how to fetch all users' data from the database and render it dynamically on the frontend: a server component runs in the backend, fetches the data, massages it, and passes it to the frontend, all in React. This simplifies the data fetching process considerably; where the SPA may need up to 10 fetches, this approach needs only a single one.
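
The single-fetch idea itself is not React-specific. As a language-neutral illustration (not the talk's code), here is a minimal sketch of the same "fetch on the server, render once" pattern in Python with Flask; the fetch_users helper and the user fields are hypothetical stand-ins for the real Access Control data layer.

# server_rendered_users.py — a sketch of server-side fetching and rendering.
from flask import Flask, render_template_string

app = Flask(__name__)

TEMPLATE = """
<ul>
{% for user in users %}
  <li>{{ user["name"] }} ({{ user["group"] }})</li>
{% endfor %}
</ul>
"""

def fetch_users():
    # Hypothetical data layer: one round trip, done entirely on the server.
    return [{"name": "Alice", "group": "Admins"}, {"name": "Bob", "group": "Users"}]

@app.route("/users")
def users_page():
    # The server fetches and massages the data, then ships finished HTML;
    # the browser makes a single request instead of one fetch per resource.
    users = fetch_users()
    return render_template_string(TEMPLATE, users=users)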

In conclusion, Ray summarized the pros and cons of server-side rendering compared to a fully client-side rendering solution such as an SPA:

Pros:

  • Can potentially boost productivity thanks to a simpler data fetching strategy
  • Reduces the need for deep knowledge of frontend technology
  • Still affords the possibility of employing complex frontend techniques

Cons:

  • Incurs extra running costs to host the backend part
  • Complexity moves to the server/client boundary
  • Still relies on experimental React functionality

Flaky Tests: What are they and how to manage them | Chaw Chit

In this session, Chaw Chit shared her experience with flaky tests and how to manage them. She started by identifying the usual causes of a flaky test, such as:

  • Variability of the execution environment
  • Variability of setup and teardown steps
  • Inconsistent test data
  • Variability in the behavior of asynchronous code
  • Non-deterministic parts of the code
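
To make the last two causes concrete, here is a minimal sketch (not from the talk) of two classically flaky tests: one racing an asynchronous worker, one depending on randomness.

# flaky_examples.py — illustrative only; these pass or fail depending on
# scheduling and luck.
import random
import threading
import time

def test_async_timing_is_flaky():
    results = []
    worker = threading.Thread(target=lambda: (time.sleep(0.01), results.append("done")))
    worker.start()
    time.sleep(0.02)  # hoping the worker has finished; under load it may not have
    assert results == ["done"]

def test_randomness_is_flaky():
    # Passes five times out of six.
    assert random.randint(1, 6) != 3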

These flaky tests can lead to significant problems in development, such as:

  • Additional debugging work
    Developers might end up spending time fixing an apparent issue that does not actually exist.
  • Reduced productivity
    Developers need to perform extra testing and rerun the tests multiple times, wasting development time.

Chaw Chit then offered the following strategies to tackle the issue of flaky tests:

  1. Test isolation
    - Remove reliance on test order
    - Reset the state between tests
    - Promote consistent and reliable test execution
  2. Consistent test data
    - Inconsistencies can lead to different test outcomes
    - Isolate test data
  3. Temporarily disabling tests
    - @pytest.mark.skip
    - Only as a temporary measure, with a fix to follow
    - Otherwise, you could miss out on a real bug
  4. Mark as flaky (see the sketch after this list)
    - @pytest.mark.flaky(reruns=5) with the pytest-rerunfailures plugin
    - @flaky(max_runs=10) with the flaky plugin
    - Reruns the failing test automatically
    - Increases the chance of a successful test pass
  5. Monitoring
    - Check the test suite for new flaky tests
    - Check test reliability
  6. Dashboard
    - Use a CI service such as CircleCI
    - Show which tests are flaky
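
As a minimal sketch of strategies 3 and 4 (assuming pytest-rerunfailures is installed; the ticket reference is a hypothetical placeholder):

# test_flaky_markers.py — illustrative usage of the markers above.
import random

import pytest

@pytest.mark.skip(reason="Flaky: tracked in TICKET-123, fix the race condition first")
def test_disabled_for_now():
    ...

@pytest.mark.flaky(reruns=5)  # pytest-rerunfailures: retry up to 5 times on failure
def test_eventually_passes():
    assert random.randint(1, 6) != 3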

Developer Experience | Kelvin

This talk is a continuation of a previous talk by Kelvin at MTS #95 titled Thinking About the Developer Experience, where he gave a high-level overview of developer experience in an organization. This time around, Kelvin shared a more in-depth look at what developer experience is, why we should care as developers, and some personal insights on improving developer experience at HENNGE.

Kelvin defined developer experience as three interconnected facets:

  • Productivity
    How simply or quickly a change can be made to the codebase
  • Impact
    How frictionless it is to move from idea to production
  • Satisfaction
    How the environment, workflows and tools affect developer happiness

These facets can be identified by asking the developers questions such as: Is the tool making my job harder or easier? Is the environment helping me focus? Is the process eliminating ways in which I can make mistakes? Is the system giving me the confidence to do complex things? How do I feel about this?

Kelvin then dived into why developer experience is important at all. He pointed out some benefits of a great developer experience: it can drive developer productivity, resulting in a shorter time to market; it can positively impact revenue growth; and it helps in attracting and retaining customers while increasing customer satisfaction.

Kelvin then shared some key metrics, the four DORA metrics, that can be used to measure how good the developer experience is in a particular organization:

  • Deployment frequency
    How frequently new releases are shipped
  • Lead time for changes
    Time taken from change request to deployment
  • Mean time to recovery
    Average time it takes to recover from a failure
  • Change failure rate
    Percentage of changes that result in a failure
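
As a back-of-the-envelope sketch (hypothetical data model, not from the talk), the four metrics could be computed from a simple deployment log like this:

# dora_sketch.py — illustrative computation of the four metrics.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deployment:
    requested_at: datetime  # when the change was requested/merged
    deployed_at: datetime   # when it reached production
    failed: bool            # did this change cause a failure?
    recovered_at: datetime | None = None  # when service was restored, if it failed

def dora_metrics(deployments: list[Deployment], period_days: int) -> dict:
    assert deployments, "need at least one deployment"
    lead_times = [d.deployed_at - d.requested_at for d in deployments]
    failures = [d for d in deployments if d.failed]
    recoveries = [d.recovered_at - d.deployed_at for d in failures if d.recovered_at]
    return {
        "deployment_frequency": len(deployments) / period_days,  # deploys per day
        "lead_time_for_changes": sum(lead_times, timedelta()) / len(lead_times),
        "mean_time_to_recovery": sum(recoveries, timedelta()) / len(recoveries) if recoveries else None,
        "change_failure_rate": len(failures) / len(deployments),
    }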

Lastly, Kelvin shared his personal insights into improving the developer experience at his team:

  • See something, do something
    - Ask for clarifications and record them for the next guy
    - Fix/update/delete things that need fixing
  • Build for the new guy
    - Always do things assuming that a new guy joins tomorrow and you won’t be around
    - Preemptively handle FAQs (best if you don’t even need one)
    - Share scripts to do common things so there’s no secret sauce
    - Reduce the time to a first E2E result
  • Start small
    - Building good feedback loops is complex; start small and slowly grow them
  • Tools are just tools
    - Don’t let tools determine the process

A Mockery Of Python | Jonas

In this fascinating session, Jonas shared a particularly “weird” behavior of monkeypatching in Python testing. He started by defining the problem statement, which consists of three small Python modules, utils.py, big.py, and small.py, inside a package called mockery, and a test module test_random_numbers.py.

# utils.py
import random

def get_random_number():
    return random.randint(1, 6)

# big.py
from . import utils

def get_big_random_number():
    return utils.get_random_number() + 1

# small.py
from .utils import get_random_number

def get_small_random_number():
    return get_random_number() - 1

# test_random_numbers.py
# imports (omitted on the slide) made explicit:
from mockery.big import get_big_random_number
from mockery.small import get_small_random_number

def test_big(monkeypatch):
    monkeypatch.setattr("mockery.utils.get_random_number", lambda: 3)
    assert get_big_random_number() == 4

def test_small(monkeypatch):
    monkeypatch.setattr("mockery.utils.get_random_number", lambda: 3)
    assert get_small_random_number() == 2

When the test code is run as is, one would expect both tests to always pass, since get_random_number is mocked to return 3 and each assertion is set accordingly. However, this is not the case: most of the time, the second test case fails spectacularly.

Failing test

To understand this, Jonas went on to explain in depth how monkeypatching and Python’s import statement work; the failing test is the result of the combination of the two. monkeypatch.setattr("mockery.utils.get_random_number", ...) works by rebinding the get_random_number attribute on the mockery.utils module object to the given mock. big.py is unaffected by the subtlety: it imports the whole module with from . import utils and looks up utils.get_random_number at call time, so it finds the patched attribute. small.py, however, uses from .utils import get_random_number, which copies the function reference into small.py’s own namespace at import time. That assignment copies a reference rather than the function itself, but it is still a separate binding: rebinding mockery.utils.get_random_number afterwards does not touch small.py’s name, so small.py keeps calling the original, unpatched function.
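
The difference is easiest to see with plain assignments. A minimal sketch, assuming the same mockery package as above:

# binding_demo.py — what the two import styles actually bind.
import mockery.utils

utils = mockery.utils                                # big.py style: a module reference
get_random_number = mockery.utils.get_random_number  # small.py style: a function reference

# What monkeypatch.setattr does under the hood: rebind the module attribute.
mockery.utils.get_random_number = lambda: 3

print(utils.get_random_number())  # 3 — looked up on the module at call time
print(get_random_number())        # 1..6 — a stale reference to the original function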

Solving it is actually quite simple: patch the imported name in each consuming module instead of patching the original module. The working test code looks like this:

def test_big_fixed(monkeypatch):
    # mockery.big.utils is the same module object as mockery.utils,
    # so this is equivalent to the original (already working) patch.
    monkeypatch.setattr("mockery.big.utils.get_random_number", lambda: 3)
    assert get_big_random_number() == 4

def test_small_fixed(monkeypatch):
    # Patch the copy of the name that small.py actually calls.
    monkeypatch.setattr("mockery.small.get_random_number", lambda: 3)
    assert get_small_random_number() == 2

And that was all the sessions for this edition of MTS. We learned about a range of topics, from frontend and backend to DevOps and developer experience in general. See you in the next edition of MTS!
