Clean DDD Lessons: Caching in Persistence Adapter

George
Technical blog from UNIL engineering teams
Jun 28, 2024 · 8 min read

When using the Clean Architecture and DDD paradigms, the separation of the persistence layer from the domain model layer is a fundamental concern. Moreover, as we have previously discussed in detail elsewhere, there is little or no use for a full-blown ORM framework (e.g. Hibernate) when following CA. This is mainly because in CA we map persistence entities to domain entities in the gateway before returning these models to the Use Case layer. This, of course, cancels the benefits which an ORM framework can bring us with its lazy loading, since we must eagerly traverse the entire object graph of each database entity while performing the mapping.

In Clean DDD, we still use an ORM framework, but a lighter one: Spring Data JDBC. It loads all aggregates eagerly and does not use any sophisticated mechanisms like built-in caching or lazy loading. This has a significant impact on the design of our persistence schema. First of all, we must keep our aggregates as small as possible, avoiding complicated (many-to-many) relationships. We also need to reference foreign aggregate roots by their IDs only. This obliges us to load (and convert) by ourselves all aggregate roots related to the aggregate we are working with in a use case. Moreover, this loading of related aggregates will happen quite often, as we frequently need that information to present the results of a use case to the user.

All of the above considerations eventually point towards the necessity of introducing some sort of caching mechanism, which can greatly improve the performance of our CA application.

Persisting locations in the Cargo tracking example

To make this point clearer, let’s look at how we can use persistence caching in our CargoClean example application.

In this application we have the Cargo aggregate referencing the Location aggregate via the latter's natural ID, UnLocode. The majority of use cases will, of course, need information about one location or another: for example, when showing the origin or the destination city of a cargo we need to book, route, or track. So we often need to look up a Location aggregate by its UnLocode, or even to load several, if not all, locations at once before calling a presenter. On the other hand, locations themselves rarely change, which makes them perfect candidates for being stored in and looked up from a cache.
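In code, this ID-only reference means that the Cargo aggregate holds UnLocode values rather than Location objects. Here is a much-simplified sketch of this idea; the field names are purely illustrative and the real aggregate in CargoClean has more structure.

// Simplified illustration of referencing another aggregate by its natural ID:
// Cargo stores UnLocodes, never Location objects themselves.
public class Cargo {

    private TrackingId trackingId;

    // origin of the cargo, referenced by the Location's natural ID
    private UnLocode origin;

    // further fields (route specification, delivery, itinerary, ...) omitted
}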

For our example, we have introduced a way to simulate slow loading and conversion of location database entities into Location models in our persistence gateway. This allows us to measure the impact which caching Locations can have on the performance of our application. This feature is enabled via custom configuration properties.

cargo:
slow-load:
enabled: true
delay-millis: 150

This will simulate a delay of 150 ms for each load of a location database entity and its conversion into a domain model (Location). In reality, the delay may come from a call to some external system during the conversion, or from the fact that loading the database entity itself makes several requests to the database: e.g. for the eager loading of one-to-many relations by Spring Data JDBC.
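These settings, together with the cache settings used further below, are bound to a custom properties class. Here is a rough sketch of what CargoCleanProperties could look like; the field names are inferred from how the properties are used in the code that follows, so the real class in the example project may differ.

// A sketch of the custom configuration properties (assumed shape, inferred
// from the getters used later: getLocationCache().getName(), getTtl(), etc.)
@ConfigurationProperties(prefix = "cargo")
@Getter
@Setter
public class CargoCleanProperties {

    // simulation of slow loading/conversion of Location DB entities
    private SlowLoad slowLoad = new SlowLoad();

    // settings for the cache of Location models
    private LocationCache locationCache = new LocationCache();

    @Getter
    @Setter
    public static class SlowLoad {
        private boolean enabled = false;
        private int delayMillis = 150;
    }

    @Getter
    @Setter
    public static class LocationCache {
        private String name = "locationCache";
        private int initCapacity = 100;
        private long maximumSize = 1000;
        private Duration ttl = Duration.ofMinutes(10);
    }
}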

Trying to book a new cargo, for example, will now take several seconds, which is a considerable downgrade in the performance of the application.

Configuring Caffeine cache

We introduce the popular Caffeine caching library in order to cache locations in memory. The first thing we need to do is to configure a CacheManager bean in our application context, which acts as a bridge between Spring's caching SPI and Caffeine as the cache provider.

@Configuration
public class CacheConfig {

    /*
        Point of interest:
        -----------------
        We are creating "CacheManager" (Spring abstraction) bean backed
        by Caffeine cache as our cache provider. Then we register a cache
        for each model (aggregate root) which we want to cache.
     */

    @Bean("cacheManager")
    @Qualifier("caffeine")
    @Primary
    public CacheManager caffeineCacheManager(CargoCleanProperties props) {

        final CaffeineCacheManager cacheManager = new CaffeineCacheManager();

        cacheManager.setCacheNames(Collections.emptyList());

        // make cache for locations
        cacheManager.registerCustomCache(props.getLocationCache().getName(),
                makeCache(props));

        return cacheManager;
    }

    private Cache<Object, Object> makeCache(CargoCleanProperties props) {
        return Caffeine.newBuilder()
                .initialCapacity(props.getLocationCache().getInitCapacity())
                .maximumSize(props.getLocationCache().getMaximumSize())
                .expireAfterWrite(props.getLocationCache().getTtl())
                .build();
    }

}

Of course, if we were using another caching mechanism, such as Redisson for caching to a centralized Redis store (e.g. when caching across several replicas of the application in a containerized environment), we would only have to slightly adapt the method which instantiates the CacheManager bean.
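As an illustration only (this is not part of the example project), a Redis-backed CacheManager could be wired roughly like this, assuming the Redisson dependency is on the classpath. Redisson's CacheConfig class is fully qualified in the sketch to avoid the clash with our own CacheConfig configuration class.

// Sketch of a Redis-backed alternative using Redisson (illustration only):
// the gateway code keeps working against the same CacheManager abstraction.
@Bean("cacheManager")
@Qualifier("redisson")
public CacheManager redissonCacheManager(RedissonClient redissonClient,
                                         CargoCleanProperties props) {
    Map<String, org.redisson.spring.cache.CacheConfig> config = new HashMap<>();
    // TTL and max-idle time are specified in milliseconds (0 means "no limit")
    config.put(props.getLocationCache().getName(),
            new org.redisson.spring.cache.CacheConfig(
                    props.getLocationCache().getTtl().toMillis(), 0));
    return new RedissonSpringCacheManager(redissonClient, config);
}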

Usually, a Spring Boot application will use the very handy abstraction available with the EnableCaching annotation. We are not, however, using this annotation, nor any Cacheable annotations. This is because we need more flexibility in accessing each of the cached Locations; this point will be made clear by the examples below. The important thing is that we are still using all the power of Spring's CacheManager SPI, while keeping the fine-grained control necessary to introduce caching exactly where we need it.
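To illustrate the difference, an annotation-driven version would look roughly like this (a hypothetical method, not part of the example project): the entire result list becomes a single cache entry keyed by the method arguments, and we lose the ability to cache and refresh individual Locations.

// Hypothetical annotation-driven alternative (NOT used in CargoClean):
// the whole List<Location> is cached as one entry, so a change to a single
// Location would require invalidating the entire result.
@Cacheable(cacheNames = "locationCache")
public List<Location> allLocations() {
    return loadAndConvertAllLocations(); // placeholder for the actual loading logic
}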

Caching locations in the gateway operations

The most important place where we introduce our cache is the method in the gateway which loads, converts, and returns all instances of Location. Here are the relevant portions of the DbPersistenceGateway adapter.

@Service
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
public class DbPersistenceGateway implements PersistenceGatewayOutputPort {

    // code omitted for brevity

    // Spring Data JDBC repository for LocationDbEntity aggregates
    LocationDbEntityRepository locationRepository;

    NamedParameterJdbcOperations queryTemplate;
    DbEntityMapper dbMapper;
    CacheManager cacheManager;

    // custom configuration properties (cache name, sizes, TTL)
    CargoCleanProperties props;

    @Transactional(readOnly = true)
    @Override
    public List<Location> allLocations() {
        try {

            /*
                Point of interest:
                -----------------
                Query for UnLocodes (IDs) of all locations, without loading
                Location aggregates themselves.
             */
            List<UnLocode> unlocodes = queryTemplate.query(AllUnlocodesQueryRow.SQL,
                            new BeanPropertyRowMapper<>(AllUnlocodesQueryRow.class))
                    .stream().map(AllUnlocodesQueryRow::getUnlocode)
                    .map(UnLocode::of)
                    .toList();

            /*
                For each UnLocode, first see if there is a corresponding Location
                already in the cache; if not, load the Location DB entity and
                convert it to a Location model, updating the cache in the process.
             */
            Cache cache = getLocationCache();
            return unlocodes.stream()
                    .map(unLocode -> cache.get(new SimpleKey(unLocode),
                            () -> locationRepository.findById(unLocode.getCode())
                                    .map(dbMapper::convert)
                                    .orElseThrow(() -> new PersistenceOperationError(
                                            "No location found for %s in the database".formatted(unLocode)))
                    )).toList();
        } catch (Exception e) {
            throw new PersistenceOperationError("Cannot retrieve all locations", e);
        }
    }

    private Cache getLocationCache() {
        String cacheName = props.getLocationCache().getName();
        return Optional.ofNullable(cacheManager.getCache(cacheName))
                .orElseThrow(() -> new IllegalStateException("Cache %s not found".formatted(cacheName)));
    }

    // code omitted for brevity
}

Without caching, we used to simply call the findAll() method of the LocationDbEntityRepository provided by Spring Data JDBC to eagerly load and then convert all LocationDbEntity instances into the corresponding Location models (a sketch of that earlier version is shown after the list below). Now we have to do a bit more work. Here is how we proceed:

  1. We load only the UnLocodes (IDs) of all locations in the database by issuing a simple SQL query. This query should execute really fast, since we do not need to load a complicated graph of objects.
  2. We then get the cache for the locations from the CacheManager wired into our gateway.
  3. We run through each UnLocode and check whether a corresponding Location entry already exists in the cache. If it does, we use it for the resulting list of returned locations.
  4. If there is no entry in the cache for a UnLocode (either the entry was never there or it has expired), we perform a callback to actually load the corresponding LocationDbEntity from the repository and convert it to an instance of the Location model.
  5. We return a list of all locations which were either found in the cache or loaded (and converted) from the database.
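For comparison, the pre-caching version of allLocations() within DbPersistenceGateway looked roughly like the following sketch (assuming the repository exposes the standard findAll() from Spring Data's CrudRepository; the exact code may have differed).

// Roughly how allLocations() could have looked before caching (a sketch):
// eagerly load every LocationDbEntity and convert it to a Location model.
@Transactional(readOnly = true)
@Override
public List<Location> allLocations() {
    try {
        return StreamSupport.stream(locationRepository.findAll().spliterator(), false)
                .map(dbMapper::convert)
                .toList();
    } catch (Exception e) {
        throw new PersistenceOperationError("Cannot retrieve all locations", e);
    }
}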

A similar approach is used in the obtainLocationByUnlocode() method, where we are interested in a single Location only.
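Here is a sketch of how that single-location lookup could use the same cache; the actual method in the example project may differ slightly in its signature and error handling.

// Sketch of the single-location lookup using the same cache-or-load pattern
@Transactional(readOnly = true)
@Override
public Location obtainLocationByUnlocode(UnLocode unLocode) {
    try {
        return getLocationCache().get(new SimpleKey(unLocode),
                () -> locationRepository.findById(unLocode.getCode())
                        .map(dbMapper::convert)
                        .orElseThrow(() -> new PersistenceOperationError(
                                "No location found for %s in the database".formatted(unLocode))));
    } catch (Exception e) {
        throw new PersistenceOperationError("Cannot obtain location %s".formatted(unLocode), e);
    }
}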

Updating the cache

To make things more interesting, we have introduced an additional use case in our example application: "Update location". A manager can choose one of the existing locations in the system and update its name. When the updated Location model needs to be persisted to the database, the gateway also needs to update the cache. This is how it is done:

@Transactional
@Override
public void saveLocation(Location location) {
    try {
        // save location and update the cache
        locationRepository.save(dbMapper.convert(location));
        getLocationCache().put(new SimpleKey(location.getUnlocode()), location);
    } catch (Exception e) {
        throw new PersistenceOperationError("Error when saving location %s"
                .formatted(location.getUnlocode()), e);
    }
}
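Note that we choose to put the freshly saved Location into the cache rather than simply evicting the stale entry. Either approach keeps the cache consistent with the database; putting the new value avoids an extra database round trip on the next read of that location.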

Additional considerations

Using the caching mechanism described above allows for a very flexible and configurable approach to caching our models. We invite the reader to run the example application with a loading delay set in order to observe the impact of using this method. After the initial load of all locations (in the booking use case, for example), every access to any of the locations should benefit from caching. There are several additional points which are worth mentioning.

It is a best practice to use Serializable keys when storing values in a cache. Since our UnLocode is not serializable, we wrap the object used as a key in an instance of org.springframework.cache.interceptor.SimpleKey.

If a transactional method in our gateway fails, or if the use case fails during execution within a transactional boundary, we may end up with an inconsistency between the values stored in the cache and the state of the corresponding entities in the database. This is because a failure of the transactional method will roll back any changes in the database, while any related changes in the cache will remain intact. One simple way to remedy this is to clear all caches if we detect a rollback. In Clean Architecture we control our transactions programmatically, allowing each use case to specify the exact consistency boundary of its business operation. So we intervene in our SpringTransactionAdapter to register a callback which will be executed upon a rollback.

@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
@RequiredArgsConstructor
@Slf4j
@Service
public class SpringTransactionAdapter implements TransactionOperationsOutputPort {

    TransactionTemplate transactionTemplate;
    CacheManager cacheManager;

    // code omitted for brevity (including the set-up of readOnlyTransactionTemplate)

    @Override
    public void doInTransaction(boolean readOnly, TransactionRunnableWithoutResult runnableWithoutResult) {
        log.debug("[Transaction] Executing runnable (without a result) in a transaction, read-only: {}", readOnly);
        if (readOnly) {
            readOnlyTransactionTemplate.executeWithoutResult(transactionStatus -> runnableWithoutResult.run());
        } else {
            transactionTemplate.executeWithoutResult(transactionStatus -> {
                /*
                    Point of interest:
                    -----------------
                    We need to register a callback which will clear
                    caches in case of a rollback of the current transaction.
                 */
                registerCacheInvalidationOnRollback();
                runnableWithoutResult.run();
            });
        }
    }

    /**
     * Registers a custom transaction synchronization callback which will
     * clear all caches after a rollback of a transaction.
     */
    private void registerCacheInvalidationOnRollback() {
        if (!TransactionSynchronizationManager.isActualTransactionActive()) {
            return;
        }
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
            @Override
            public void afterCompletion(int status) {
                if (status == TransactionSynchronization.STATUS_ROLLED_BACK) {
                    log.debug("[Transaction] [Cache] Clearing all caches on rollback");
                    CacheUtils.clearAllCaches(cacheManager);
                }
            }
        });
    }

    // code omitted for brevity
}
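The CacheUtils.clearAllCaches() helper is a small utility of the example project; a minimal sketch of what such a helper could look like, assuming it simply clears every cache known to the CacheManager, is given below.

// Minimal sketch of a cache-clearing utility (assumed implementation):
// clears every org.springframework.cache.Cache registered with the manager.
public final class CacheUtils {

    private CacheUtils() {
    }

    public static void clearAllCaches(CacheManager cacheManager) {
        cacheManager.getCacheNames().forEach(cacheName ->
                Optional.ofNullable(cacheManager.getCache(cacheName))
                        .ifPresent(Cache::clear));
    }
}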

Conclusion

We have looked in detail at how we can introduce caching into our CA application. We have considered using a cache of domain entities in the persistence gateway. The caching relies on Spring's CacheManager abstraction, used together with a Caffeine cache. We have also looked at how aggregate instances can be looked up from and stored in the cache by the persistence adapter.
