Product (Risk, Model, and Data) Classifications… an analogical tale of whisky.
Classifying whisky is complex. There are many dimensions along which to group or cluster the varieties, e.g.:
- By ingredient composition (peat, malted barley, other grain, yeast, local water, local air, barrel, …),
- By geography (which informs the flavour of some of the ingredients, and the possible weather conditions),
- By distillation and fermentation methods (still types, number of distillations, aeration, temperature, …),
- By barreling method (barrel charring, barrel position/movement during storage),
- By aging (which informs both the chemical processes allowed, and the exposure to storage variables like weather and unplanned movement),
- By vatting or blending — and marketing — method (how this heaving mass of variables is massaged into something deemed acceptably consistent),
- Or “simply”, by taste (end product — and the palate/type of day/big hairy stressors it meets)
This is similar to the difficulty that arises in classifying complex financial products like derivatives (and even not so complex ones). There isn’t one single grouping dimension of universal relevance. Instead there are several. Several classification contexts that inform several grouping dimensions — even when a dominant context like taste or flavour (or trade booking, or made-available-to-trade, or risk hedging, or portfolio construction) exists.
The ability to reformulate grouping dimensions is a function of the granularity of attributes that can be composed into the financial product (and the richness of their descriptions).
Basically, show me your meta-data, primitives, and combinators, and I’ll tell you what your processing is capable of.
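A toy sketch of that claim in Python. The whisky records and attribute names below are invented for illustration; the point is that once the meta-data is rich enough, any tuple of attributes can become a grouping dimension on demand:

```python
from collections import defaultdict

# Hypothetical, minimal whisky "meta-data" records. The richer the
# attributes, the more grouping dimensions can be derived from them.
whiskies = [
    {"name": "A", "region": "Islay",    "peated": True,  "cask": "sherry",  "age": 12},
    {"name": "B", "region": "Speyside", "peated": False, "cask": "bourbon", "age": 12},
    {"name": "C", "region": "Islay",    "peated": True,  "cask": "bourbon", "age": 18},
    {"name": "D", "region": "Speyside", "peated": False, "cask": "sherry",  "age": 18},
]

def group_by(records, *keys):
    """Derive a grouping dimension on the fly from any attribute tuple."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in keys)].append(r["name"])
    return dict(groups)

# The same stock, regrouped under different classification contexts:
by_region = group_by(whiskies, "region")           # geography
by_process = group_by(whiskies, "peated", "cask")  # ingredient x barreling
by_age = group_by(whiskies, "age")                 # aging
```

The combinator here is trivially `group_by`; the capability comes entirely from the granularity of the attributes it is fed.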
Back to whisky — and an important engineering role at your favourite distillery.
The distillery’s experienced Taster-Sommeliers.
The next step is to compare any new potential batch to an exemplar from the previous batch (i.e., a reference standard). Note this doesn’t have to be some absolute standard — simply using the last bottled version is good enough. The comparison is done using some variant of blind A-B testing. For example, a common method apparently involves pouring two glasses from the old batch, and one glass from the new. The experienced tasters are blind as to which glass is which. They are then asked to identify the “odd man out” (i.e., which one tastes different from the other two). If they consistently identify the new make as being different (as is likely initially), the master blender has to go back, adjust the relative contents of the new batch by blending new whiskies in, and try the blind comparison tasting again. Once the tasters can no longer consistently differentiate the new from the old, you are good to go ahead and bottle.
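This "two old, one new" protocol is known in sensory analysis as a triangle test: a taster who cannot tell the glasses apart still names the odd one out with probability 1/3, so "consistently identify" can be checked with a one-sided binomial test. A minimal sketch (the counts are made up for illustration):

```python
from math import comb

def triangle_test_p_value(correct, trials, p_chance=1/3):
    """One-sided binomial p-value: the probability of at least `correct`
    right answers in `trials` tastings if every taster were guessing
    (chance of naming the odd glass is 1/3)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# e.g. suppose 9 of 12 tastings single out the new-batch glass:
p = triangle_test_p_value(9, 12)
# A small p-value means the batches are still distinguishable,
# and the blender goes back to adjust and retest.
```

Here 9 of 12 gives a p-value well under 0.01, so the blend is not ready to bottle.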
Embedded in the tasting and blending expertise is a combination of:
- a keen ability to detect and correctly map or attribute slight taste variations to (well described) sources of variation,
- a detailed knowledge of the ingredient and process composition of all barrel batches,
- and (presumably) a Taster with unflappable demeanour, or at least a self-aware one who knows what stress and ambience does to their palate.
And that’s why they a̵r̵e̵ were paid the big bucks (meh, it is but a learned tongue — cue learning pipeline and “synthetic learned tongue”).
There is a tasting and a blending to be done. One that relies on skilled attribution and pattern matching. This in turn relies on fine-grained knowledge of what your source stock and inputs are, i.e., meta-data.
And sometimes even then…
Of course, that’s in an ideal world. Practically, distillers have to accept some “slippage” in flavour matching over time, due to the limited availability of source stock. Once it becomes untenable to keep selling a product under its previous label, the distillery will need to come up with a new designation for it. After all, no one is going to flush those tens of thousands of liters of whisky down the drain. Time for a special “Founders Reserve” release, anyone?
Also from https://whiskyanalysis.com/index.php/background/scotch-style-whiskies-single-malts-vs-blends/
This old presentation (see below), a fun exercise in analogical framing, was delivered to support an argument against taxonomies that favour the single dimension of hierarchies, and a case instead for attribute-rich composability and the ability to generate multiple, dynamic and context-dependent taxonomies and grouping dimensions.
Hiding in plain sight of the analogy are:
- Rich meta-data frameworks;
- (dynamic) contract, product, model, risk and process (& generally data) taxonomies — plus combinators and corresponding DSLs supporting specification, validation, valuation and other processing semantics;
- P&L attribution and Unexplained P&L metrics;
- (“like” product/risk profile/data) cluster and other pattern-matching/similarity/discriminant analyses.
From an Acuity Derivatives engagement brief (a Tier 1 US bank)
… we designed a cross-asset-class product taxonomy and risk taxonomy, based on a decomposition of the products’ payoffs, underlyings, and factor price process types (this, driven off a meta-model of production pricing models). We then mapped and normalized feeds from the bank’s risk systems (across the firm’s global lines of business) into a data model built on the two taxonomies. This decomposed and normalized meta-data dataset allowed us to deploy clustering analysis techniques, with which we developed groups of “like” products and risks, and could then:
- test in-group consistency of product treatment in modeling, valuation, and EOD risk profiles;
- review the accuracy of risk aggregation, netting and residual basis; and
- propose remediation strategies to gaps in L3 limit structures and risk systems’ capabilities.
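A heavily simplified sketch of the "like"-product step. The product names, attribute labels, similarity measure and greedy clustering below are all invented for illustration (the engagement used richer taxonomies and proper clustering techniques); the point is that decomposed, normalized attributes plus any similarity measure are enough to recover groups of comparable products:

```python
# Toy decomposed product meta-data: each product is a set of
# normalized attribute tags (names are illustrative only).
products = {
    "swap_1":   {"payoff:linear", "underlying:rate",   "process:diffusion"},
    "swap_2":   {"payoff:linear", "underlying:rate",   "process:diffusion"},
    "fx_opt":   {"payoff:convex", "underlying:fx",     "process:diffusion"},
    "eq_opt":   {"payoff:convex", "underlying:equity", "process:diffusion"},
    "var_swap": {"payoff:convex", "underlying:equity", "process:jump"},
}

def jaccard(a, b):
    """Similarity of two attribute sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster(items, threshold=0.5):
    """Greedy single-pass clustering: attach each product to the first
    cluster whose exemplar is similar enough, else start a new one."""
    clusters = []  # list of (exemplar_attrs, member_names)
    for name, attrs in items.items():
        for exemplar, members in clusters:
            if jaccard(attrs, exemplar) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((attrs, [name]))
    return [members for _, members in clusters]

groups = cluster(products)
# The two rate swaps cluster together; the vanilla options pair up;
# the variance swap (jump process) falls out as its own group.
```

Each resulting group is then a natural unit for the in-group consistency tests and aggregation reviews listed above.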
Thanks for reading.
And, to end with fine S̵c̵o̵t̵c̵h̵ data visualization inspirations…