Great to see these concepts laid out so clearly, linking age-old reasoning (Kant) to modern developments (machine learning).
I’ve thought a lot about these ideas in relation to music. First, applying linguistic relativity: if you view music as a language (an ancient and common analogy), then it gives the mind a vast new vocabulary and syntax. It provides the musician with a system of cognition that can still process non-musical data. Put another way, you can “see the world through a musical lens” and thus think about it differently.
Second, categorization has been a creeping problem in the world of musical genre, where an increasingly vast amount of new music no longer fits the taxonomy of labels designed to organize the shelves of record stores (e.g., what the heck is “jazz” anyway?). With the advent of digital music and streaming catalogs, genre (i.e., label) has effectively given way to algorithm (i.e., encoding?), where most people now rely on a recommendation engine to “intuit” their musical interests via big data. This is one of the most interesting fronts in the AI debate right now. Apple Music is a great example of that still-unanswered question: who is the better curator, human taste-makers or digital algorithms?