Programming reproduces a worldview, and the problem is that, in most cases, the worldview reproduced is a monoculture, maintained by barriers to representation in the form of implicit and explicit hurdles and gatekeeping. (This is part of the reason I think “small computing” is so important: users should have enough control to make the code they interact with representative of THEIR umwelt.)
The absolute best case for coding, on an ethical level, is that you have reproduced your worldview and you’re the only one who uses the software you wrote — while someone else can still take it and adapt it. Making it so everybody feels comfortable replacing the system’s perspective with their own is a UX concern (maximizing expressivity at every point on the learning curve, and smoothing the learning curve out).
The way I see it, UI standardization is a matter of supporting corporate capitalism, and doesn’t make sense outside that structure. This might mean that, so long as people have jobs, we would still have a distinction between a standardized “work machine” that multiple people use and an idiosyncratic, personal “home machine”.
A computer is both an expression of our worldview (i.e., an art-work) and an extension of our mental space (like a journal, mood-board, scrapbook, sketchpad, or face). When it works, it’s an auxiliary brain lobe with a slow connection. The degree to which it functions as an extension of our mental space depends on the degree to which it’s representative of our umwelt: when our computer isn’t good as art (i.e., doesn’t actually map to how we think about things in a natural, sub-cognitive way), we have to struggle to translate things so the computer understands them. Essentially every piece of standardized or commercial software is awkward to use until the user changes their mental space to match it, because it’s not an expression of the user’s worldview but of some designer’s or programmer’s.