I agree in principle, but there are a couple of key phrases I’d like to focus on:
“Indexes, when properly designed”
Those last three words are a big caveat on the premise that indexes are beneficial, especially when set against the other premise, namely:
“Users prefer predictability over fast response time”
Yes, a full table scan can give variable performance, but that variability is typically within the same order of magnitude (e.g. it took 5 minutes to run today and 8 minutes tomorrow).
When indexes come into the mix (well designed or not), that variability can be much more severe. A query that opts for index access when it should not (especially in a nested-loop scenario) can turn a 5-minute query into a 5-day query :-)
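A back-of-the-envelope cost model shows why the blowup can be that dramatic. The numbers below are invented for illustration (not from any real optimizer): a full scan pays a per-block cost over the whole table once, while a mis-chosen nested-loop plan pays a per-row lookup cost for every driving row.

```python
# Illustrative cost model with made-up figures, just to show the orders of magnitude.
TABLE_BLOCKS = 1_000_000          # blocks in the large table
SCAN_MS_PER_BLOCK = 0.3           # amortised multiblock-read cost per block
DRIVING_ROWS = 480_000_000        # rows coming from the outer table
LOOKUP_MS = 0.9                   # index probe + single-block table access per row

# Full scan: one pass over every block.
full_scan_minutes = TABLE_BLOCKS * SCAN_MS_PER_BLOCK / 1000 / 60

# Nested loop via the index: one lookup per driving row.
nested_loop_days = DRIVING_ROWS * LOOKUP_MS / 1000 / 3600 / 24

print(f"full scan:   {full_scan_minutes:.0f} minutes")   # 5 minutes
print(f"nested loop: {nested_loop_days:.0f} days")       # 5 days
```

The per-operation costs are comparable; it is the multiplier (rows versus blocks) that turns minutes into days.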
I like to think of it this way (the percentages below are plucked from thin air, not any scientific metric, but I’m using them to make the point):
I think an autonomous data warehouse with a zero/minimal index strategy aims to give performance that is 80% of optimal, 100% of the time. Adding indexes can give you performance that is 90% of optimal, but only 80% of the time.