This strikes me mostly as a case against bad user research. In an industry context, research has nearly always had to fight for its relevance and build a clear, consistent argument for why it needs to be in the room. You point out several reasons the argument has shifted from "we are the only people with access to users" to "we are the only people who can collect and interpret qualitative data in a rigorous fashion," but the former was always a weak argument that a UX research team should have been ashamed to depend on.
We have both methods represented by full-time people on my team (qual + quant), and they have remarkably similar needs to justify their existence in the organization. Both types of researchers have to jockey to be in the right meetings where decisions are happening, train their customers on when and how to engage them, and teach people the opportunities and limits of the methods they have mastered. Research as a role is rarely blocking; most projects will move forward without the research team filing its report. Those projects might be more likely to fail, but that's hard to attribute cleanly to a lack of research. As a result, being strategic about what you do, how you sell it, and how you differentiate yourself from non-researchers doing what looks superficially like the same work is a daily task for everyone.
Incidentally, you could make the same argument about access to data for quantitative researchers. Self-serve access to product telemetry has gotten very easy, as have some kinds of A/B testing. That has not obviated the quantitative researchers in my company. Instead it has been a powerfully clarifying force: they no longer have to do menial data-access work and can focus on the strategic bits that require their particular mix of skills.