I agree with so much of this post, but there are a few success signals I’d challenge. If you’re depending on user feedback as the ultimate measure of support content effectiveness, you’re missing everyone who got what they needed (hopefully early in an article) and simply continued on with their product journey. Those are successful self-serve interactions that go quietly unnoticed.
Article rating feedback inevitably skews negative because it comes from the users who read through an entire article and still didn’t find what they needed. In my opinion, the best kind of “feedback” is how many users successfully completed the task the content was intended to address without having to contact a person. That should be measured through product behavior and support contact volume relative to the content you’ve created.
Similarly, pageviews per session doesn’t inherently equate to educational browsing. It can just as easily signal that someone is having trouble finding the content they’re looking for, so be cautious when using it as a measure of success.