Good for you. Though an earlier comment seems to belittle your rubric, what you have presented is actually not all that uncommon.
I am currently working with an organization that has taken on the gargantuan task of evaluating and assessing language proficiency through a type of “standardized” testing. The test itself is horribly standardized, to the point that the material test takers see varies only in its details, while the structure is always exactly the same (including the name of the town where current events take place!).
In order to guarantee stability in the marking process, raters are given several rubrics to follow, with instructions to form a hypothesis and then check whether, according to the descriptions in the rubric, that hypothesis holds water. The rubric is based upon the CEFR, itself a rambling, open-to-interpretation document, yet applied directly to the testing and labeling of proficiency for academic writing. It is a bit of a nightmare.
Were I as a rater allowed to read the test takers’ responses and evaluate or assess them based upon my experience as an ESL teacher, I could easily slap on a B2, B1, A2, or A1 label without all of the repetitive and ambiguous language in the rubric. In one company, the dozens of rubrics we must use to evaluate and assess texts running anywhere from 20 to 200 words all look like the same basic rubric with individual words changed, those words being “mostly,” “partially,” and “generally.”
Rubrics can probably be handy when a team of raters is grading hundreds of responses and is expected to apply the same criteria in order to produce a standard label for a blank on an application, where individual evaluation is not an option. In teaching, however, our goal is not to assign a grade on a scale but to encourage our students to improve beyond the limitations they have already demonstrated. Rubrics do not serve that purpose.
Good for you.