Starting with data, such as different colors, information answers the question “what color is X?”. Our alphabet is the visible color spectrum, and the information is contextualized around an object we are observing. The knowledge that X is some color is then a belief resolved by sensory experience of that information. Knowledge answers the question “why do I believe that the color of X is Y?”; in this case, the belief is grounded in sensory experience. However, sensory experience is not the only way to resolve belief into knowledge, and the way one resolves a belief can lead to erroneous conclusions.
Information: What color is the sky?
Knowledge: Why do I believe that the color of the sky is blue?
I look at the sky and I see blue. Alternatively, one might rely on an expert who tells them that the sky is blue.
Understanding: Why do I believe that the color of the sky should be blue (and my senses are not tricking me)?
I know that the sun shines light across the visible spectrum, and I know that blue wavelengths are shorter and scatter more strongly off the atmosphere, spreading across the sky.
In understanding why the sky should be blue, even if we don’t see it that way (because of color blindness, perhaps), the knowledge needed to reach the correct conclusion can be verified in other ways.
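One such other way is arithmetic rather than eyesight. Rayleigh scattering tells us that scattering intensity scales as 1/λ⁴, so shorter (blue) wavelengths dominate the scattered light. A quick sketch of that calculation, using round illustrative wavelengths (450 nm for blue, 650 nm for red are my assumptions, not precise values):

```python
# Rayleigh scattering: scattered intensity scales as 1 / wavelength^4.
# Illustrative round-number wavelengths (assumed, not measured values).
blue_nm = 450  # typical blue light
red_nm = 650   # typical red light

# How much more strongly blue light scatters than red.
ratio = (red_nm / blue_nm) ** 4
print(f"Blue scatters roughly {ratio:.1f}x more strongly than red")
```

The ratio comes out to roughly 4, which is the quantitative reason the scattered skylight we see is blue rather than red, independent of any one observer’s senses.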
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
So we have our data, the set of basic symbols, which in context make information. We operate on that information with beliefs, turning it into knowledge: Belief(x) = Knowledge_x, or K_x for short. K_x is then a vertex on a graph, with edges connecting K_x and K_y, organizing the knowledge into a knowledge graph that represents understanding. That’s as far as I’ve gotten, but perhaps a minimal subset of knowledge that represents an understanding would itself be a vertex on a “wisdom graph”, connected to other vertices of understanding.
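The graph structure above can be sketched in code. This is a minimal illustration under my own assumptions, not a standard library or formal model: each piece of knowledge K_x is a vertex, `relate` adds an edge between K_x and K_y, and the connected cluster around a vertex stands in for an understanding. The names `KnowledgeGraph`, `relate`, and `understanding` are hypothetical.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Vertices are pieces of knowledge K_x; edges connect related knowledge."""

    def __init__(self):
        self.edges = defaultdict(set)

    def relate(self, k_x, k_y):
        # An undirected edge: K_x and K_y together support understanding.
        self.edges[k_x].add(k_y)
        self.edges[k_y].add(k_x)

    def understanding(self, k_x):
        # The connected component around K_x: the cluster of linked
        # knowledge that, taken together, represents an understanding.
        seen, stack = set(), [k_x]
        while stack:
            k = stack.pop()
            if k not in seen:
                seen.add(k)
                stack.extend(self.edges[k])
        return seen

g = KnowledgeGraph()
g.relate("sunlight spans the visible spectrum", "blue wavelengths are shorter")
g.relate("blue wavelengths are shorter", "shorter wavelengths scatter more")
print(g.understanding("sunlight spans the visible spectrum"))
```

On this sketch, the “wisdom graph” speculation would be one level up: each connected component (an understanding) collapses to a single vertex, and edges would link understandings to one another.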