Introduction to Knowledge Space Theory

Lucas Oliveira
Aug 24, 2020 · 6 min read

Knowledge Space Theory (KST) is a set of concepts and structures that aims to assess and represent one’s knowledge state. Assessing people’s knowledge is an important task, whether to check if someone is capable of performing some activity or to provide guidance towards the mastery of some subject. We face these assessments regularly in our daily lives, in the form of school tests or job interviews. KST intends to produce a more detailed report of someone’s knowledge, as opposed to the single metric used in many tests. These detailed reports can then be used for a better analysis of someone’s capacity and to provide more efficient guidance to a learner mastering a subject.

In KST, a knowledge state comes in the form of a collection of problems that the individual is capable of solving. KST relies heavily on Set Theory for its axioms and definitions.

In this post we will go through the main aspects that compose KST.

Items and Instances

In KST, knowledge is assessed through general problems such as:

  • Invert a matrix using cofactors (concept from linear algebra)
  • Calculate the resulting forces over a particle (concept from physics)
  • Invert a chord (concept from music)

These types of problems are called items, so in KST each knowledge state is a set of items. An item is just a general concept description, whereas a specific problem related to some item is called an instance. An example of an item and an instance would be:

  • Item: Calculate the roots of a quadratic polynomial
  • Instance: Find the roots of the equation x² + 5x + 6 = 0

Knowledge Structures

A knowledge structure is a pair (Q, K), where Q is the domain (the set of items) and K is a collection of subsets of Q, the knowledge states, containing at least the empty set and Q itself. For example:

  • Q = {a,b,c,d} (each item is represented by one letter)
  • K = {{}, {a}, {d}, {a,b}, {a,d}, {a,b,c}, {a,b,d}, Q} (each subset of Q in K is a knowledge state; for example, the state {a,b,d} means mastery of the items a, b and d)
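
As a minimal sketch of this example (Python is my own choice here, not something used in the post), we can encode each state as a frozenset and check the basic requirement that a knowledge structure contains both the empty state and Q:

```python
# Domain Q and the knowledge structure K from the example above.
Q = frozenset("abcd")
K = {
    frozenset(),                         # the empty state {}
    frozenset("a"), frozenset("d"),
    frozenset("ab"), frozenset("ad"),
    frozenset("abc"), frozenset("abd"),
    Q,                                   # the full domain
}

# A knowledge structure must contain at least the empty set and Q itself.
assert frozenset() in K and Q in K
print(f"{len(K)} knowledge states over {len(Q)} items")  # 8 knowledge states over 4 items
```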

Knowledge Spaces
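
Although the covering diagram for this section is not reproduced here, the standard KST definition is that a knowledge space is a knowledge structure that is closed under union: the union of any two knowledge states is again a knowledge state. A small sketch, reusing the example structure from above, checks this property directly:

```python
from itertools import combinations

Q = frozenset("abcd")
K = {frozenset(), frozenset("a"), frozenset("d"), frozenset("ab"),
     frozenset("ad"), frozenset("abc"), frozenset("abd"), Q}

def is_knowledge_space(states):
    """A knowledge structure is a knowledge space when the union of any
    two of its states is again a state."""
    return all(s | t in states for s, t in combinations(states, 2))

print(is_knowledge_space(K))  # True: the example structure is a knowledge space
```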

Learning Spaces

Learning Smoothness

Here, whereas K1 respects learning smoothness, K4 does not, since to go from the state {d} to the state {a,b,d} we would have to master two items (a and b) at the same time. Example taken from https://arxiv.org/abs/1511.06757.
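
To make the smoothness check concrete: a structure is smooth when, between any two nested states K ⊂ L, the learner can move from K towards L one item at a time while passing only through valid states. The structures below are hypothetical stand-ins for K1 and K4 (the original covering diagrams are not reproduced here); the second one reproduces the jump from {d} straight to {a,b,d} described above:

```python
Q = frozenset("abcd")

# Hypothetical stand-ins for the K1 / K4 structures from the missing diagram.
K1 = {frozenset(), frozenset("a"), frozenset("d"), frozenset("ab"),
      frozenset("ad"), frozenset("abc"), frozenset("abd"), Q}
K4 = {frozenset(), frozenset("d"), frozenset("abd"), Q}  # no state between {d} and {a,b,d}

def is_smooth(states):
    """Learning smoothness: for any two states K ⊂ L there is an item q in
    L \\ K such that K ∪ {q} is again a state, so L is reachable from K by
    mastering one item at a time."""
    return all(
        any(k | {q} in states for q in l - k)
        for k in states for l in states if k < l
    )

print(is_smooth(K1), is_smooth(K4))  # True False (K4 forces mastering a and b at once)
```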

Learning Consistency

Here, whereas K2 respects learning consistency, K3 does not, since once in the state {a} the only way to also master item d is to master all of the remaining items at once, because there is no state {a,d}. Example taken from https://arxiv.org/abs/1511.06757.
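
Learning consistency can be checked in the same spirit: if an item q can be learned from a state K (that is, K ∪ {q} is a state), then q must also be learnable from every larger state L ⊇ K. The structures below are again hypothetical stand-ins for K2 and K3, with K3 lacking the state {a,d}:

```python
Q = frozenset("abcd")

# Hypothetical stand-ins for the K2 / K3 structures from the missing diagram.
K2 = {frozenset(), frozenset("a"), frozenset("d"), frozenset("ab"),
      frozenset("ad"), frozenset("abc"), frozenset("abd"), Q}
K3 = {frozenset(), frozenset("a"), frozenset("d"), frozenset("ab"), frozenset("abc"), Q}

def is_consistent(states, domain):
    """Learning consistency: if q is learnable from state K (K ∪ {q} is a state),
    then q is learnable from any larger state L containing K."""
    return all(
        l | {q} in states
        for k in states
        for q in domain - k if k | {q} in states
        for l in states if k <= l
    )

print(is_consistent(K2, Q), is_consistent(K3, Q))  # True False ({a} ∪ {d} is not a state in K3)
```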

These graph-like representations are called “covering diagrams”.

Atoms

An atom at an item q is a minimal knowledge state containing q. This definition is important because it has a direct pedagogical meaning: the items within an atom at q are the items that need to be mastered before mastering q. For example, suppose we have the following items:

  • Item A: sum two integer values
  • Item B: multiply two integer values
  • Item C: square an integer value

Intuitively, we can say that before mastering item C we need to master item B, and before mastering item B we need to master item A. Representing this with KST, the minimal state containing A must be the state {A} (since we don’t need to master anything before it), the minimal state containing B is {A, B} and not {B} (since before mastering B we need to have mastered A), and as a consequence the atom at C has to be {A, B, C}.
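
A small sketch of this example: we can encode the prerequisite chain A → B → C as a knowledge structure (the encoding below is my own illustration, not taken from the post) and compute, for each item, its minimal containing states, i.e. the atoms at that item:

```python
# Items from the example: A = sum, B = multiply, C = square.
# Only the states compatible with the prerequisite chain A -> B -> C are included.
states = [frozenset(), frozenset("A"), frozenset("AB"), frozenset("ABC")]

def atoms_at(item, states):
    """Minimal states (with respect to inclusion) that contain the given item."""
    containing = [s for s in states if item in s]
    return [s for s in containing if not any(t < s for t in containing)]

for item in "ABC":
    print(item, "->", [sorted(s) for s in atoms_at(item, states)])
# A -> [['A']]
# B -> [['A', 'B']]          (B never appears without A)
# C -> [['A', 'B', 'C']]
```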

Fringe Theorem

Given a learner’s current knowledge state, two natural questions arise:

  1. Of all the possible predecessor states of the current one, what are the items that, once mastered, would put me in this state?
  2. Once in this state, what are the possible next items that I can master in order to proceed to another state?

The set of items that answers the first question is called the inner fringe, whereas the set of items that answers the second question is called the outer fringe. The outer fringe also has an important pedagogical meaning: the outer fringe of a given state K is the set of items that someone in state K is ready to master. So, if after an assessment we conclude that a student is in a state G, the set of items we should recommend the student to master next is the outer fringe of G.
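
Both fringes are easy to compute once the structure is encoded as sets. A minimal sketch, reusing the example structure from earlier and a hypothetical assessed state G = {a, b}:

```python
Q = frozenset("abcd")
K = {frozenset(), frozenset("a"), frozenset("d"), frozenset("ab"),
     frozenset("ad"), frozenset("abc"), frozenset("abd"), Q}

def inner_fringe(state, states):
    """Items just mastered: removing any one of them still leaves a valid state."""
    return {q for q in state if state - {q} in states}

def outer_fringe(state, states, domain):
    """Items the learner is ready for: adding any one of them gives a valid state."""
    return {q for q in domain - state if state | {q} in states}

G = frozenset("ab")                 # hypothetical state found by an assessment
print(inner_fringe(G, K))           # {'b'}  ({a} is a state, {b} is not)
print(outer_fringe(G, K, Q))        # {'c', 'd'}  (items to recommend next)
```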

Learning Sequences

Once we have a very large learning space, representing it as a covering diagram can become problematic, since for each state node the set of items that composes the state may be very large. Learning sequences make it possible to represent the learning space in a much more compact form called a learning diagram:

In a learning diagram, instead of labeling the nodes, we label the edges. The state represented by each node is obtained by collecting the items on the edges along a path from the empty state to that node. Example taken from https://arxiv.org/abs/1511.06757.
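
Since the edges of a learning diagram are labelled with single items, the learning sequences of a small learning space can also be enumerated directly: each sequence is an ordering of the items that passes only through valid states, one item at a time, from the empty state up to the full domain. A sketch using the example structure from earlier:

```python
Q = frozenset("abcd")
K = {frozenset(), frozenset("a"), frozenset("d"), frozenset("ab"),
     frozenset("ad"), frozenset("abc"), frozenset("abd"), Q}

def learning_sequences(states, domain):
    """Yield every ordering of the items that passes only through valid
    knowledge states, one item at a time, from {} up to the full domain."""
    def extend(state, seq):
        if state == domain:
            yield seq
            return
        for q in domain - state:
            if state | {q} in states:
                yield from extend(state | {q}, seq + [q])
    yield from extend(frozenset(), [])

for seq in learning_sequences(K, Q):
    print(" -> ".join(seq))
# Prints the four possible sequences:
# a -> b -> c -> d, a -> b -> d -> c, a -> d -> b -> c, d -> a -> b -> c
```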
