There is a complicated but feasible way to do it. We have a so-called process-introspection ability that lets us "see" the hidden machinery of our consciousness, i.e. become aware of it. The problem is that our process introspection produces confabulation (see the Wikipedia article for details). At best this confabulated output is vague and unstructured, but sometimes introspection produces highly structured yet false partial models, with emotional attachment on top. People become emotionally attached to their false introspections, and this is the psychological reason there are so many persistent but false views of the mind.
Fortunately, there is a way to compensate for the cognitive bias of the introspection illusion. The compensation is quite a long process, so one should not expect quick results here. As I said before, quick results are always false.
The reason our process introspection produces garbage is that our personal theories of mind (top-level self-views) are inherently broken. So if we provide someone with a simple but true initial causal self-model, she will eventually evolve it into a complete causal explanation of her mind, of algorithmic quality. This process is usually quite long (it takes decades), but it can be gradually accelerated by providing external information about true self-models (AGI theories, CogSci results, etc.).
So the basic iterative algorithm for compensating the introspection illusion looks like this:
1. Start from a simple but true initial self-model.
2. Internalize it (learn to see yourself through this model).
3. Try to explain this model in terms of AGI (or any formal external processes).
4. Accept the confusion when you realize things are much more complicated than expected.
5. Read scientific sources on the topic (self-models, consciousness, theories of emotions, etc.) and try to reformulate the things you had problems with at step (3).
6. Socialize your model to validate it against the self-models of others (Dennett's heterophenomenology).
7. Go to step (2).
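The loop above can be sketched in code. This is a purely illustrative toy, not a real implementation: every function name and the numeric "fidelity" score are hypothetical stand-ins for long-running human activities (reading, reflection, conversation), introduced only to show the control flow of steps (1)–(7).

```python
# Illustrative sketch of the seven-step compensation loop.
# All names and the "fidelity" score are hypothetical placeholders,
# not a real library or a real measurement of self-knowledge.

def internalize(model):
    """Step 2: learn to see yourself through the model (a no-op here)."""
    return model

def explain_in_formal_terms(model):
    """Steps 3-4: attempt a formal (AGI-style) explanation; return gaps found."""
    # The richer the model, the fewer unexplained gaps remain.
    return max(0, 5 - model["fidelity"])

def study_and_reformulate(model, gaps):
    """Step 5: read scientific sources and reformulate the problem areas."""
    model["fidelity"] += 1
    return model

def socialize(model):
    """Step 6: validate the model against the self-models of others."""
    model["fidelity"] += 1
    return model

def compensate(initial_fidelity=0, iterations=3):
    """Run the whole loop; in real life each pass takes years, not cycles."""
    model = {"fidelity": initial_fidelity}   # step 1: initial true self-model
    for _ in range(iterations):              # step 7: go back to step 2
        model = internalize(model)           # step 2
        gaps = explain_in_formal_terms(model)
        if gaps:                             # step 4: confusion = detected gaps
            model = study_and_reformulate(model, gaps)
        model = socialize(model)
    return model

print(compensate())  # fidelity grows with each pass through the loop
```

The essential design point the code makes explicit is that the process is a fixed-point iteration: each pass feeds a slightly better self-model back into step (2), and external input (steps 5 and 6) is what drives the improvement.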
It works :)