The reason that “consciousness” is not easy to define exactly is the same as the reason that the “full meaning” of any other idea is not easy to define exactly, even in mathematics. When you analyse any idea, it is exposed as a whole set of separate ideas, depending on the context in which it is used, and each of those separate ideas is not qualitatively different from the idea you started with, except that it is probably more abstract and harder to pick apart further. So you have a logical regress, but it does not have to be an infinite one.

Analysis of human reason has given us mathematics. Analysis of physical measurement, combined with mathematics, gave us physics and technology. Analysis of the physics and perception of light and sound gave us audio-visual media. This still leaves the vast majority of human and animal qualia completely unaccounted for, not least because subjective experiences are harder to measure than physical quantities. Once we learn to model a wider range of our sensations, we will start to see “machines” whose behaviour can only be called intelligent and which will be able to communicate easily with us, as Turing envisaged, in human natural language.

That ability to communicate with us on our own terms would open the floodgates to improved human education and an acceleration in the advancement of science, possibly to the point where the distinction between the subjective and the objective becomes blurred. Once machines can understand us, we will also be able to understand ourselves much better, so that the questions about what we do not yet know will become clearer in meaning and therefore closer to solution.
