Sep 1, 2018 · 1 min read
Hi Steven,
If I understood correctly, your main argument seems to be that it is OK to assume a machine (especially a Turing-test-passing machine) has consciousness, in the same way it is OK to assume there is something it is like to be a bat. Your claim rests on the idea that behavior is a good measure of how similar two systems are for the purposes of having what-is-it-likeness. But your original thesis, the one you are trying to defend, is that behavior is sufficient to explain consciousness.
So your argument seems to be:
1. [thesis] What-is-it-likeness can be explained by functional behavior (aka running software).
2. People and Turing-test-passing machines are behaviorally close.
3. From #1 and #2, it is OK to assume Turing-test-passing machines have what-is-it-likeness.
So #3 assumes #1, which is your thesis. The argument is circular: it presupposes the very claim it is meant to defend.
Hope that makes sense,
