The modern argument for machine consciousness can be paraphrased as follows:
1. We know a lot about how electro-mechanical systems work.
2. We declare that consciousness is also an electro-mechanical system, because if it is then we will be able to understand it.
3. Therefore, machines can be conscious.
One can sympathise with thinking the belief in (2) is worth a research budget. But to confidently assert that conceivable consequences of (2), such as (3), are facts about the world is bluster. To write it into our legal and moral thinking is a mistake.
I suppose the motivation for (2) is that we always want to believe we can understand things. And, impatiently, we also want to believe that things can be understood well enough with just the current state of our knowledge, insights and tools.
Gung-ho optimism based on unevidenced confidence rather than on a thorough understanding of consciousness and mental health does not, however, have a good record.
At the present time, the outcome I hope we can avoid is this:
Large corporations will use stories about machine consciousness and machine legal status to transfer legal risk away from themselves, making it easier for them to prioritise profitable machinery over human lives.
An example would be a corporation denying legal liability for a robot or self-driving car that injures a passer-by, by saying it was the robot's fault, not theirs.
A better, more logical move in the early 21st century would be to first acknowledge that
- The current state of our knowledge, tools and insights does not suffice to understand what consciousness is or how it works.
Hypotheses like proposition (2) are fair game in working research, but applying half-baked conclusions like (3) to the real world is a bad case of confusing reality with fantasy fiction.