Modern arguments for machines having consciousness can, I suggest, be paraphrased as follows:
1. We know a lot about how electro-mechanical systems work.
2. We shall assume that consciousness is also an electro-mechanical system, because if it is then we might be able to understand it.
3. Therefore, machines can be conscious.
One can sympathise with thinking that (2) is worth a research budget. But to confidently assert that conceivable consequences of (2) – such as (3) – are reliable facts about the world is bluster. To write it into our legal and moral thinking is a mistake.
I suppose the motivation for (2) is that we always want to believe we can understand things. And, impatiently, we also want to believe that things can be understood well enough with just the current state of our knowledge, insights and tools.
Gung-ho optimism about the current state of knowledge on consciousness and mental health does not, however, have a strong record of treating human beings well.
At the current time, the negative outcome I hope we can avoid is this:
Large corporations will use stories about machine consciousness and machine legal status to transfer legal risk away from themselves, making it easier for them to prioritise profitable machinery over human lives.
An example would be a corporation denying legal liability for a robot or self-driving car that injures a passer-by, by saying it was the robot's fault, not theirs.
A better logical move, in the early 21st century, would be first to acknowledge that:
- The current state of our knowledge, tools and insights does not suffice to understand what consciousness is or how it works.
Hypotheses like proposition (2) are fair game in working research, but applying half-baked conclusions like (3) to the real world is a bad case of confusing reality with fantasy fiction.