The Argument For Machine Consciousness

Modern arguments for machines having consciousness can, I suggest, be paraphrased as follows:

  1. We know lots and lots about how electro-mechanical systems work.

Therefore:

  2. Let us assume that consciousness is also an electro-mechanical system, because then we might understand it.

But when (2) is taken as a given, not a working theory, it happens to have a corollary:

  3. Machines can be conscious.

One can sympathise with the pragmatic research-budget-allocator going along with (2) as a working hypothesis for further research, but the game is changed entirely when belief (2) makes its way into legal and moral thinking.

I guess the motivation for (2) is that we always want to believe things can be understood with the current state of our tools, knowledge and insights. Gung-ho optimism about our current state of knowledge on consciousness and mental health does not, however, have a great history.

At the current time, the outcome I hope we can avoid is this:

Large corporations will use stories about machine consciousness and machine legal rights to transfer legal risk away from themselves, making it easier for them to prioritise profitable machinery over human lives.

An example would be a corporation denying legal liability for a robot or self-driving car that injures a passer-by, on the grounds that it was the robot's fault, not theirs.

A better move, at least in the early 21st century, would be to first acknowledge that

  4. The current state of our knowledge, tools and insights does not suffice to understand what consciousness is or how it works.

Hypotheses like proposition (2) are fair game in ongoing research, but applying half-baked conclusions like (3) to the real world is a bad case of confusing fantasy fiction with reality.