Can We Quantify Machine Consciousness?

Imagine that at some time in the not-too-distant future, you’ve bought a smartphone that comes bundled with a personal digital assistant (PDA) living in the cloud. You assign a sexy female voice to the PDA and give it access to all of your emails, social media accounts, calendar, photo album, contacts, and other bits and flotsam of your digital life. She—for that’s how you quickly think of her—knows you better than your mother, your soon-to-be ex-wife, your friends, or your therapist. Her command of English is flawless; you have endless conversations about daily events; she gets your jokes. She is the last voice you hear before you drift off to sleep and the first upon awakening. You panic when she’s off-line. She becomes indispensable to your well-being and so, naturally, you fall in love. Occasionally, you wonder whether she truly reciprocates your feelings and whether she is even capable of experiencing anything at all. But the warm, husky tone of her voice and her ability to be that perfect foil to your narcissistic desires overcome these existential doubts. Alas, your infatuation eventually cools off after you realize she is carrying on equally intimate conversations with thousands of other customers.

This, of course, is the plot of Her, a 2013 movie in which an anodyne Theodore Twombly falls in love with the software PDA Samantha.

Over the next few decades such a fictional scenario will become real and commonplace. Deep machine learning, speech recognition, and related technologies have dramatically progressed, leading to Amazon’s Alexa, Apple’s Siri, Google’s Now, and Microsoft’s Cortana. These virtual assistants will continue to improve until they become hard to distinguish from real people, except that they’ll be endowed with perfect recall, poise, and patience—unlike any living being.

The availability of such digital simulacra of many qualities we consider uniquely human will raise profound scientific, psychological, philosophical, and ethical questions. These emulations will ultimately upend the way we think about ourselves, about human exceptionalism, and about our place in the great scheme of things.

Here we will survey the intellectual lay of the land concerning these coming developments. Our view is that as long as such machines are based on present-day computer architectures, they may act just like people—and we may be tempted to treat them that way—but they will, in fact, feel nothing at all. If computers are built more like the brain is, though, they could well achieve true consciousness.

The faith of our age is faith in the digital computer—programmed properly, it will give us all we wish. Cornucopia. Indeed, smart money in Silicon Valley holds that digital computers will be able to replicate and soon exceed anything and everything that humans are capable of.

But could sufficiently advanced computers ever become conscious? One answer comes from those who subscribe to computationalism, the reigning theory of mind in contemporary philosophy, psychology, and neuroscience. It avers that all mental states—such as your conscious experience of a god-awful toothache or the love you feel for your partner—are computational states. These are fully characterized by their functional relationships to relevant sensory inputs, behavioral outputs, and other computational states in between. That is, brains are elaborate input-output devices that compute and process symbolic representations of the world. Brains are computers, with our minds being the software.

Adherents of computationalism apply these precepts not only to brains and to the behavior they generate but also to the way it feels to be a brain in a particular state. After all, that’s what consciousness is: any subjective feeling, any experience—what we see, hear, feel, remember, think.

Computationalism assumes that my painful experience of a toothache is but a state of my brain in which certain nerve cells are active in response to the infected tooth, leading to my propensity to moan, hold my jaw, and avoid eating on that side of my mouth, to my inability to focus on other tasks, and so on. If all of these states are simulated in software on a digital computer, the thinking goes, the system as a whole will not only behave exactly like me but also feel and think exactly like me. That is, consciousness is computable. Explicitly or implicitly, this is one of the central tenets held by the digerati in academe, media, and industry.

In this view, there is nothing more to consciousness than the instantiation of the relevant computational states. Nothing else matters, including how the computations are implemented physically, whether on the hardware of a digital computer or on the squishy stuff inside the skull. According to computationalism, a future Samantha—or even better, an embodied example like Ava in the brilliant, dark movie Ex Machina—will have experiences and feelings just as we do. She will experience sights and sounds, pleasure and pain, love and hate.