Conscious AI: Is It Possible? And What Would It Mean for Humanity?
June 10, 2019

Famous futurist Ray Kurzweil once said, “By 2029, computers will have emotional intelligence and be convincing as people.” Well, 2029 isn’t too far away anymore. And we already have intelligent systems making movie recommendations, composing music, and imitating human speech. Is it only a matter of time before the technology behind these advancements becomes conscious and conversational, like a regular human being?
At this year’s South by Southwest, a panel discussion titled “How AI Will Design the Human Future” explored our simultaneous fear of and fascination with artificial intelligence (AI), and what consciousness really is. Let’s dive into the highlights of that conversation and see what the experts think about the possibility of conscious AI.
What Exactly Is AI?
We should first cover what AI and consciousness are and how they relate to each other.
AI’s most recent wins have largely been due to machine learning, which finds statistical patterns in datasets and uses them to make “decisions” or “predictions.” Thanks to this subset of AI, tasks like making medical diagnoses, competing on Jeopardy!, and generating text have all become possible.
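To make that idea concrete, here’s a minimal sketch in plain Python. The data and the “recommend a new release” scenario are invented purely for illustration: the program estimates a probability from past examples and uses it to make a “prediction,” which is the basic move behind the recommendation and prediction systems mentioned above.

```python
# Toy illustration (hypothetical data): "learning" here is just estimating a
# probability from past examples and using it to make a prediction.

# Each record: (liked_scifi, liked_new_release) for a past viewer.
history = [
    (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False),
]

def probability_liked(history, liked_scifi):
    """Estimate P(likes the new release | liked sci-fi) from past viewers."""
    matching = [liked for scifi, liked in history if scifi == liked_scifi]
    return sum(matching) / len(matching) if matching else 0.5

def recommend(liked_scifi, threshold=0.5):
    """'Predict' whether to recommend the new release to this viewer."""
    return probability_liked(history, liked_scifi) >= threshold

print(recommend(liked_scifi=True))   # True  (2/3 of similar past viewers liked it)
print(recommend(liked_scifi=False))  # False (only 1/3 did)
```

Real systems estimate vastly more of these probabilities over far larger datasets, but the underlying principle is the same: patterns in past data drive the “decision.”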
But the definition of AI is vague and ever-shifting. Jennifer Strong, host of “The Future of Everything” podcast, says, “The term ‘artificial intelligence’ is thrown around constantly and often incorrectly.” The label can refer to almost anything these days, and no one is policing who gets to tout it as a product feature. Case in point: one 2019 survey found that roughly 40% of European startups classified as AI companies show no evidence of actually using AI at all.
Stanford’s One Hundred Year Study on Artificial Intelligence (AI100), in its 2016 report, defines AI as a set of technologies inspired by the way humans perceive and understand their surroundings and decide what actions to take as a result.
And What About Consciousness?
This could turn into a week-long philosophical discussion about our brains, our thoughts, our souls, morals, and so much more, but we’ll try to keep this concise.
Dr. Heather Berlin is a cognitive neuroscientist and professor of psychiatry at Mount Sinai in New York City. She posits, “If you can replace one neuron with a silicon chip that can do the same function, then replace another neuron, and another—at what point are you still you? These systems will be able to pass the Turing test, so we’re going to need another concept of how to measure consciousness.”
But is consciousness even measurable? Berlin says, “It used to be that only philosophers could study consciousness, but now we can study it from a scientific perspective. We can measure changes in neural pathways. It’s subjective, but depends on reportability.”
A Mysterious, Deep Intelligence
Amir Husain is CEO of SparkCognition, an AI company. He says, “Human brains have a wonderful simulator. They can propose a course of action virtually, in their minds, and see how things play out. The ability to include yourself as an actor means you’re running a computation on the idea of yourself.”
In this way, AI is already somewhat human-like: it runs simulations against historical data to estimate which outcomes are most likely. But it’s not 100% of the way there yet.
More realistically, we may never reach 100% human-like AI because we still don’t completely understand the human brain itself. Berlin says, “It’s still one of the greatest mysteries how this three-pound piece of matter can give us all our subjective experiences, thoughts, and emotions.”
But it’s not all bad news. We’re slowly chipping away at how the brain works: we’ve created prosthetics that respond to brain signals, we’re getting better at developing treatments for stubborn conditions like depression, and we’re more connected to scientists and researchers around the world than ever before.
Ongoing Debates
At some point, we must also discuss the ethics of creating a conscious AI that has no real quality of life (or any actual life, for that matter). Some AI experts think there’s a lot of value in implementing consciousness, but only to a certain degree.
According to Berlin, there are three levels of consciousness: pure subjective experience (“The ocean has salt”), awareness of one’s own subjective experience (“I tasted the ocean water, and it was salty”), and relating one subjective experience to another (“The ocean’s saltiness reminds me of regular table salt”).
Husain says, “Complex tasks you want to achieve in the world are tied to your ability to foresee the future, at least based on some mental model. With that view, I, as an AI practitioner, don’t see a problem implementing that type of consciousness.”
With consumers still wary of technologies like autonomous cars because of the risk of fatal accidents, it’s hard to say whether they would even want conscious AI in their homes or in their products.
Future Paranoia
If conscious AI arrives, it will need to be introduced carefully and slowly; it will face intense scrutiny and can’t afford to make mistakes or cause harm to consumers. And when, and if, we do develop and roll out conscious AI, one thing is certain: we will still need to take it and its conclusions with a grain of salt.
As Dr. Peter Stone, associate chair of UT Austin’s computer science department, says, “We shouldn’t charge ahead and do things just because we can. The technology can be very powerful, which is exciting, but we have to consider its implications.”
What do you make of the concept of conscious AI? Is this something you’d like to see in the near future? And what benefits outside of the ones we’ve discussed do you think it offers society? Let us know in the comments!