Musk’s quote, “Artificial Intelligence is Mankind’s biggest threat” paints a picture of a Terminator-esque future where the rise of the machines could be Mankind’s undoing. What does he know that we don’t? Let’s find out by asking him the question:
“Hey Elon, I was super interested to hear your views on AI. How do you think we will achieve a level of digital superintelligence, as you call it, that represents a threat to Mankind?”
As simple as this question is, I will probably never get to ask Musk it. However, his choice of words when he talks about AI provides enough clues to know what his answer would be if he were being candid.
It’s generally accepted that the current generation of AI, and the upcoming one, will revolutionise the way we live and work. The near-term risk is workforce displacement due to AI outperforming humans, and the main contributing factor to that risk is the speed at which AI is being adopted. Whereas the industrial revolution unfolded over a period of roughly 80 years, with all the workforce impact that entailed, AI has the potential to cause comparable disruption within a single generation. Pretty much everyone with any sense, including Musk, agrees with this.
This near-term risk concerns intelligence that is artificial in the sense of being algorithmic: processing whose results are then applied to our needs in narrow use cases. Musk’s use of the word “superintelligence” therefore suggests he is concerned about a level of AI that has yet to be reached. Let’s call this Strong AI. Musk calls it “digital superintelligence”.
Right now Strong AI doesn’t exist. However, one feared outcome is Strong AI gone rogue: a capability that acts against the interests of humankind. This, we can assume, is what Musk is concerned about.
First, we must agree on what Strong AI is. The simplest way to do this is to consider what attributes a Strong AI must have to pose a risk beyond that of ordinary AI. Today’s AIs, which are essentially algorithms, can be controlled by a level of checks and balances, oversight or governance. It’s just programming. If an AI cannot be controlled by this level of policing, then the AI is in effect thinking for itself, or more specifically creating new information from its inputs rather than merely performing processing. At that point a move has been made from AI to Strong AI, with the Strong AI being, to all intents and purposes, conscious in the sense of being able to operate outside of its programming.
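The point that today’s narrow AI is “just programming”, and therefore policeable, can be sketched in a few lines. This is a toy illustration with invented names (`narrow_ai`, `governance_layer`, the loan-scoring rule), not a description of any real system: it simply shows that because a narrow AI is a deterministic function from inputs to outputs, a human-written oversight layer can inspect and veto every result before it takes effect.

```python
def narrow_ai(loan_application):
    """Stand-in for any narrow AI: deterministic processing of inputs.
    Here, a crude income-to-debt ratio decides loan approval."""
    score = loan_application["income"] / max(loan_application["debt"], 1)
    return {"approve": score > 2.0, "score": score}

def governance_layer(decision, policy_rules):
    """The 'checks and balances': every output is tested against
    human-written rules, and any violation overrides the AI's decision."""
    for rule in policy_rules:
        if not rule(decision):
            return {"approve": False, "score": decision["score"],
                    "blocked_by_policy": True}
    return decision

# Example policy: never auto-approve an unusually high score without review.
rules = [lambda d: not (d["approve"] and d["score"] > 10.0)]

raw = narrow_ai({"income": 50000, "debt": 20000})
final = governance_layer(raw, rules)  # passes policy; decision stands
```

A Strong AI, by the definition above, would be precisely the system for which no such outer loop can be relied upon, because it can generate behaviour outside what its programming anticipated.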
So how do we get from our current state of AI to Strong AI? The simple answer is we don’t know. Not even Musk knows. Any other answer to this question is speculation at best. But let’s not let this stop us in our tracks.
A factor of digital complexity
Arguably the most widely held view in the AI community is that consciousness is a factor of complexity. This is where Musk’s phrase “digital superintelligence” gives away his position: Musk must believe that the superintelligence we should fear is digitally based. Therefore, as the complexity of digital systems increases, at some point we will cross over into Strong AI.
In terms of current binary-based processing, we have not reached a point of complexity that gives rise to consciousness. We do not know where this point lies; however, if quantum computing becomes a viable reality, it will provide a step change in complexity. Will this be enough to spark consciousness? We don’t know. There is a little more to it than this, in terms of ensuring that the right programming is in place to effectively mirror a brain-like architecture (assuming that consciousness is created by the brain). But subscribers to digital complexity believe that given the right architecture and the right processing, consciousness could occur.
No Strong AI or biological/hybrids
If, on the other hand, you believe that no amount of programming and processing complexity will create consciousness, then you believe either that (i) we will never create a Strong AI, or that (ii) we can, but only by other means. “Other means” would almost certainly mean biological. This already happens around 350,000 times a day through the process of human birth, although only to human intelligence levels, not superintelligence levels. The real question is: if complexity alone does not create consciousness, can we bootstrap a biological solution, or create a hybrid biological/technological solution, that outperforms humans? Leaving the ethics of such a biological system to one side, the timeline to such a system is probably further out than a complexity-based digital superintelligence.
So even without getting Musk to answer the question himself, we know that he does not fear the current iterations of AI and that he does believe digital complexity will lead to consciousness. He cannot know how digital complexity will lead to consciousness and, it follows, he cannot know when.
I would therefore expect an answer along the lines of: “Hey Jon, super question! I believe that the ongoing development of digital systems will someday reach a point of complexity where consciousness is created resulting in a potential scenario of digital superintelligence gone rogue. I do not know when this is but it’s probably not a bad idea to start thinking about how we should regulate this type of research, you know, just in case.”
The principles of digital superintelligence versus biologicals/hybrids broadly align with opposing philosophical views about the nature of consciousness. Those who believe digital complexity can reproduce consciousness subscribe to Physicalism. Physicalists hold that the mind (consciousness) is a physical entity, such that if the brain is destroyed, the mind is destroyed. On the other hand, those who subscribe to Dualism believe that the mind is not material. Dualists hold that the mind transcends time and space and is not made by the brain.
Within both camps there are multiple theories and nuances of detail. Perhaps most interesting on the Dualist side (and therefore controversial to Physicalists) is a view that builds on our desire to reconcile quantum mechanics with the universe as we observe it. Proponents of this view propose that we only perceive a version of the universe through our senses, one that does not represent how the universe truly is. Further, they argue that the “observer effect” in quantum mechanics, where the act of observing something changes it, is proof that the “observer” is fundamental to the result. This phenomenon is then extrapolated to the claim that consciousness is a universal construct beyond matter. We are not just a third party looking out at the universe as it is (as the Physicalists would argue); rather, we are fundamental to how we see the universe.
Let’s remember that it has been nearly 100 years since the Copenhagen Interpretation of quantum mechanics, and we still have no agreed way of reconciling the quantum world with classical physics.
So are you with Musk on this or not?