What to ask Elon Musk about AI

Musk’s quote, “Artificial Intelligence is Mankind’s biggest threat”, paints a picture of a Terminator-esque future where the rise of the machines could be Mankind’s undoing. What does he know that we don’t? Let’s find out by asking him the question:

“Hey Elon, I was super interested to hear your views on AI. How do you think we will achieve a level of digital superintelligence, as you call it, that represents a threat to Mankind?” 

As simple as this question is, I will probably never get to put it to Musk. However, his choice of words when he talks about AI provides enough clues to work out what his answer would be if he were being candid.

Current AI

It’s generally accepted that the current and upcoming generations of AI will revolutionise the way we live and work. The near-term risk is workforce displacement as AI outperforms humans, and the main contributing factor to that risk is the speed at which adoption is taking place. The industrial revolution unfolded over a period of roughly 80 years, and even at that pace it transformed the workforce; AI has the potential to cause comparable disruption within a single generation. Pretty much everyone with any sense, including Musk, agrees with this.

This is very much within the context of the intelligence being artificial in the sense that it is algorithmic processing, with the results of that processing applied to our needs in narrow use cases. Musk’s use of the phrase “superintelligence” therefore suggests that he is concerned about a level of AI that has yet to be reached. Let’s call this Strong AI. Musk calls it “digital superintelligence”.

Strong AI

Right now, Strong AI doesn’t exist. However, one feared outcome is Strong AI gone rogue, where the capability acts against the interests of humankind. This, we can assume, is what Musk is concerned about.

First, we must agree on what Strong AI is. The simplest way of doing this is to consider what attributes a Strong AI must have to pose a risk beyond that of plain AI. Today’s AIs, which are essentially algorithms, can be controlled by a layer of checks and balances, oversight or governance: it’s just programming. If an AI cannot be controlled by this kind of policing, then it is in effect thinking for itself, or more specifically creating new information from its inputs rather than merely processing them. At that point a move has been made from AI to Strong AI, with the Strong AI to all intents and purposes being conscious, in the sense of being able to operate outside of its programming.
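
To make “it’s just programming” concrete, here is a minimal sketch (the action names and policy are entirely hypothetical) of how today’s AI can be policed by an ordinary layer of code that has the final say:

    ALLOWED_ACTIONS = {"recommend_product", "flag_for_review"}

    def policed_ai(model_output: str) -> str:
        # Oversight layer: a plain, auditable rule that the AI cannot
        # step outside of, because this code decides what executes.
        if model_output in ALLOWED_ACTIONS:
            return model_output
        return "blocked"

    print(policed_ai("recommend_product"))  # recommend_product
    print(policed_ai("launch_missiles"))    # blocked

A Strong AI, on this definition, would be precisely the system for which no such outer policing layer could be relied upon to hold.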

So how do we get from our current state of AI to Strong AI? The simple answer is we don’t know. Not even Musk knows. Any other answer to this question is speculation at best. But let’s not let this stop us in our tracks.

A function of digital complexity

Arguably the most widely held view in the AI community is that consciousness is a function of complexity. This is where Musk’s phrase “digital superintelligence” gives away his position: Musk must believe that the superintelligence we should fear is digitally based, and therefore that by increasing the complexity of digital systems we will at some point cross over into Strong AI.

In terms of current binary-based processing, we have not reached a point of complexity that gives rise to consciousness. We do not know where this point is; however, if quantum computing becomes a viable reality it will provide a step change in complexity. Will this be enough to spark consciousness? We don’t know. There is a little more to it than this, in terms of ensuring that the right programming is in place to effectively mirror a brain-like architecture (assuming that consciousness is created by the brain). But subscribers to digital complexity believe that given the right architecture and the right processing, consciousness could occur.

No Strong AI or biological/hybrids

If, on the other hand, you believe that no amount of programming and processing complexity will create consciousness, then you believe either that (i) we will never create a Strong AI, or that (ii) we can, but only by other means. “Other means” would almost certainly mean biological. This already happens around 350,000 times a day through the process of human birth, although only to human intelligence levels, not superintelligence levels. The real question is: if complexity alone does not create consciousness, can we bootstrap a biological solution, or create a hybrid biological/technological solution, that can outperform humans? Leaving the ethics of such a system to one side, the timeline to it is probably further out than that of a complexity-based digital superintelligence.

Musk’s reply

So even without getting Musk to answer the question himself, we know that he does not fear the current iterations of AI, and that he does believe digital complexity will lead to consciousness. He cannot know how digital complexity will lead to consciousness, and it follows that he cannot know when.

I would therefore expect an answer along the lines of: “Hey Jon, super question! I believe that the ongoing development of digital systems will someday reach a point of complexity where consciousness is created, resulting in a potential scenario of digital superintelligence gone rogue. I do not know when this will be, but it’s probably not a bad idea to start thinking about how we should regulate this type of research, you know, just in case.”

Philosophy 101

The principles of digital superintelligence versus biological/hybrid approaches broadly align with opposing philosophical views about the nature of consciousness. Those who believe digital complexity can reproduce consciousness subscribe to Physicalism. Physicalists believe that the mind (consciousness) is a physical entity, in the sense that if the brain is destroyed the mind is destroyed. Those who subscribe to Dualism, on the other hand, believe that the mind is not material. Dualists believe that the mind transcends time and space and is not made by the brain.

Within both camps there are multiple theories and nuances. Perhaps most interesting on the Dualist side (and therefore controversial to Physicalists) is a view that builds on our desire to reconcile quantum mechanics with the universe as we perceive it. Proponents of this view propose that we only perceive a version of the universe through our senses, and that this does not represent how the universe truly is. Further, they argue that the “observer effect” in quantum mechanics, where the act of observing something changes it, is evidence that the “observer” is fundamental to the result. This phenomenon is then extrapolated to the claim that consciousness is a universal construct beyond matter. We are not just a third party looking out at the universe as it is (as the Physicalists would argue); we are fundamental to how we see the universe.

Let’s remember that it’s been nearly 100 years since the Copenhagen interpretation of quantum mechanics was formulated, and we still have no agreed way of reconciling the quantum world with classical physics.

So are you with Musk on this or not?


Comments

    1. Great spot, Ben! Yes, he does seem to have softened his position. His Neuralink company (https://www.neuralink.com/) is exploring the technology/biological hybrid route. He also mentions that ultimately, as a result of a hybrid solution, we will be able to snapshot and upload ourselves in digital form, rendering our physical forms optional. So he still subscribes to the Physicalist view that consciousness can be digitised!


  1. I’m posting this reply on behalf of Gerald Janes who’s contacted me directly as he’s having problems with his WordPress credentials.

    From Gerald:

    I agree with your interpretation of Musk’s concerns about the potential adverse impact of Strong AI (or any AI) on mankind. I believe he may have been a bit dramatic in how he articulated his concerns, but if it stimulates debate that is a good thing.

    I like your approach to the idea of Strong AI, although I believe it might be difficult to ascribe attributes that clearly delineate it from ‘normal’ AI 🙂

    Why do I agree with the general direction of his views? I attended a presentation a while back where one of the speakers used a great phrase: “Algorithms have parents”. The point being made was that algorithms are written by humans, and humans have inbuilt bias, conscious and unconscious. It is reasonable to assume that such bias can find its way into a human-authored algorithm. I am not going to comment on the challenges of autonomously generated software (software writing software), but you can see the general direction of travel; things might get worse. It is now recognised that in many cases the data sets used to teach/train algorithms also contain bias, further compounding the problem.
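
    As a minimal sketch of the “algorithms have parents” point, with entirely made-up hiring data, a model trained on biased human decisions simply learns the bias back:

        # Historical decisions made by (biased) humans:
        # (years_experience, group, hired) - all data hypothetical
        training_data = [
            (5, "A", 1), (5, "B", 0),  # same experience, different outcome
            (3, "A", 1), (3, "B", 0),
            (8, "A", 1), (8, "B", 1),
        ]

        # A naive "model": score each group by its historical hire rate.
        hire_rate = {}
        for group in ("A", "B"):
            outcomes = [hired for _, g, hired in training_data if g == group]
            hire_rate[group] = sum(outcomes) / len(outcomes)

        # The model has inherited its parents' bias: group A now scores
        # higher than group B for identical experience.
        print(hire_rate)  # {'A': 1.0, 'B': 0.333...}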

    The issue of algorithmic bias is well known and recognised within the AI community (by AI I mean Artificial Intelligence, not Augmented Intelligence) but does not seem to be getting much traction elsewhere. I believe it is important to differentiate between Artificial Intelligence and Augmented Intelligence, both of which use the same acronym and are sometimes interchanged. Autonomous, automated processes can execute far faster than humans can react. If there were harmful outcomes from an autonomous, automated process, it may be difficult to react quickly enough. So I can see why Musk is concerned.

    It is recognised that there is a massive shortage of skilled Data Scientists, who are the ones driving this technology. My suspicion is that, as a result, the level of quality control and oversight may at times not be as good as it needs to be. This is an area that I believe will need far more attention as Artificial Intelligence enabled applications and processes are more widely deployed.

    You can already rent algorithms; Microsoft’s Azure Cognitive Services was one of the early proponents of renting out its algorithms. You can see that as such an ecosystem grows, maybe to become the algorithm app store of the future, there is the potential, as with the app store of today, for a lot of poor-quality algorithms to be available to an unsuspecting consumer. It is easy to predict what might happen as a result.
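
    For a sense of what “renting an algorithm” looks like in practice, here is a hedged sketch of calling a hosted Azure Cognitive Services endpoint over REST (the resource name and key are placeholders, and the exact path varies by service and API version):

        import requests

        endpoint = "https://<your-resource>.cognitiveservices.azure.com"
        path = "/text/analytics/v3.0/sentiment"  # illustrative service path
        headers = {
            # Azure's standard subscription-key header; the key is a placeholder
            "Ocp-Apim-Subscription-Key": "<your-key>",
            "Content-Type": "application/json",
        }
        body = {"documents": [{"id": "1", "language": "en",
                               "text": "Algorithms have parents."}]}

        response = requests.post(endpoint + path, headers=headers, json=body)
        print(response.json())  # the rented algorithm's verdict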

    I am not even going to dwell on the issue of malicious algorithms. Like the viruses and malware of today, I believe it is likely that, as widespread deployments occur, we will see a rise in this area that we are unlikely to be ready to deal with.

    I believe there is a case for a class of algorithms and applications that would be classed as Augmented Intelligence, in other words where a human remains in the loop. If so, I believe it would go a long way towards mitigating Musk’s concerns.
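
    A minimal sketch of that human-in-the-loop pattern (all names and thresholds are illustrative): the algorithm proposes, but nothing executes until a human signs off:

        def model_propose(case):
            # Stand-in for any automated decision process.
            return "approve" if case["score"] > 0.7 else "decline"

        def human_review(case, proposal):
            # The human stays in the loop: nothing executes without sign-off.
            answer = input(f"Model proposes '{proposal}' for {case['id']}. Accept? [y/n] ")
            return answer.strip().lower() == "y"

        def decide(case):
            proposal = model_propose(case)
            if human_review(case, proposal):
                return proposal  # human endorsed the algorithm
            return "escalate"    # human overrode it

        print(decide({"id": "case-42", "score": 0.82}))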


    1. Agreed, the point at which the switchover to Strong AI occurs is a discussion in itself. The philosopher Nick Bostrom (https://nickbostrom.com/), to whom both Elon Musk and Bill Gates should attribute most of their views (and who is a Physicalist), discusses this in his book “Superintelligence”. He proposes a scenario where the crossover occurs but the superintelligence then chooses to deliberately hide its true motives in what Bostrom calls the “covert preparation phase”. During this phase the superintelligence is preparing a strike against Mankind.

      Great point on algorithmic bias. We can and should be managing this now, at least to some degree, but as you point out we are probably falling short.

      Augmented Intelligence also brings up questions of hacking and censorship, whereby our information flow could be manipulated.

