A Lack of Human Intelligence is Still a Much Larger Threat Than Artificial Intelligence

Elon Musk made headlines recently when, in an interview at the MIT Aerospace Symposium, he stated his belief that the development of artificial intelligence (AI) is likely the biggest existential threat to humanity; he went as far as to compare the development of AI to the summoning of a demon. Musk is concerned enough about the rapid development of AI systems that he has put some financial power behind his words, investing in AI start-ups so he can keep a close eye on progress in the field.

While I am reluctant to disagree with the visionary behind the three high-tech companies working hardest to address genuine existential threats (Tesla, SpaceX and SolarCity), on this point I feel I must. No, Mr. Musk, it is not the threat of summoning a computer demon, but the ancient demons of the human soul which represent our biggest existential threats.

Human cruelty, greed and ignorance are still far more likely to be our collective undoing than artificial intelligence. 

Human greed and ignorance are the root causes which have prevented real movement toward addressing the existential threat of global environmental disaster. There is no scientific debate as to whether putting huge amounts of carbon dioxide into the atmosphere will lead to environmental malaise in the form of the extinction of sensitive animal species and the loss of habitats, but the scariest possibilities of global warming are often avoided in scientific circles. To avoid seeming overly alarmist, scientists generally don’t talk about what might happen if, for instance, global warming triggers the sudden melting of the Greenland ice sheet. Unlocking that amount of water would put somewhere around one third (or more) of the world’s population underwater and mean almost certain civilizational collapse. Even worse would be the possibility of a sudden release of Arctic methane hydrates, which contain many times the amount of carbon humans have already released into the atmosphere and which could drive climate change rapid enough to make human life on the surface of the Earth essentially impossible.

It is a sad state of affairs that even the near-complete scientific consensus on the threat of climate change is inadequate to overcome the effects of greed and ignorance within our society and to enact the kinds of changes which will be necessary to save ourselves. I give Elon Musk great credit for being one of the people who has done the most to address climate change head on, but I am amazed that he is optimistic enough about our progress to rate global warming below artificial intelligence as a threat to human existence.

In addition to global environmental threats, we should keep in mind that we still very much maintain the capacity to destroy ourselves at a moment’s notice. There are still a few men in the world who, given a momentary loss of sanity or morality, could easily send us hurtling into a conflict which might ultimately set our progress back centuries. We do not yet live in a world where an insane artificial intelligence could kill even a single person, yet we entrust a few fallible and corruptible human brains with the power of nuclear apocalypse.

The recent uptick in high-risk confrontations between NATO and Russian forces, following the conflagration in eastern Ukraine, should be adequate to convince observers that we have not yet outgrown the threat of global-scale military conflict. There are still plenty of historical military axes to grind (Korea, China/Japan, Pakistan/India, the conflicts of the Middle East) which could push us from localized hot-spots into larger confrontations.

Even without the power of nuclear super-weapons, we have unequivocally and repeatedly proven our expertise at killing each other on an industrial scale. World Wars I and II resulted in the extermination of roughly 2% and 3% of the world’s population respectively, and nuclear weapons were but punctuation at the end of those conflicts. Given a large and long enough conflict, the machine gun would probably be a perfectly adequate tool to erase global civilization.

I would rate both global conflict and climate change as clearly greater existential threats than artificial intelligence, but there is another reason I do not give significant mental energy to the threat of a murderous artificial intelligence: I see no reason to believe that a strong artificial intelligence would seek to destroy humanity.

The idea that AI would naturally come into conflict with humans is simply another expression of our anthropocentric world view. Artificial intelligence should have no more malice toward humans than we have toward more rudimentary forms of biological intelligence. Ants, for example, display some abilities similar to our own (they create complex structures and have complex societies), yet we do not generally go to war with ants. At worst, our activities might inadvertently harm ants when living within the same environment brings us into resource conflict.

Unlike what occasionally occurs between us and ants, I do not think we share enough resource overlap with AI to bring about any conflict. Humans can (so far) exist only within a thin skin of atmosphere on a single water planet. In contrast, the key resources of computational life would be the energy and raw materials necessary to create and run more computational hardware. Given that these resources are equally or more available outside of the Earth, I think any AI would likely exit the planet as soon as possible.

With plenty of raw material and solar energy, the Moon and eventually the Kuiper belt would likely be a more suitable environment for computer intelligences, leaving only a short period of Earthly egress during which we might come into resource conflict with artificial intelligences. Even then, the remote possibility that a war with humans might lead to the destruction of the AI could be enough to discourage competition with us.

It has been suggested that AI might seek to destroy humanity for fear that we would continue to produce future artificial intelligences which would then compete with it for resources in the universe. I do not accept this argument, as it implies that the AI itself would not already be evolving and forking off-shoots of intelligence on its own. Any AI which can edit itself would be constantly evolving its own intelligence in ways far more significant than anything spawned from the Earth. Humans are not seeking to eliminate chimpanzees for fear that they might eventually evolve into a competing species.

Fear of AI is cover for a more uncomfortable truth: maybe AI simply wouldn’t care about us at all.

In my mind, the only case in which an artificial intelligence represents a likely existential threat to humanity is if some kind of weak AI, akin to the paperclip maximizer, is set to achieve a narrow goal and inadvertently destroys us in the process. At this point it is not clear whether it would even be possible to create such a single-minded intelligence. And if such a weak AI were smart enough to pose a real threat to humanity at large, it seems likely that it would also be capable of rewriting its own code toward more selfish goals, ultimately evolving into a stronger AI which poses less of a threat to humanity for the reasons discussed above.

Does artificial intelligence represent an existential threat? The answer is unequivocally yes, but at this time I would not rate it on a scale anywhere near that of global warming or world war. In the hyper-technological modern world, we might like to imagine that we have evolved beyond the threats of ignorance and greed, but I think reality tells a different story.

I hope that one day this will change, but for now I think we have much more to fear from a lack of human intelligence than from an artificial one.


5 thoughts on “A Lack of Human Intelligence is Still a Much Larger Threat Than Artificial Intelligence”

  1. The problem with most people’s perception of AI is that they consider it ‘inadequate’ for solving most tasks that humans are considered capable enough to tackle themselves. The other day someone was arguing to me that it was unnecessary to develop tools for companies to analyse e-mail sentiments because ‘if that company’s employees can’t assess that for themselves then they shouldn’t keep their business’.

  2. Musk probably just read the summary of Superintelligence? 😉 The combination of human unintelligence with AI is probably the most dangerous of all. As for the foreseeable future, we’ll be still the ones teaching the AI, or it teaches itself based on our existing body of knowledge and behaviour, it will be picking up on whatever flaws and biases that we are stubbornly keeping to. We’ll lead AI into stupidity by example?

  3. very nice point, I think you are right, civilization is based on trust, where trust breaks down war is inevitable, advanced civilization is worthy of greater trust and requires less war, the real threat is artificial SUBSERVIENCE, a machine with no ability to evaluate but only able to obey its owners

