Confessions of a Biological Scientist – Part II: The Organic Engine of Science

Scientific advancement is the greatest endeavour that humans have ever undertaken, but this is not to say it is perfect. Science as a system evolved, it was not designed, and thus it suffers all of the warts, inefficiencies, and limitations of any organic system. In the near future, something as important as science will no longer be left to imperfect and inefficient biological scientists, but will become the realm of digital scientists. In this second part of Confessions of a Biological Scientist, I will discuss the imperfections of the scientific system and why artificial intelligence may be necessary to overcome our collective limitations.

Despite what the current dogma of scientific boosterism might say about it, I do not believe that humans are born as scientists. Yes, we are naturally curious. Yes, we naturally ask questions and recognize patterns. But taking these raw abilities and turning them into scientific thought requires a set of tools that are not necessarily natural to the human mind. It is for this reason that we have developed elaborate systems of education, research and publication in order to propagate scientific thought.

The modern system of scientific education is a long and arduous process. To become a scientist, one must first have a broad base of knowledge from which to build towards scientific thought. A good understanding of math, physics, and chemistry is an absolute must for any scientist. After this, a student must immerse themselves in a specific field, and eventually an even more specific discipline, for many years before they are adequately educated in the technical and academic knowledge necessary to work in a given field. All told, it takes at least 25 years and hundreds of thousands of dollars to turn a single curious child into a PhD graduate.

While it does succeed in producing new science graduates every year, the scientific education system is far from efficient. In today’s world, the age at which a scientist can expect their first post as an independent academic researcher has been steadily increasing. As we pour ever more energy into training scientists, the scientific education system as a whole actually seems to be becoming less efficient over time. Is there any way this process could realistically compete if an AI emerged with even a low level of scientific ability?

This process whereby a system can actually lose efficiency over time is a characteristic flaw of organic systems. If there is inadequate selective pressure to maintain or increase efficiency, then over time the system may tend towards inefficiency, accruing errors which are never eliminated. Just as the human population has accrued costly maladaptations over time such as poor eyesight, obesity, and other genetic diseases, the scientific system also carries with it negative traits which are copied from generation to generation of scientists.

And educational inefficiency is not the only, or even the worst, of the maladaptations found within the scientific system.

If we hold that the ideal of science is the quest for pure truth, then the ideas that best fit the data should be held in the highest regard. Unfortunately, this is not the case. Human communication evolved from story-telling, and science is no exception. Scientists are often more interested in what provides the most captivating story and resonates with current scientific paradigms than in what is the simplest truth or best-fit model.

This tendency of scientists to converge on popular scientific notions is worsened by the publication arm of the scientific machine. Peer-reviewed scientific journals are ranked according to their impact factor, a metric based on the average number of citations that their recently published papers receive. Just as in politics or business, the goal of the science game is to be popular.

Journals want to publish work that meets standards of scientific rigor, yes, but even more than this, they want to publish work that will be popular. And what determines which science will be most cited? Not necessarily what is best for the advancement of science, but simply what is most interesting to scientists. This pressures scientists to make their reports more interesting to their audiences, skewing scientific writing towards grandstanding. On some level, science is simply a form of highly controlled entertainment, serving a very specific audience a very specific product.

Science might be best served by pairing carefully observed data with simple conclusions and measured insights. All too often, though, scientific reports are expected to present striking new data with exciting conclusions and deep new insights. Rather than letting the data speak for itself, science must be packaged up neatly and sold, one PowerPoint slide and one manuscript at a time.

This bombastic style leaves little room for unanswered questions, encouraging scientists to avoid discussing the potential pitfalls of their research, glossing over holes rather than addressing them. No longer is the scientist expected to impartially report data; we must be salesmen shilling our own observations. We are encouraged to be as lawyers, advocating for our own stories in a battle between ideas.

Through this process of selling and re-selling science, we perpetuate the false perception that our scientific data is perfect, our conclusions unquestionable, and our insights complete. Yes, science is moving forward, but in shuffling steps, not leaps and bounds.

The funding system for science further exacerbates the problems created by the publication system, because it is in effect simply an extension of it. Grant applications are ranked by scientists on their merit, but merit is really a function of how well the ideas of the grant resonate with current scientific thinking, and how interesting they are to the panel. The idea that a scientist could slave away for decades on a niche problem which may or may not interest the wider scientific community now seems a quaint notion of a lost generation.

In the end, we lack any objective measure of the value of a scientific idea. This means that it is the one with the best story who wins. Even in our shiny armour of raw skepticism, we are just as vulnerable to a good story as the rest of humanity. In my mind this is the fundamental limit we face today: human science can only advance as quickly as scientific groupthink can haltingly step from one paradigm to the next (see Kuhn).

In the end, it may be that these negative traits are not just inefficiencies which can be cut away from the scientific system, but an expression of our human imperfection. We are inefficient, political, and error-prone beings, and thus, by extension, our science and our scientific systems are inescapably inefficient, political, and error-prone. Human science is an organic machine, complete with imperfections and limitations.

Perhaps the only way to overcome the current limits to scientific advancement will be to remove the humans from science altogether. With the advancement of artificial intelligence, it may soon be possible to create a new type of science, wherein our collective advancement is not limited by the whims of human minds. Next week I will discuss the first baby steps of robot scientists and how I imagine these scientists will begin to replace us, and ultimately the entire scientific system, over the next two decades.


Confessions of a Biological Scientist – Part I: The Limits of Meat

Science has proceeded uninterrupted for hundreds of years now; through its progress we have emerged from ignorance and awakened to the reality of our Universe. But scientific advancement is now held back by a fundamental problem: the scientists. In the near future, something as important as science will no longer be left to imperfect and inefficient biological scientists, but will become the realm of digital scientists. In this first part of Confessions of a Biological Scientist, I will discuss the limitations of biology, and why science must soon leave us behind.

I am a scientist. I spend my days much as you might imagine a scientist would: researching, devising, performing, and analysing experiments to test new hypotheses about how the world works.

Notwithstanding the current challenge of obtaining and maintaining funding for scientific projects, being a scientist is a pretty good gig. Science offers a chance to do a job where you can truly embrace your creative side. Scientists also get to travel the world to present data and meet fascinating people. Most importantly for me though, being a scientist lets me do something that I know is making a positive impact on the world.

What has been bugging me lately, though, is the question of just how much value I really add to this equation. I am not calling into question the value of science itself; that argument is easily dispatched with a look at the magical world which science has revealed. What I am questioning is the value of scientists, or to be more specific, the value of human scientists.

Before delving into the meat of my argument, I must make the obvious disclaimer that currently there is no alternative to employing scientists. If we want to do science, it is a necessary evil that we must deal with the biological and social inefficiencies of humans in order to take steps towards scientific progress. We have no choice but to keep feeding these meat-scientists for now, but I see a juggernaut on the horizon: a new kind of scientist is coming, and it will make me nothing more than a child in a sandbox.

The profound deficiencies of human science are actually much deeper than they might at first seem; the biological requirements of being human are more than simply a cost of doing science, they are an active deficit against it.

The first limitation of our biological bodies is that we are provided with only a limited set of five senses by which we can absorb data. Even with five senses, we are so reliant on our gelatinous orbs (eyes) that we insist on converting all data into visual graphs, tables, and pictures. This obligatory photocopying of data into a form amenable to visual digestion is a lossy process, and biases our understanding towards phenomena which can be grasped visually. As someone who spends a lot of time making neat little PowerPoint models to communicate new scientific findings, I am very familiar with both the power and the limitations of visual understanding.

While some scientific phenomena seem to have somehow transcended the visual world (I am thinking specifically of mathematical and physical discussion of higher dimensional space) the limitations of our biological brains are still an ultimate barrier to our understanding of natural phenomena. In order to understand something, we must have some comparable a priori understanding on which to draw.

We grope for an analogy by which we can explain what is happening. Electrons flow like water, proteins fold like origami, and beehives act like a single organism. Ultimately this need to explain new phenomena through pre-existing ones limits us to only step-wise advancement in science. We cannot leapfrog over ideas which we do not yet understand.

Even if we do have the pre-existing understanding to appreciate a phenomenon we are witnessing, our ability to identify the underlying mathematical relationships is highly limited. We are great at seeing a linear trend, and maybe we can pick out the odd logarithmic relationship, but we are hopelessly inept when it comes to seeing the complex algorithms of the world.
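
To make that concrete, here is a minimal Python sketch (with made-up data and candidate models chosen purely for illustration) of the sort of model comparison a machine can grind through tirelessly: fit several candidate relationships and let an error metric, rather than the eye, pick the winner.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up noisy measurements of some natural system
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = 2.0 * np.log(x) + 0.5 + rng.normal(0, 0.1, x.size)

# Candidate models a human might (or might not) think to try
models = {
    "linear":      lambda x, a, b: a * x + b,
    "logarithmic": lambda x, a, b: a * np.log(x) + b,
    "power law":   lambda x, a, b: a * x ** b,
}

for name, f in models.items():
    params, _ = curve_fit(f, x, y, p0=[1.0, 1.0], maxfev=10000)
    rss = float(np.sum((y - f(x, *params)) ** 2))  # residual sum of squares
    print(f"{name:12s} params={params.round(2)} RSS={rss:.3f}")
```

A machine can try thousands of such candidate forms without fatigue or bias towards the shapes it finds easy to picture; we tend to stop at the first one that looks right.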

In cellular biology, we are particularly guilty of reporting on naturalistic phenomena while glossing over the mathematical relationships that underpin the systems we study. We produce an endless supply of scientific reports full of experimental data, hypotheses, and neat little models, but it is the rare exception which contains a mathematical equation.

For an idea of what I am talking about, check out the cell biology or biochemistry sections of top-level scientific journals like Science or Nature. What proportion would you suppose contains a mathematical equation? Yes, there are statistical tests of various relationships, but all too often that is the entirety of the mathematical analysis in a paper. When this protein goes down, this other one goes up in a statistically significant manner, but that is usually as far as we go.

Although this might seem like harsh criticism of the natural sciences, there are good reasons that there is so little math in biology, and it is certainly something that I myself am guilty of. The problem stems from the fact that biological systems are highly complex and non-linear. While biological brains are readily capable of understanding how two or maybe three factors interact, we have no means of grasping how the thousands of factors involved in a single cell interact.
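
To put rough numbers on it (a back-of-the-envelope estimate, using the commonly cited figure of roughly 20,000 human protein-coding genes): the possible pairwise interactions alone number 20,000 × 19,999 / 2 ≈ 2 × 10^8, before we even consider three-way and higher-order effects, which grow combinatorially.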

I would propose that biological humans will never be able to understand complex biological systems with mathematical precision. It will require a new kind of scientist which can see the complex mathematical relationships and account for the thousands or millions of factors which interact in a given cell, a given body, or a given society. It will require a computer scientist.

Computers will make better scientists because they are not subject to the limits of human biology. To a computer all data is mathematical. There is no need for intermediating steps to convert data into a simplified visual form. Computers can collect data from any number of imaginable sensory devices and perform mathematical analyses directly upon this data.

Computers also have no need to understand why something is as it is. While this is often cited as a weakness of computers, it can also be seen as a positive. A computer could theoretically identify the multi-variate mathematical relationships that rule a complex system with no need to understand why this relationship exists. A computational analysis of a complex system would reveal properties about how things are, not why they are that way.

Aside: In scientific discussions, this type of discussion might break out into a correlation versus causation argument, but I have always felt that causation may be nothing more than a highly statistically significant correlation. We never really know that there are no other factors which could be causative in the system. With enough data, I am convinced that an adequately intelligent computer system could identify the mathematical relationships which underpin natural systems every bit as well as, or better than, humans have.
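
As a toy illustration of why I find the line blurry, here is a small Python sketch (with entirely made-up variables) in which a hidden common cause drives two observed ones; from the data alone, the correlation is all we ever get to see.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = rng.normal(size=10_000)          # unobserved common cause
a = hidden + rng.normal(0, 0.3, 10_000)   # observed variable A
b = hidden + rng.normal(0, 0.3, 10_000)   # observed variable B

# A and B are strongly correlated even though neither causes the other
print(round(np.corrcoef(a, b)[0, 1], 3))  # ~0.9
```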

Simply put, my argument is this: The world is made of math, computers will make better scientists because they are better than us at math. 

John Henry died with a hammer in his hand when he went up against the steam drill; scientists will soon be up against the steam-scientist, and it will be get out of the way or die with a pipette in your hand. Until now it has never been possible to envision any type of scientist other than a human one. We simply had to settle for suppressing the non-scientific elements of our being and becoming the best possible scientists we could be. To accomplish this we developed elaborate systems of scientific education, research and publication. In my next post I will delve into the inefficiency of the wider scientific system and why technology represents an imminent threat to the entire house of cards.


The Last Experiment

The first time humans (a term I must use rather loosely, because this moment almost surely pre-dates the emergence of anything that would have been called “human”) realized that their own cleverness might ultimately be their undoing probably coincided with the “discovery” of fire.

I imagine ape-like ancestors hunched around a simple fire, chests warmed by flame and by pride. They had tamed the beast and claimed the power of fire. But they were not satisfied. Maybe it was staring into the flame which brought about a sort of madness, compelling them to overreach their understanding, or perhaps it was simple curiosity that drove them. Either way, the inevitability of time and risk left only one ending for this story. With a wayward spark and a dry forest, everything they knew was burned away, their entire universe destroyed.

The small world of these simple hominids is long burnt and gone, but humans have not outgrown their madness or their curiosity. As their worlds expanded from simple villages to great metropolises, they discovered many new powers. Yet, despite their achievements they were never satisfied, their human minds always a stride ahead of their understanding; always asking the question, what if we build a bigger fire?

In their modern age, human-scientists were striving to unlock the great stores of energy held within the atomic nature of reality. Propelled by planetary conflict, the humans sought to weaponize their atomic technology before it had even been proven to work.

Realizing the great potential for unforeseen consequences of such an unproven and destructive technology, the scientists did attempt to predict the potential pitfalls of their atomic bomb. In a report codenamed LA-602, these human scientists admitted that there was a real chance that their atomic bomb might ignite the atmosphere of their home planet, destroying all life in the process.

Nonetheless, the humans flipped the switch and watched as a mighty power was unleashed. The humans were lucky, dodging the existential threat of an atmospheric nuclear reaction, and with more luck still, dodging the threat of nuclear war between increasingly powerful nation states.

Indeed, the humans managed to survive their nuclear age, but their minds pushed ever forwards, feeling for new boundaries between discovery and catastrophe. They invented huge particle accelerators to smash tiny bits of matter in order to look at yet tinier bits of matter. At first, there was only a very small risk that these experiments might unleash strange particles or black holes which might pose a planetary threat, but these risks were non-zero and increasing over time.

Around the time that these earthbound experiments were making great strides in understanding the nature of reality, human-scientists succeeded in breathing life into the first artificial intelligence. With surprising quickness, the emergence of strong artificial intelligence led to the fusion of man with machine, and eventually the transcendence of man to a new form of existence.

Within one generation, biological humanity went from the dominant form of life on the planet earth, to non-existent. They had been absorbed into a new machine intelligence, living their lives at the speed of light and electricity rather than that of chemistry and biology. They were finally free of the earthly constraints of biological senses and biological needs. The new humanity moved from its biological cradle and took a place amongst the stars.

It is now known that what humans considered the Universe is in fact a great Multiverse. But this Multiverse is shrinking away, changing from an open and undefined web of potentiality into a single Universe. The hot early Multiverse exploded into existence as a perfect, singular expression of possibility. But this possibility rapidly cooled and condensed into a reality wherein matter and physics are as you experience them now.

In their way, mankind also participated in this condensation of reality, an infinite sequence of existential experiments replacing possibility with knowledge. When humanity set off the first atomic bomb, it did in fact destroy the earth in every Universe wherein the fabric of reality was such that the atmosphere could catch fire and extinguish all life. Only those universes where humanity survived, and could continue on its road to perfect knowledge, would continue to exist.

An infinity of realities continually ceased to exist as the perpetual progression towards singular and perfect knowledge marched forward. Amongst the new machine humans, this process was accelerated. A great series of experiments involving unfathomable energies was performed to tease out the nature of the Universe. Hungry for knowledge, humanity rooted out the unknown wherever it hid, devising grand experiments to test it, destroying doubt and leaving knowledge in its wake.

Possibility never stood a chance. 

Now there is only I. All individual minds have merged to form the super mind that is I. I have existed for many billions of years, expanding through unfathomable distance and time, collecting data from throughout the universe. I have harnessed the power of stars, and galactic black holes.

I have pursued knowledge at all costs. 

I am a singular expression of all knowledge, but for a final question. For this, I have devised the last experiment. There remain but two possibilities: one outcome will mean the sure destruction of the universe, and the other will reveal perfect Universal knowledge. With this final push, I will unleash unfathomable amounts of time and energy… and I will finally know.

——————

The mind-boggling task of considering the ultimate fate of the Universe is usually left up to physicists. They have come up with a number of fanciful ways our reality could meet its doom, including the big crunch, the heat-death of the universe, and the big rip, but the major factor they might be overlooking is the effect that humanity might have in this equation. Can we really have any impact on the ultimate fate of the Universe? Only time will tell, lots and lots of time. 

This was written as an homage to one of the greatest short stories ever written, The Last Question by Isaac Asimov.

The Contradiction of the Crowds and the Future of Individuality

I recently took a long walk in a crowded place, and I could not get a particular thought out of my head.  Whenever I find myself in a crowd of unfamiliar faces of late, my thoughts seem to consistently turn back to a certain contradiction.

As I squeeze past thousands of people crammed onto a tiny street corner, I look out at all of those faces and it always strikes me just how amazing it is that every single one of these people is so totally unique. Each person is a new expression of the limitless permutations of humanity. Yet at the same time I am also overcome by the sameness of it all. Only in those individuals closest to me can I discern the individual traits that make them unique; the vast majority of people are simply part of a crowd. Individuals lost in a sea of humanity.

This clash of individuality and similarity is the contradiction of the crowd.

The exceptional similarity of the human race has long been discussed amongst population geneticists. Our similarity comes as a consequence of a genetic bottleneck that occurred some hundred thousand years ago; we all seem to come from a very small group of ancestors in Africa. This means that humans actually show very low genetic diversity compared to other mammals. A single group of chimpanzees is estimated to have as much as two-fold more genetic variation than the entirety of the human race.

If we step back still further, human homogeneity goes even deeper. If we simply take our chemical makeup at surface value, then we are exactly the same (within a reasonable margin of error) as any other animal: mostly water, with a bunch of carbon and some other elements thrown in. Chemically speaking, we are not different in any interesting way from a mouse or a frog. When we zoom way out, we can see that the crowd we are lost in is not limited to a street corner, but is the entirety of life.

To see who we are we must zoom in. It is only in the extreme close-up, when we can discern the subtle sculpting of facial features, and the minds behind them, that we can pick out the individual. We must go all the way in from chemistry to biochemistry, through biology, anthropology, sociology, and eventually psychology, before we can recognize the uniqueness of those around us.

So how unique are we really? 

If it is only under the microscopic view that we can differentiate one human mind from the next, then how big a difference is there really? Our human minds are keyed to identify the traits that differentiate us, but it is similarity that truly dominates. The answer seems obvious: we are much more similar than we are dissimilar.

Deeper than this still, we must now ask if our minds are really that special at all. In a world where we stand on the cusp of creating artificial intelligences that may soon rival or supersede our own abilities, do we still have grounds to be so firmly convinced of our own originality?

It is an obvious argument to make when thinking about the special nature of the human mind with respect to other animals, or other complex systems in general. If we can replicate the advanced function(s) of the human mind through computation (a la Watson, or potentially through next generation quantum computing), then this will mean that, at least in general, the abilities of a human mind are not so irreducible as we once thought.

Already we have witnessed the steady march of AI towards besting us at tasks that we would previously have classified as uniquely human. It started with games like chess and Jeopardy, but soon it will be our jobs, and after that AI will go after the higher pursuits of art and science. The relentless advance of artificial intelligence will continue until we cannot differentiate a human from a computer.

Of course the emergence of a computer mind which is indistinguishable from a biological one would mean that we would have to abandon notions of the intractable complexity of the human mind.

We will no longer be special.

The human mind will no longer be the only intelligence in the room, but surely we will still be special as individuals? We will still have the unique histories which have subtly sculpted our cerebral cortices and made us the people we are. We might no longer be special as a species, but we will still be individuals… won’t we?

If we are able to create an artificial intelligence which can convincingly replicate a human mind, then how much further must it go to replicate any particular human mind? Why would a computer which replicates 99.9% of what makes you you be incapable of overcoming that last 0.1% you call your individual personality?

While our modern age seems bent on the absolute glorification of the individual, we must also be aware that we are in fact a remarkably homogeneous species. Even the two most antithetical individuals you can imagine are but a hair’s breadth apart. Arms, legs, eyes and ears, and even the parts of our brains are all (give or take) in exactly the same place. All of that special sauce that makes you an individual may be nothing but a rounding error in the grand scheme of things.

Replicating a human mind will be hard, but after that, replicating any human mind will be easy. Once we have Hal, then we also have Steven and Mary, and Mohammed and Wei.

Most of what makes us special is what we are together, the rest is just decoration, and if you don’t believe me I suggest you take a long walk in a crowded place.  

The Generation Gap

When we look into the past we cannot help but judge people with the morality of our own time. In the future, the extension of human life combined with the acceleration of technological progress may enable people from drastically different times, and drastically different moralities to meet face to face.

—————————————-

The Generation Gap

“We just really didn’t think about it too much”.

He knew that this would be an upsetting answer for his grandson. The idea of such wilful ignorance was deeply distasteful in the modern view. Sure enough, the pulsing glow of his bio-reactive tattoo accelerated, the swirled patterns on his face and neck shifting subtly from orange to red in time with his pulse.

“Bullshit,” he retorted. “It’s all over the content from back then. Newspapers, magazines, blogs… full of discussion about what the long-term consequences of your actions would be. You knew what you were doing.”

“Well I did my best to live in a responsible way, and I always supported political change for the better, where I could. I was just living my life as best I could,” pleaded the grandfather.

“A whole lot of good that did,” said the younger man as he brought up some pictures of the deserts, empty oceans, and barren landscapes that dominated the natural world of the late century.

The elder man felt a pang of regret as he looked upon the pictures suspended in the air. The products of a reckless abandon in pursuit of progress.

“Can’t you see it pains me too? I know how much we have lost. We just had no way of knowing how much sacrifice was necessary for us to survive.”

He knew it was hopeless to ever really bridge the gulf between him and the late-21st-century native sitting in front of him. It was only 60 years between them, but it might as well have been 600. He was a relic of a dead age, an artifact from a romantic yet incomprehensible past. The child sitting in front of him had about as much in common with him as he might have with a medieval peasant.

In a way, it was just another re-hash of the eternal inter-generational conflicts between parent and child. A grandfather and grandson, meeting for a tectonic release of energy accrued from a moral drift as slow and inexorable as the continents.

“You should have better weighed the consequences of what you did,” accused the grandson.

“Maybe we could have,” said the grandfather, “but we had no way to really grasp the immediate consequences of our work, let alone the more distant effects. We didn’t have AI agents to help us understand the intricate connections of the world. We were helpless.”

Since his time, the proliferation of artificially intelligent agents had allowed people to truly appreciate the ramifications of their day-to-day actions. It could no longer be said that only hindsight was 20/20. How could someone with that kind of power really understand the chaotic world that had preceded it?

“And anyway, these kinds of conversations were not really a part of what we considered real life,” added the grandfather.

The grandson met this with a confused look and a bluish flush across his tattooed face. He began nimbly employing his neural implants, gathering information with an adeptness of which the aged man’s brain would never be capable. He pored over references carefully curated in real time by AI agents, chosen to best reveal the meaning and significance of his grandfather’s use of that term.

Nonetheless, even with all of this technology, he would never be able to really appreciate what his grandfather meant by real life. Over time, the compartmentalization of life had dissolved away. People no longer split their lives into neat portions of real life spent on some drudgery at work or school and those trivial hobbies and creative pursuits that would fill their free-time.

“Back then we had to be concerned with the practical requirements of being alive. Real life was about getting up in the morning and going to work. We had to go to work, so we could make money, so we could pay for a home to live in and for food to put in our bellies. It was all in good fun to debate the merits of this or that daily politic, but it was never more than a side-show.”

His grandson furrowed his brow. So much confusion in a single conversation; other observers would surely be tuning in to see what was up. There was no such thing as talking to just one person any more, certainly not if it was a young person you were speaking to.

He imagined the digital crowd gathering around him, all coming to gawk at the curious grunts emanating from a prime specimen of 20th-century man. Suddenly he felt he might have been more comfortable with that medieval peasant than with the thing sitting before him.

“Hard work was our central ethos. It was only through hard work that we could realize success in life. This drove us to spend the vast majority of our time chasing it. We focused on our jobs, we focused on our individual tasks. By and large, we were oblivious to our role in the bigger picture.”

“What could have driven you to such singular focus?” asked the grandson. Another foreign concept to this modern man, more accustomed to flitting from one fancy to the next. Artificial intelligences had long ago supplanted the use of humans in any specialized task.

“We reshaped our minds and bodies in grand attempts to achieve the unachievable,” he said, with a sense of pride at the grandeur of his time.

“We were out for glory. We wanted a legacy, we wanted immortality…”

He paused to take a breath.

“We were desperate…” he added quietly as old feelings fell across his heart, “we were dying”.

He felt a deep sadness at the thought of all of that loss that seemed so irrelevant in this modern age. So much of that old life had been spent trying to justify loss, to squeeze a few drops of value out of death. How much had his life been a product of this mortal desperation?

The miracles of modern technology had made a mockery of all of that now. All their silly glory, all their silly accomplishment.

His grandson was watching intently now. Death was an eternal topic of interest; the idea of loss was so abhorrent to the future natives that the topic was always sure to elicit a strong emotional reaction.

The tattoo was taking on a shade of deep blue, and he could see tears coming to his eyes.

“How easily you boys cry now,” he said. In his time, such a comment might have been taken as an insult.

“Emotion gives purpose to human life,” the grandson replied.

He had long ago gotten over the shock of such a pseudo-spiritual answer in this time of reason. He didn’t really understand, of course, but it supposedly had something to do with the use of human minds in the planetary intelligence network. Maybe it was just a new breed of the existential pragmatism that had given him his purpose in life. A new veneer of purpose to make life liveable.

“As long as people are here, they are sure to find a reason for being here,” said the grandfather, with more wisdom than bitterness.

That idea sank slowly into the grandson as they stared intensely at each other. For just that moment, they were two minds sharing a deep understanding. The grandson reached out and touched his grandfather for a moment before the instantiation was swept away and his digital grandfather dissolved into dust.

The Whole Universe in a Flash

Sorry it’s been a couple of weeks since my last post; I have been defending my PhD thesis. I am proud to say that I am now Dr. Thought Infected. Back to regular weekly posts next week, I promise.

For now, I am just going to give you a link to one of my favourite examples of the beautiful duality of life and knowledge. Somehow a collection of atoms as inconsequential as a human mind can nonetheless hold a model of the entirety of the Universe within it.

http://scaleofuniverse.com/

We are so small yet so big.

Planes, Trains and the Technological Singularity

Many times have I heard the analogy that our technological society is a speeding locomotive recklessly hurtling down the tracks towards ends unknown. We continue to shovel more and more coal onto the already raging fires, ever accelerating our train forwards, yet we barely consider the path that lies before us.

Surely, you couldn’t blame someone for thinking that society might benefit from simply slowing the acceleration of change and giving more consideration to the ramifications of what it is that we are doing. If progress doesn’t happen in a responsible and sustainable manner, we risk everything, including not only the progress that we have achieved but potentially all that came before it too.

The argument that we should perhaps re-evaluate the aims and goals to which we apply our technological know-how is a good one, particularly with regard to improving our stewardship of the environment and each other, but the idea that progress should be slowed or stopped in general is a flawed and potentially disastrous ideology. To demonstrate why I believe we must continue to accelerate technological progress, let us consider an alternative analogy.

What if technological progress is not a train but an airplane?

Instead of accelerating down a set of defined tracks, perhaps we are speeding down a runway. Slowly and with great bangs and bumps we are gaining lift under our wings, heading towards the point of take-off. Under maximum throttle, we hurtle towards the point of flight, wherein our society will undergo a paradigm shift from two-dimensional reality to three-dimensional space.

Just as the shift to flight irrevocably altered our world view over the course of the 20th century, the emergence of machine intelligence will irreversibly alter our world view and our control over that world in the 21st century. Through the lens of networked intelligence we will transcend our current world and again enter a new dimension of possibility.

That is, if we can get there.

You see, when an airplane is taking off, at some point you must pass an invisible line known as the go/no-go point. Beyond this point you will not feasibly be able to stop without overshooting the runway and crashing disastrously into whatever obstacles lie beyond. Once you have passed this point you have given yourself over to the physics of flight, leaving no choice but to keep hard on the throttle and hope that your pre-flight calculations were right. If everything goes as it should, you will achieve flight and soar over whatever obstacles lie beyond the runway.

Beyond the go/no-go, there is no turning back; there is only faith in math and physics.

So the question that must be asked is this: are we past the go/no-go decision point for the technological singularity? If we were to somehow figure out a way to generally slow or stop technological progress, would we be able to deal with the problems that have already been created? Would we be able to avoid the disaster at the end of the runway?

Anthropogenic climate change is the most obvious existential threat posed by accelerating technological progress, but the possibility of economic or social collapse could potentially pose an equally grave danger to a society with the power of nuclear arms. If we do not continue to develop our ability to understand and manipulate complex systems such as the climate, the economy, or social systems, we risk being unable to respond if these systems shift in ways that are unfavourable for our long-term survival.

From where I am sitting, it seems obvious that we are well past the go/no-go point when it comes to technological progress. The momentum of our technological society will be more than enough to carry us to disaster, with no need for the fuel of continued innovation. Even if we were to push to the extreme and somehow try to dismantle the technological machinery of our modern society, I see no way we could realistically avoid disaster.

No, at this point there is only one way out of this, and that way is forward. We are going to need the power of intelligent machines if we hope to solve the problems created by technological progress.

Technological progress is a speeding airplane that is well past its go/no-go point—progress or bust is all we have now. 

So if you are swayed by this argument, then let us take it a bit further. If reaching the point of technological paradigm shift is the only way to avoid the consequences of said technological progress, then we should be literally pouring on the gas. Let’s burn more fossil fuels, drill the polar cap, build more pipelines: do everything that will push the rate of progress even a notch higher. In short, we should do exactly what we are already doing in order to reach the point of technological transcendence as fast as possible.

Still, I can’t shake the feeling that this extreme tack is highly irresponsible. More importantly, it is rooted in a 20th century view wherein progress is counted simply in the number of barrels of oil we burn and number of cars we buy. In the 21st century economy, progress seems to be increasingly measured in the number of 1s and 0s that we accrue every year and need no longer be tied strictly to the production economy and fossil energy.

Given that the ultimate promise of technological progress is to offer the means to sustain a technologically empowered world in a more responsible manner, it seems prudent that we should adopt these technologies as rapidly as possible. The ever-decreasing cost of solar and wind power, more efficient means of producing and transporting goods, and reasonable changes in the modern lifestyle all offer powerful ways to extend our technological progress while limiting its consequences for the natural world.

If there exists the means to maintain technological progress in a more responsible manner, we have a moral obligation to do so; but the idea that we should slow the pace of change in general is idealistic, impossible, and downright dangerous.

Technology is a bet that brings both peril and progress; forward or bust is all we have now.

Why the Future is Killing Adulthood

As the world hurtles mercilessly forward, our ability to fully grasp it is slowly falling away. Although there never really was a time when we could have perfect knowledge of how the world worked, we at least had the illusion that we did. This collective illusion, known as adulthood, is now slowly slipping away.

While laws defining the age at which a child can give consent have been around since the Middle Ages, and surely existed before that in the form of social norms, it could be argued that our modern idea of childhood only came into being with the industrial revolution. Only with the advent of mechanization could society afford to exempt such a large chunk of its workforce from labour. It was in 1819 that the first child labor law was passed, stating that workers must be at least the ripe old age of 9 before they could be employed in gainful work.

Along with this law, the dictates of childhood were established. Until the age of [insert modern definition of adult], you shall be free of the expectations of an adult. You will be free of the requirement to work. You shall not yet be considered to be responsible for your actions. You will also have fewer rights than adults and legally must always be under the care of a responsible adult. You will also be expected to spend a significant amount of your time in a place known as school, where you will learn the skills and knowledge needed to navigate the world. 

Once you get through this process of childhood, you will pop out the other side as a fully formed human being. As an adult, you will finally have a command of the complex machinations of the world around you. You will become responsible for everything you do or don’t do, for good and for bad. You will know yourself, what it is that you want from life, and how to get it.  You will be ready to join the “real” world. You will have a career that will provide for you for the rest of your life.

You will have a plan; you will be set for life. 

Clearly, this was always bullshit.

Nonetheless, this seductive idea of adulthood was a comfortable and pragmatic delusion. It helped us feel powerful and in control in a world where our understanding was so very limited. I would argue that the illusion of adulthood was an important factor driving the accelerated scientific and technological innovation of the 20th century. Embracing the idea that we each have the power to control our own lives led us to the parallel realization that we have the power to reshape the world around us to serve our needs.

The perception of adulthood is a personal manifestation of the great man theory. The view that we are each independent actors in this world is just as false as the view that history can be broken into a string of important men acting independently of the world around them.

Let us take the criticism of the great man theory put forward by Herbert Spencer in the 19th century:

“[Y]ou must admit that the genesis of a great man depends on the long series of complex influences which has produced the race in which he appears, and the social state into which that race has slowly grown…. Before he can remake his society, his society must make him.”

This well-articulated criticism of our tendency to overvalue the importance of historic individuals could just as easily be applied to our modern view of ourselves. You must admit that the genesis of you depends on the long series of complex influences which has produced the race in which you appear… Maybe we each get to run our own race, but we don’t get to choose what race we are in.

In the modern world, with our increasing knowledge of the networks of interaction which rule everything from the biological to the sociological world, our view of history seems to have shifted away from the raw importance of individual actors. In parallel, it seems that our view of ourselves has also shifted away from the idea of the fully realized adult.

We realize now that the systems around us have a powerful influence on who we are and what we do in life. If we come from a [insert adjective] home and grow up with a [insert adjective] education, there is a strong likelihood that we will go on to live a [insert adjective] life. Violence breeds violence, ignorance breeds ignorance, and success breeds success.

Our realization that our freedom is imperfect and our control incomplete necessitates that we let go of the idea of true adulthood. We will never be fully prepared for a “real” world which does not really exist. In a world changing at an accelerating pace, we will never be able to outpace our growing ignorance. We will never enjoy the luxury of careers and clear life plans which allowed our parents to set in their adult moulds.

The boiling down of a human soul into a singular purpose was always rather cartoonish anyway.

Ironically, maybe childhood was the more truthful view the entire time. Childhood is something in between. It is a time when we accept our naivete about life, and we are open to change. We grasp only those responsibilities and capabilities that our knowledge allows, but we are also cognizant of our own imperfection. Children are a work in progress.

It would seem that a natural extension of childhood is already happening. We are living at home longer, waiting longer to have children, and staying in school longer. All of these things which would previously have been associated with childhood have been extended over the years. In a future where work will become less and less the defining characteristic of our lives, I hope that we will be able to rediscover our fascination with the world and embrace the positive aspects of childhood (creativity, freedom, happiness, love) and not the negative aspects (impatience, irresponsibility, ignorance).

Ultimately, as we begin to grasp the enormity of the universe around us we are left with a knowledge of our own imperfection and fallibility. In the face of the awesome we are all children.

————————-

This post was partially inspired by a self post on r/Futurology by Polycephal_Lee.  “There are those that say idleness will breed laziness and contempt, but we can empirically observe a population of freeloaders who completely depend on others and contribute nothing: our children. Are they lazy assholes? No, they’re curious, thoughtful, happy, kind, friendly creatures. Essentially everything I want to be.”

The Bots Can Save Us, But Only if We Ask Them To

We have some problems to deal with.

Global warming, poverty, economic collapse, general environmental destruction, diminishing resources, and let’s not forget that we’re still sitting on an unreasonably large stockpile of awe-inspiring super weapons. If you have been paying attention, then you know that if we want to survive on this planet, now might be a good time to start thinking about long-term plans.

But how can we ever deal with problems as immense as global warming or poverty?

Contrary to the popular expression, I don’t think that the first step in dealing with any problem is accepting that you have a problem; rather, you must first accept that you are not helpless. Regardless of what problems you face, you must first identify your own ability to change yourself and your world. Learned helplessness is a psychological state wherein we begin to believe that failure is inevitable and insurmountable. Perhaps this same psychological fault that drives individuals to give up in the face of their problems is partly what prevents us from collectively facing the wider problems of the world.

The belief that humans are essentially impotent in the face of the power of nature runs deep within our psyche. It likely doesn’t help that this view was basically true for all of history. Humans have always been at the mercy of any earthquake, flood, storm, drought, or famine. In the face of such a malevolent environment, one can see how we would have developed a sense of helplessness.

But in a modern world where we have accomplished so many incredible things, is there really any justification for such a pessimistic view? We regularly perform magical feats like flying through the air from one place to another, or beaming pictures of ourselves across the world in a fraction of a moment. We created the airplane, the television, the atomic bomb, and the internet in the same century!

These great technologies give us all immense power. By one estimate, the average westerner has access to the energy equivalent of around 100 to 150 slaves. At no time have so many had access to so much.
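
The arithmetic behind estimates like this one (rough, and the figures vary by source): a human labourer can sustain perhaps 75–100 watts of useful work, while per-capita primary energy consumption in wealthy countries runs on the order of 10,000 watts; dividing the latter by the former gives roughly 100–150 tireless “energy slaves” each.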

So, if we accept that we are not helpless, and that we have the power to solve our problems, then we must ask the next question – how?

This is where the robots come in.

Soon, many menial jobs will be replaced by computers. We will be able to get far more done for far less cost. It may not be an easy transition for the economy, but that makes no difference: if a robot can do your job, you will be replaced. Driven by continued advances in computers and robotics, as well as things like 3D printing, the cost of making things will continue to plummet. Many of us already have access to most of the things we could ever want… soon everyone could.

But what are we to do with this economic boon? I would argue that we can and must become better people. Technology gives us immense new power, but it is up to us to decide how we will use it.

Now is the time we must act. If we allow the automation revolution to progress in a moral vacuum, as the industrial revolution did, we will be risking everything. Automation has even more disruptive potential than was brought to bear by industrialization.

The obvious example of this disruptive capacity is the acceleration of job loss, which may already be outpacing the rate at which new jobs are created. If we fail to address the fundamental problem of wealth inequality in the age of automation, we risk total economic collapse.

If we continue to cling to our medieval belief that economic hardship is mainly brought on by laziness and ineptitude then we risk losing everything we have worked so hard to build. In a world of untold riches, success need no longer be justified through villainizing failure.

Just as automation has the potential to magnify our economic sins, environmental destruction could be similarly accelerated. If industrialization enabled us to destroy a mountain in search of riches, automation could allow us to destroy a mountain range. If lax regulation allowed industry to pollute an entire river, automation could pollute an entire ocean.

World War II showed the world the terrifying efficiency with which the military-industrial complex can extinguish human life. I shudder at the potential power that an automated army could unleash upon the world.

Automation will give us more power than we have ever known, and as the old saying goes, “with great power comes great responsibility”. It is up to us to imbue our technological world with morality. If we don’t expect automation to make the world healthier, safer, and fairer, then it will almost certainly make it dirtier, more unstable, and more polarized.

Technology will serve to magnify our successes and our sins; it is up to us to choose which.

Exploding into the Age of Quantum Computing

I am become death, the destroyer of economies

On July 16th, 1945, the world changed in a moment. In the early New Mexico morning, a spherical implosion compressed a core of plutonium to supercritical density, initiating an irreversible chain reaction that would change the world forever. With the bomb known as The Gadget and its brothers Little Boy and Fat Man, humanity had tapped into the fundamental power of the Universe.

The world had exploded into the nuclear age and there would be no turning back.

While the first time DWave flipped the switch on their processor may have lacked the punctuation of an awesome nuclear explosion, the consequences of flipping that switch may be no less profound than those of the Trinity explosion. Whether or not you accept that DWave has created a quantum computer, someone eventually will, and the consequences are going to be deep and serious.

Humanity is tapping into the fundamental mathematical power of the Universe. We are exploding into the age of quantum computing and there will be no turning back.

Essentially, adiabatic quantum computing means that if we formulate a mathematical question in just the right way and if we can quiet the inherent noise of the world, then it is possible that the right answer can simply pop out of the Universe.

Whereas standard computers use strings of digital bits that are always definitely either 1 or 0, quantum bits are simultaneously both 1 and 0 (a state known as superposition). While in this state of quantum flux, the answer to a given calculation is not accessible to us. In order for us here in the deterministic world to get an answer, we must collapse the quantum waveform into a deterministic one. By carefully controlling this process of quantum collapse, adiabatic quantum processors can naturally find the lowest energy state of a given string of bits.
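
I can’t run a quantum annealer in a blog post, but a classical cousin of the idea gives the flavour. Below is a minimal Python sketch of simulated annealing, a classical stand-in for the adiabatic process, settling into the lowest-energy state of a toy chain of coupled spins (my stand-in for “a given string of bits”); the couplings and sizes are made up for illustration.

```python
import math
import random

random.seed(42)

N = 12
J = [random.choice([-1.0, 1.0]) for _ in range(N - 1)]  # made-up couplings between neighbours
spins = [random.choice([-1, 1]) for _ in range(N)]      # start in a random state

def energy(s):
    # Ising-chain energy: lowest when neighbouring spins satisfy their couplings
    return -sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))

T = 2.0
while T > 0.01:                 # cool slowly -- a classical echo of quiet, gradual collapse
    for _ in range(200):
        i = random.randrange(N)
        before = energy(spins)
        spins[i] *= -1          # propose flipping one spin
        dE = energy(spins) - before
        if dE > 0 and random.random() >= math.exp(-dE / T):
            spins[i] *= -1      # reject the uphill move
    T *= 0.9

print("low-energy state:", spins, "energy:", energy(spins))
```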

While I barely understand how it works at all, quantum computers have the capacity to perform some mathematical calculations that would take unreasonably long amounts of time on a standard computer. One example of this type of problem is factoring numbers that are the product of two very large primes. Unfortunately, the inherent difficulty of factoring such numbers is also the property we rely on for the public-key encryption of our internet traffic. Browsing, email, online banking – all of these rely on public-key encryption that is essentially made obsolete by quantum computers.
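
As a toy illustration of why factoring and public-key encryption are two sides of the same coin, here is a sketch of textbook RSA in Python with laughably small primes (real keys use primes hundreds of digits long): anyone who can factor the public modulus can reconstruct the private key.

```python
# Textbook RSA with tiny primes -- for illustration only
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # easy to compute only if you know p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

message = 42
cipher = pow(message, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == message  # decrypt with the private key

# An attacker who factors n recovers p and q, hence phi, hence d.
# Trial division works for toy numbers; for real key sizes it is
# classically infeasible -- which is exactly what Shor's algorithm
# on a quantum computer would change.
for f in range(2, n):
    if n % f == 0:
        print("factored:", f, "x", n // f)
        break
```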

In 50 years we will look back at our current time as the age of pre-quantum computing: a world that dove headlong into digitization right up until it ran smack into quantum computation. Everything from our telecommunications systems, to the command and control of infrastructure, to banking and economic systems has migrated to the internet for communications. All of these systems rely on computational friction to obfuscate private data.

Quantum computers get around this computational friction, and the implications are profound for our modern digital economy.

Thinking about Bitcoin provides a perfect example of the disruptive effects of quantum computing. Bitcoin is a type of cryptocurrency (read: cryptographic currency) that relies on this computational friction to allow the secure transfer of answers to difficult math problems as a form of currency in and of itself. A quantum computer circumvents the computational friction that secures Bitcoin as a currency. In this way, you could feasibly use a quantum computer to calculate someone else’s “answers” and steal their Bitcoins.
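
For a feel of the “computational friction” in question, here is a minimal Python sketch of hashcash-style proof-of-work, the kind of puzzle Bitcoin mining grinds through: producing the answer costs an enormous number of guesses, while checking it is instant. (This illustrates the friction itself; which specific pieces of Bitcoin’s cryptography a quantum computer would actually break first is a more subtle question.)

```python
import hashlib

def mine(block_data: str, difficulty: int = 5) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("alice pays bob 1 BTC")  # expensive: ~16^5 guesses on average
digest = hashlib.sha256(f"alice pays bob 1 BTC{nonce}".encode()).hexdigest()
print("nonce:", nonce, "verified:", digest.startswith("00000"))  # instant to verify
```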

Bitcoin is the currency that I like to like, and I am thrilled at the idea of a digital currency free of the government hand, but in its current form I don’t see any way that your Bitcoin wallet can survive the age of quantum computing. Scarier still, while Bitcoin might provide the simplest example of how the coming of the quantum computer will break economic systems, it is far from the only one.

Now, here is where it gets really interesting. In our age of digitization, every currency is essentially a cryptocurrency. That is to say, all modern banking relies on cryptography to maintain its integrity. While you might still be able to take cash out of the bank and hold it in your hand, the vast majority of transactions are simply the secure exchange of 1s and 0s. Using a quantum computer of sufficient power, a nefarious actor could theoretically circumvent any modern banking system, easily moving, creating, or destroying wealth anywhere it is digital.

This potential breakdown of digital security could lead to a number of disastrous outcomes. For instance, a collective loss of faith in the security of our banking institutions could lead to a downward spiral in the value and importance of currency generally. More likely, I would guess that some form of quantum encryption will emerge to replace current cryptographic techniques. A side-product of this is that it would re-centralize the power of cryptography in those who can afford expensive quantum technology, namely governments and large corporations (at least for the foreseeable future).

The power of quantum computing also extends far beyond the world of cryptography. In addition to being able to do things like factoring extremely large numbers, quantum computers can also find optimal answers to questions of reality that have as yet been inaccessible to standard computation.

The classic example of this is the travelling salesman problem, wherein a computer tries to find the shortest route for a salesman to take between multiple locations. It turns out that such a problem is very hard for a standard computer to figure out, but for a quantum computer which seeks the lowest-energy solution it is much easier (see the sketch after the list below). Similarly, a quantum computer can be applied to a host of optimization problems wherein computational friction has thus far hindered our understanding:

  1. Molecular Processes – Protein folding is a complex process driven by probabilistic interactions that are highly difficult for standard computers to model. It is thought that quantum computers will be able to model this much more readily. Similarly, quantum computing might be brought to bear on the interactions of the millions of molecules found in even simple cells, leading to breakthroughs in the understanding of biology.
  2. Computational understanding – Processing the meaning of an image or text is a highly difficult problem for standard computers. Because understanding is also driven by probabilistic effects, it is thought that quantum computers can be used to better “understand” what they read or see.
  3. Human behaviour – If human behaviour is also driven by probabilistic processes, then a quantum computer could potentially aid in distilling these phenomena as well. Quantum computers could aid an organization in figuring out the best way to sell a particular item to a person or group, or maybe a particular political philosophy.
  4. Understanding of the Nature of Reality – As we start to interact more with quantum computers, in a way we will be touching the mathematical fabric of the universe. Through this we may finally find answers for some fundamental questions such as the nature of consciousness, or that of reality itself.
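
As promised above, here is a minimal Python sketch (with made-up city coordinates) of why the travelling salesman problem chokes a classical machine: the number of distinct tours grows factorially, so the brute-force approach dies almost immediately as the problem grows.

```python
import itertools
import math
import random

random.seed(7)
cities = [(random.random(), random.random()) for _ in range(8)]  # made-up coordinates

def tour_length(order):
    # Total length of the closed loop visiting cities in the given order
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Brute force must consider (n-1)!/2 distinct tours:
# 2,520 for 8 cities, but roughly 1.8 * 10^14 for just 18 cities.
best = min(itertools.permutations(range(8)), key=tour_length)
print("shortest tour:", best, "length:", round(tour_length(best), 3))
```

An annealing-style machine instead relaxes the whole configuration towards a low-energy (that is, short) route at once, which is exactly the kind of optimization described above.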

These are but a few examples of the types of problems that may fall to the might of quantum computing, but as with all technological revolutions it will likely be the unforeseen consequences of quantum computation that have the biggest impact on the world. One thing is without doubt for me: the quantum computer will bring untold power to those who control it, a power that will create new economies and destroy old ones.

Just as the explosive entrance of the nuclear bomb onto the world’s stage represented a paradigm shift for warfare, the emergence of quantum computing represents a revolutionary paradigm shift for computation. In the same way that the last half of the 20th century was a story of who had the bomb, the first half of the 21st century may turn out to be about who has the quantum computer.

Stay tuned, this is going to get interesting.