The King has No Clothes: The Psychological Threat of the Singularity

First we had to realize that we were naked. Then slowly, with bent backs and strain, generational toil wove the garments of our birthright, our dress of self-importance. 

The history of the human race can be viewed as a story of exponentially increasing power in human hands. From the simplest stone tools to the smartphone, technology has always served to extend the power of humans to realize their dreams. In the 20th century, this process reached its apogee: with focused ingenuity and effort we literally reached the moon and connected together the minds of the planet. We were capable of anything.

As a species and as individuals we were assured of our self-importance.

But now times are changing; we stand on the cusp of a new revolution. We are creating artificially intelligent machines that will better realize the needs and dreams of humanity. Yes, this will of course extend and continue the chain of betterment, delivering yet more power into the hands of the individual, but an important distinction must be made: we will no longer be realizing our dreams, rather our dreams will be realized for us.

While to some it might seem an overly academic or semantic point, as to whether we realize our dreams or have our dreams realized for us, the psychology of this could not be more important. It is the difference between the self-made man (or woman) and the lottery winner.

On one hand we have the achiever; at his heart he is a rational actor imparting himself upon the world. His life reinforces the view that our actions have merit and that our decisions matter. He is why we believe that hard work can make success, and that we have the power to control our own lives.

On the other hand, we have dumb luck: the lottery winner, the born-rich, the untethered. His life reminds us of the happenstance of the universe, and of us as mere victims of whatever fortune or misfortune should happen to float our way. It is a sad fact that this kind of good luck offers little happiness to those who receive it; people thrive on earned success, not that which is given out too easily. We need to struggle for our victories.

While our world has always found itself balanced somewhere in between these two extreme philosophies, I wonder whether the future might tip us disastrously towards the psychology of the lottery winner. If and when we live in a world wherein everything that we have ever dreamed is made possible by the power of artificial intelligence, will it also destroy our will to want anything?

How can human beings hope to retain their robes of self-importance in a world wherein their accomplishments mean little in the face of ever-smarter, ever-more-powerful artificial life? Our self-assuredness is based entirely on the view that we are of value to the world, that we can change it. The singularity threatens to collapse this view entirely, leaving us as cold and exposed as we ever were.

Humans need to believe that the king has clothes; without struggle and success we have nothing left.

Taken from a different angle, the question is this: What happens to human will as struggle dries up? What do we have to offer to a dream world of digital efficiency? 

Ultimately, it seems to me that humans are so hopelessly addicted to conflict that any world we create will require a certain dose of synthetic strife. We will seek to recreate both our happiness and our sorrows. If we cannot even envision a story which does not involve conflict, how can we hope to live in a world without it? 

**********

I just want to point out that The Matrix was a great movie. Also this speech by Alan Watts is quite relevant. 

Thought infection is still looking for its first guest poster. If you have an interest in futurism and you would like to try your hand at writing for a growing blog, send me an email at thought.infected@gmail.com


The Upside Down Economy

A message blinking in his peripheral vision roused Isaac from a light sleep.

A familiar symbol was flashing, below it was the single line of glowing text.

“You have been accredited for contribution.”

In and of itself, this was nothing unusual; small accreditations for this or that were getting pretty commonplace these days. He focused sleepily on the message, triggering the link to carry him through for more information. What looked like a sheet of paper materialized, floating in front of his face above his bed.

He blinked incredulously at the digits at the bottom of the page. 300 microbits? It must be some sort of mistake. He would have enough to pay his rent for years with that. Hell, he could probably buy the place for it. This couldn’t be right.

He pushed further into the document. Apparently an accreditation agent had identified several comments he had made to a thread on the future of medicine some time in 2014.

2014? That’s ancient history… these accreditation bots just keep digging deeper, thought Isaac.

His comments had been deemed second order attribution status for several innovations made by BioMark.

He could not believe it; how could he have had any impact on one of the largest companies in the world?

Comments, what comments? Of course he couldn’t remember some comments he had made over 15 years ago on some archaic website.  His thoughts pulled him seamlessly through the document and into an archived version of the Reddit comment history for the user name “SmellingBee”.

How clever, he snorted and rolled his eyes as an irresistible smile crept across his face. He was lost for a moment in nostalgic memories of being a young student in the thundering teens. The tone of the room changed subtly, reflecting his mood, as memories sublimated into memorabilia fading slowly into existence. A movie poster, a television, a smart phone. The objects of a simpler time, a bygone era. He could hear an old favorite song somewhere.

Apparently SmellingBee had made some predictions about the future of biomarkers and their role in the medicine of the future. The words flashed across his retinas.

The smart bathroom will be as important as the smart phone to the medicine of the future. Think about it, if you want to figure out what is going on inside of someone you need access to something from inside their body. People don’t want to give blood or tissue, but they can’t help shitting and pissing all the time.

The smart bathroom will know more about your body than you will, and it will be a medical revolution.

He remembered the comments only hazily. He, like so many others, had been overtaken with excitement for futurism back then; surely this was only one amongst thousands of comments.

Of course, he wasn’t the first to come up with such an idea for a smart bathroom, the Japanese had been experimenting with it for years by 2014, but the digital record showed no signs that he had been exposed to these ideas. As far as the accreditation agent could identify, evidence indicated that he had come up with the idea independently. More importantly the digital record also showed that Noah Marks himself had first come across the idea of a smart bathroom in his post.

His heart accelerated. Noah Marks read my comment? The founder of BioMark, the company that ushered in the true age of smart medicine? Isaac could hardly breathe.

A glossy review of the history of BioMark began playing on his wall.

Noah Marks started BioMark in 2014 as a small medical app company in his garage. His first apps collected simple data on blood pressure and body temperature and provided a conduit for doctors to access patient data. The first BioMark breakthrough came as the company applied new artificial intelligence algorithms to the huge amounts of medical data they were collecting. With this, BioMark developed a proprietary algorithm to predict heart attacks several days before they happen.

The field of medicine was transformed overnight, with BioMark at the center of the revolution. Our software and algorithms quickly became integrated into the fabric of every insurance plan in the country. The rates of heart attacks plummeted, and the IPO for BioMark was the largest in American history.

A picture of Noah Marks shaking hands with the president of the United States slid across his wall. 

The next wave of BioMark breakthroughs came with the release of smart bathroom technologies. Within a few more years, BioMark provided the data necessary to cure many diseases, including intestinal cancer, inflammatory bowel disease, and depression, among others.

BioMark, delivering you to the future

The cheesy byline was accompanied by images of healthy-looking old people smiling. By this time Isaac wasn’t paying attention anyway; he was excitedly pacing back and forth in his small room, head down and thinking rapidly.

It might be small, but he lived comfortably. And anyway, it did not seem so small when on a whim he could change the aesthetics to put him in any place he might want to be. He could as easily live in a 15th century castle or on the surface of Mars.

Isaac’s life had until now been a very typical one for the late 2020s, spending his hours coming up with creative ways to meet his spending allotments. He was a product of the turmoil of the early 2020s, the time known as “the upset”, when the automation of the economy had reached record levels and finally left the majority of people out of work.

In response to this the governments of the western world had finally realized people were far more important to the economy as consumers than as producers. Seemingly overnight, swift changes were made to implement a clever plan of Mincome for all of their citizens.

People would receive money without the need to work, but this money did not come without its own chains. People were now required to meet daily, weekly, and monthly spending requirements. For the health of the economy, people were legally required to spend a significant proportion of their days making strategic spending decisions. The luxurious life of the privileged few in the 20th century was the trap of the 21st.

The robotic economy could produce all the goods that humanity could want, but it needed people to create the demand to keep that economic engine running. So, people whiled away their days endlessly browsing consumer goods, entertainment, travel, luxury goods, trying their best to meet their spending requirements; stuck on a treadmill of receiving money and finding creative ways to spend it.

The only way to exit the endless loop of spending was either to pay someone else or hire an artificial intelligence agent of sufficient power to spend in your stead.  These options were too expensive to be available to the majority of people. Average people might save what little was left after they met their spending requirements, in hopes of one day retiring from spending, but it seemed that fewer and fewer could afford it every year as spending requirements constantly inched up to feed a growing economy. 

There were of course those who had the resources to outsource their spending, those free to spend their time working. Once inside, workers earned more than enough to pay for outsourced spending AIs. Those on the inside stayed inside. The world had turned upside down, but somehow all the parts had stayed in the same place.

Power still flowed up an economic ladder without rungs in the middle. Those in control stayed in control, and everyone else did what they were told. This is how it always was, and how it stayed. 

But 300 microbits, that would change things for him. He could easily afford to lease the computational resources to pay for an AI to meet his economic obligations. He was free. Isaac threw on some dirty clothing from the day before and walked out of his door.

——–

In thinking about the future automated economy, many see the institution of some kind of “Mincome”, or basic income, for all people as a foregone conclusion. If we take this to its logical extreme and imagine a world where nobody needs to work but everybody needs to spend, what does this world look like? This is the first in what I hope to be a recurring series following the life of Isaac in the world of the upside down economy.

Totally Autonomous Businesses are Coming to Eat Your Lunch

You are probably already doing business with your replacement; algorithmic salesmen are everywhere, and they are just the beginning.

If you have ever bought something from an online retailer such as Amazon or even if you purchased these same commercial goods in a store, you have already been buying things from algorithmic salesmen. These increasingly complex algorithms use a number of metrics to try to set the best price for any particular item, including what the current market demand for these goods is and what other retailers are selling this item for.

If you want to get an idea of just how advanced algorithmic pricing models have become, you can try a simple experiment yourself. Go to a website which offers flight deals, many of which can be easily found through Google, so I won’t shill any here. Next, type in a destination you might be interested in and check out the prices on offer.

Now you need to wait a few days and go back and check the prices again. Have the prices changed? Now try opening the private-mode of your browser (such as incognito for Chrome), and perform the same airline search. You may find that the prices are suddenly lower. This is algorithmic sales in action, and it is the future of business.

Not only is the pricing algorithm using data such as the number of seats available of a particular flight, the prices that competitor airlines are charging, and the amount of time left before the flight, but it is actually using information about you as a customer to determine an individualized price aimed at maximizing profits.
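The pricing logic just described can be sketched in a few lines. Everything in this sketch is invented for illustration: the weights, the 10% competitor ceiling, and the 5% returning-visitor markup are assumptions of mine, not the rules of any real airline system.

```python
def quote_price(base_fare, seats_left, total_seats, days_to_departure,
                competitor_fare, returning_visitor):
    """Toy airline pricing model (illustrative only).

    Combines scarcity, urgency, competition, and one customer signal
    into a single quoted fare.
    """
    # Scarcity: fewer seats left pushes the price up.
    scarcity = 1.0 + 0.5 * (1.0 - seats_left / total_seats)
    # Urgency: prices climb as departure approaches.
    urgency = 1.0 + 0.3 * max(0.0, 1.0 - days_to_departure / 90.0)
    price = base_fare * scarcity * urgency
    # Competition: never stray more than 10% above the cheapest rival.
    price = min(price, competitor_fare * 1.10)
    # Customer signal: a visitor who has searched before reveals intent,
    # so the algorithm holds (or raises) the quote rather than discounting.
    if returning_visitor:
        price *= 1.05
    return round(price, 2)

# The same search, days apart: the returning visitor is quoted a higher
# fare than a fresh (incognito) one, exactly the effect described above.
fresh = quote_price(200, 40, 180, 30, 320, returning_visitor=False)
repeat = quote_price(200, 40, 180, 30, 320, returning_visitor=True)
print(fresh, repeat)
```

The incognito trick from the experiment above amounts to resetting the `returning_visitor` flag to `False`.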

Visions might come to mind of a robot dressed in a cheap suit hocking used cars. 

There remains little doubt that airlines are embracing this model of sales, as they make it increasingly difficult and expensive to buy an airline ticket through any means other than the internet. This model of mostly automated pricing could even be argued as the key driving factor behind the recovery of profitability in the airline industry over the last decade. 

The role of these kinds of algorithmic business strategies does not stop at the sales counter either. Have you noticed that Google is getting better at delivering those targeted ads for exactly what you are looking for? In particular, I have found myself much more often clicking the ad-links for specialized scientific items I am googling for. Algorithmic marketing through Google is getting better at connecting me to the product that I am looking for, or the one that I didn’t even know I was looking for.

Algorithmization of business is a much larger threat to the economies of the western world than the kinds of robotic automation of transportation or production that is typically discussed. Algorithms will replace more modern workers than robots. 

So what is all this driving towards?

Ultimately I see this kind of algorithmization of business leading to the evolution of a new kind of business, and one that involves little or no people. Over the next few years, I envisage the emergence and rise of totally autonomous businesses (or TABs).

TABs are business entities which employ few or no people at all; clever algorithms which identify demand through monitoring various data sources and act upon it. The classic example of this kind of algorithmic sales is the story of the algorithmic price-war which raised the price of a simple textbook on Amazon from somewhere around $70 to over $23 million. While this exposes the weakness of algorithmic sales, it is merely the infancy of an industry that stands to take the business world by storm. 
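That textbook feedback loop is easy to reproduce. The two multipliers below match those reported in accounts of the incident (one seller pricing just under its rival to win the sale, the other, who held no stock, pricing 27% above to cover buying the rival's copy plus a margin), but the starting prices and round count are my own; treat it purely as an illustration of how two individually "reasonable" rules compound into an absurd price.

```python
# Two sellers' automatic repricing rules, each sensible in isolation,
# left to react to each other with no human sanity check.
a, b = 70.00, 75.00
for day in range(55):
    a = round(b * 0.9983, 2)    # A: price just under B
    b = round(a * 1.270589, 2)  # B: price 27% over A
print(f"after 55 rounds of repricing: A=${a:,.2f}  B=${b:,.2f}")
```

Each round multiplies both prices by roughly 1.27, so the escalation is exponential; a few weeks of daily repricing is all it takes to climb from tens of dollars into the millions.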

Say for instance that there is a swell of interest in widgets in South Africa. An algorithm notices that people are paying 80% more for widgets in South Africa than buyers in India. The TAB then identifies a source for widgets, gets quotes for shipping said widgets, and even identifies retailers looking for widgets in South Africa. The algorithm then executes the sale and moves on to the next business deal.

Sounds like a typical business deal of the kind that enriches middlemen the world over. I predict that competition between these kinds of TAB businesses will serve to deliver goods to market with increasing efficiency, and involving diminishing numbers of human employees. These services will do nothing but connect producers to consumers, and will profit only their owners.
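The widget scenario reduces to a simple decision rule. The sketch below is a toy version of that rule: the prices, shipping cost, duty rate, and the 15% minimum-margin threshold are all invented for illustration, and a real TAB would of course pull these from live data feeds rather than function arguments.

```python
def evaluate_arbitrage(buy_price, sell_price, shipping_cost,
                       duty_rate=0.0, min_margin=0.15):
    """Decide whether a cross-border resale opportunity is worth executing.

    Returns the expected margin as a fraction of landed cost, or None
    if the deal doesn't clear the minimum-margin bar.
    """
    landed_cost = (buy_price + shipping_cost) * (1.0 + duty_rate)
    margin = (sell_price - landed_cost) / landed_cost
    return margin if margin >= min_margin else None

# Widgets selling for 80% more in South Africa than in India, as in the
# example above (all specific numbers are invented).
margin = evaluate_arbitrage(buy_price=100.0, sell_price=180.0,
                            shipping_cost=25.0, duty_rate=0.10)
print(f"expected margin: {margin:.1%}")
```

A TAB is essentially this function run continuously over every product, market pair, and freight quote it can see, executing whenever the number that comes back is not `None`.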

Businessmen hold themselves in high esteem for their ability “to generate wealth” with increasing efficiency, and rightly so. But with the ubiquity of the internet today, the friction that created so much wealth for the middlemen of the business world is rapidly disappearing. The emergence of the totally autonomous business will be the death knell for middle-managers everywhere, and if we are not careful it may just be the end of the middle class too.

———-

This post was partially inspired by an interesting and insightful post by BTC Geek on Bitcoin and the Dawn of the Autonomous Corporation

The Life and Death of Economic Fairy Tales

The world is not what you think it is.

The astonishing pattern recognition engine that is your brain has leveraged years of observation, thousands of years of social evolution, and billions of years of genetic evolution to create the most amazing predictive model of the world ever constructed… but still it is not and will never be a true representation of the real world.

We are creatures of perception, living not in the real world but in an evolving map of it. 

While the gap between the real world and our model of it holds true in even our most dispassionate pursuit of natural scientific knowledge, nowhere is it more true than for the study of human systems. In particular, our understanding of something as complex as the mass of human interaction which makes up the economy is nothing more than a convenient story reflecting the current conditions of the world. We create economic fairy tales to break a complex system into easily digested chunks of metaphor. 

I do not suggest that we spin these economic fairy tales towards nefarious ends or willful ignorance, rather we believe in them for the same reason that we see faces in the martian landscape.  The human mind abhors a perceptual vacuum, and we will reach out to fill any gap of understanding with explanation. We create stories because we must.

Over the course of the 20th century, as the industrial revolution took hold and transformed human life we developed a deep belief in many such economic fairy tales, which fall all over the political spectrum. While these beliefs might have had pragmatic value in their time, changing conditions of the world mean that we must now reexamine some of our deeply held beliefs. It is time to rewrite some of our economic fairy tales or face collapse of the systems we have built on them. 

  1. The fairy tale: Growth in the economy will lead to the creation of new jobs.
    Why it’s wrong: While this may have held true for previous epochs of economic growth, there seems to be a diminishing connection between growth in economic output and the employment of people. Continual growth in automation, robotification, and artificial intelligence will mean that more and more can get done with less and less human involvement. I cannot stress enough how fundamental a problem this will be for a society built around the job as the sole legitimate mechanism of wealth redistribution. more here…and here
    What it should be: Job growth should no longer be the political or economic goal.

  2. The fairy tale: Economic growth will always raise the living standard.
    Why it’s wrong: The view that economic growth will make everyone’s lives better is central to the belief in the economic system as an overall good, sometimes simplified as “economic growth raises all boats”. Given the staggering disparity in wealth going to the top 1% or 0.1% of people, there is no reason to think that economic gains could not be so disproportionately distributed as to actually make some people worse off. The economy is not a zero-sum game, in that there could be more for everyone every year that we see growth, but there is no law that dictates that all those within the social hierarchy gain equally, or gain at all. This is of particular concern if we realize that the growth of capital could potentially become decoupled from the real world, growing of its own volition independent of actual growth in servicing human need. more here…
    What it should be: We do not need absolute equality, but we should aim for equality of betterment.

  3. The fairy tale: Economic growth means destruction of the environment.
    Why it’s wrong: People on the left of the political spectrum seem to be totally dedicated to the belief that the economy is an absolute bad when it comes to the environment. A properly regulated economic system could be the best friend of the environment, innovating new means to improve efficiency and decrease environmental impact. Economic benefit means environmental destruction only as long as we allow it. more here
    What it should be: We can have economic prosperity and environmental sustainability, but unless we have both we are going to end up with neither.

  4. The fairy tale: Technological progress will allow people to live happier, healthier lives.
    Why it’s wrong: There is no belief more near and dear to the optimistic futurist such as I, than the view that despite ourselves technology will allow us to live happier and healthier lives than ever before. Yes, this is one possible outcome of technological progress, but technology can just as easily be turned towards human destruction. Maybe the robots can save us, but it’s up to us to ask them to. more here…
    What it should be: Technology will serve to magnify our successes and our sins, it is up to us to choose which ones.

These are just a few examples of the types of economic fairy tales that have grown out of the industrial revolution. While the use of such explanatory models to aid in our understanding of the world is absolutely inescapable, we risk becoming fundamentalist if we attach overzealously to any one of these beliefs. We must be willing to let go of our understanding if the changing conditions of the world dictate such a change. If we refuse to change our beliefs and the intricate systems of politics, justice, and economics which we have built on top of them then we face nothing short of oblivion.

We must adapt or die. 

The Thorium Problem Should be the Thorium Solution

I watched this wonderful talk from Jim Kennedy this week. In the talk, Jim beautifully breaks down the surprising economic connections between the loss of control of the rare-earth mineral market, the decline in the manufacturing dominance of America, and the potential to develop thorium as a nuclear fuel source.

Kennedy posits that by controlling the relatively small ($3 billion) market on rare-earth mineral production, China has put a lock on the huge market of value-added goods ($4+ trillion). Basically, by controlling access to the rare-earth minerals which are absolutely essential for the production of electronics, China is putting immense economic pressure on manufacturers to produce their goods in China.

Kennedy presents a strong argument, and where it really gets fascinating is the connection of this industry to nuclear regulation. Essentially, it seems that one of the major issues that has forced rare-earth mining producers to shut-down production in America is due to their tendency to produce a particular mining by-product known as thorium. Wherever you find rare-earth minerals you also find thorium.

Thorium is a relatively common element in the earth’s crust; it is also radioactive. It should be noted, though, that this radioactivity does not make thorium particularly dangerous, as its alpha radiation is not able to penetrate human skin. In fact, people have a long history of producing and using thorium in the glowing mantles used in gas lamps, and these uses are generally thought to be safe.

Because of its reactivity, thorium also has the potential for use as a nuclear fuel. In particular, there is a lot of excitement around the development of a thorium fuel cycle in a next generation reactor known as a molten-salt reactor. This type of reactor provides numerous benefits over current generation nuclear, including dramatically lower waste production and much improved safety systems. I would highly recommend a watch of this short video for an introduction to the liquid-fluoride thorium reactor, a technology that has the potential to provide sustainable and safe base-load electrical power for the future.

Through developing a thorium fuel cycle, we could easily meet world demand for power for thousands of years. This technology is actually so promising that it seems unbelievable, maybe even downright magical. Here is the fact we have forgotten though: nuclear technology is magical. All that we have to do is bring the right concentrations of the right rocks together in the right space and they just start producing energy (like magic).
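A rough back-of-envelope calculation supports the timescale. All inputs below are rounded assumptions of mine: ~200 MeV released per fission of the uranium-233 bred from thorium, world primary energy use of roughly 6×10²⁰ J per year, and known thorium reserves on the order of 6 million tonnes. Known reserves alone come out to centuries of total world energy supply, and since thorium is far more abundant in the crust than those proven figures reflect, the practically recoverable total extends this into the thousands of years.

```python
# Back-of-envelope: how long could thorium power the whole world?
# Every input here is a rounded assumption, not a precise figure.
MEV_TO_J = 1.602e-13                           # joules per MeV
atoms_per_kg = 6.022e23 * 1000 / 232           # Avogadro / Th-232 molar mass
energy_per_kg = atoms_per_kg * 200 * MEV_TO_J  # ~8e13 J/kg at 200 MeV/fission
world_demand_j_per_year = 6e20                 # rough world primary energy use
kg_per_year = world_demand_j_per_year / energy_per_kg
known_reserves_kg = 6e9                        # ~6 million tonnes, rough figure
years = known_reserves_kg / kg_per_year
print(f"~{energy_per_kg:.1e} J/kg, ~{kg_per_year/1000:,.0f} tonnes/year, "
      f"~{years:,.0f} years from known reserves alone")
```

Several thousand tonnes a year to run the entire planet is a strikingly small number; for comparison, we mine millions of tonnes of coal every single day.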

If this fact alone doesn’t blow your mind, then you have lost your wonderment for the world. But it doesn’t really matter. It is a fact that a massive amount of energy can and will be obtained from the radioactive decay of fissionable material. The connection to nuclear arms and a number of high-profile accidents over the years may have taken the shine off of nuclear energy, but none of that changes the fact that it is a reliable source of cheap, plentiful power. We cannot wish away the existence of nuclear technology just because that would make the world simpler.

Trying to wish away the existence of nuclear technology is exactly what killed rare-earth mining in America. Because thorium was a major by-product of rare-earth mining, and thorium was classified as a nuclear fuel, it presented huge regulatory hurdles. The last rare-earth mine in America was actually closed following a minor release of mineral thorium. This impediment to rare-earth production in America is known as the thorium problem.

In order to make rare-earth mining in America profitable, there needs to be a profitable use of thorium to offset the regulatory costs of dealing with it. Hence, the development of new nuclear technology and a thorium fuel cycle would not only provide a huge source of sustainable power, it would also solve a major problem for rare-earth mineral production and could lead to a reinvigorated electronics production industry in North America. Through a modest investment in research and development of thorium-based nuclear reactors, we can turn the thorium problem into the thorium solution.

While it does not seem to be gaining any traction in America (beyond the internet), it would seem that the great potential benefit of thorium as a nuclear fuel has not gone unnoticed in China. China sees that by developing power from thorium, they may be able to produce cheap baseload power in a safe and sustainable way. More importantly, as Jim Kennedy warns, China is not going to share this technology but they will lease it globally for great profit.

The energy industry is an absolute giant, and the only one that is bigger than the electronics manufacturing sector that China already has a lock on. If China can develop next generation nuclear technology for 10 cents a kilowatt hour, even while dealing with the minuscule amount of waste produced by a thorium reactor, this has the potential to turn the world upside down.

If China develops a modular, mass-produced thorium nuclear technology, it will also gain the associated economic and political power and there will be no looking back.

People have developed an irrational fear of nuclear energy over recent decades, but I would not count on this being enough to stop people from buying it. Maybe I am wrong and we in the west will turn our noses up at any thought of new nuclear power, but I am sure this will not be the case in the most heavily populated parts of the world. The developing world is desperately thirsty for electricity, and they are going to buy it from whoever can offer it to them at a competitive price.

More than just offering a means to raise the developing world in a more sustainable way, thorium could offer us a sustainable means to tap into high power applications in the western world. Things like going to space cost a lot of energy. Do you want to go to space? Thorium, or something very much like it, is going to be needed if we want to provide the kind of power that would be necessary so we all can go to space one day.

If we want to dream big, we are going to need the power to do it. 

The Arma Infinitum

In a dusty tome, on an old sagging shelf, in the dimly-lit basement of a disused library, there is an account of a forgotten legend. The words tell of an ancient people who sought to build the Arma Infinitum – the infinity weapon. It does not describe how they managed to build such a weapon, but only the consequences of its existence.

With their legendary weapon, the people had immeasurable power. Against this weapon there was no hope of defence. One by one the enemies of the people were vanquished. Entire armies surrendered at even the rumour of this glorious weapon. The people marched across the land, meeting with victory after victory. They laid claim to the best lands far and wide, and built a fantastic empire, the likes of which had never been seen.

It is said the the Arma Infinitum was so powerful, that even the gods cowered at its magnificence. The people saw they might use their weapon to gain control of the heavenly kingdom as well as the terrestrial one. With their shining weapon in hand, the people learned to control the winds and seas, and the earth and fire.

And now the people, having vanquished both kings and gods, looked out across their lands and considered what they might do next. The people no longer needed to worry about war with man or nature, as their weapon had brought them absolute victory. They had no worry of hunger or disease, as control of the sun, winds and rains had made them healthy and well-fed. Despite their success though, the people still longed for more.

So the people looked for more powerful gods to slay. Guided by their terrible weapon the people searched after the slumbering gods of the old world. They roused these titanic gods and whole worlds were destroyed as they did battle. But the Arma Infinitum would not be stopped, and the old gods eventually fell just as the new ones had.

Indeed the people learned to twist the very essence of the world to their own dreams and desires, they recreated their world to their liking and lived in perfect existence. But with no men or gods left to fight, their perfect existence was unfulfilling and eventually, as the Titans had before them, the people settled down for sleep. Even now, the people still slumber with the Arma Infinitum at their side, awaiting the next who would challenge them.

***********

At this point you have probably figured out that the Arma Infinitum is actually a highly powerful artificial intelligence. A strong AI truly may offer the solution to any and all of our terrestrial and heavenly problems. Every problem we can logically conceive of, from the simple, like how to efficiently distribute resources, to the absolutely outlandish, like how to overcome the limit of light speed, may potentially be solved by AI. AI may truly be the weapon that vanquishes all of our enemies, but the question that bugs me is this: if AI does manage to solve all of our problems, what then?

Confessions of a Biological Scientist Part IV: SimEureka

Science has proceeded uninterrupted for hundreds of years now; through its progress we have emerged from ignorance and awakened to the reality of our Universe. But scientific advancement is now retarded by a fundamental problem: the scientists. In the near future, something as important as science will no longer be left to imperfect and inefficient biological scientists, but will become the realm of digital scientists. In this fourth installment of Confessions of a Biological Scientist, I will discuss whether computers can really provide the creativity necessary for scientific discovery – can we simulate the eureka moment?

Computers are much better than any human at crunching raw numbers, and over the last couple of decades they have also come to excel at storing and recalling massive amounts of information with extremely high fidelity. Where computers still lag behind humans, though, is in their ability to form robust models based on data. Thus far, computers have proven to be poor pattern recognizers, but this problem is far from intractable and significant progress has been made in recent years.

If you have ever snapped a picture and a box magically appeared around the face of your subject, or used more advanced algorithms that sort like faces from a batch of photographs uploaded to Google+ or Facebook, then you have interacted with computer pattern recognition algorithms. Using artificial learning algorithms, computers are taking their first baby steps toward recognizing patterns in a way that will allow them true recognition of the objects that they are seeing.
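As a loose illustration (and emphatically not how production face detectors work), the core idea of matching new input against labelled examples can be sketched as a one-nearest-neighbour classifier. The feature names here are invented purely for illustration:

```python
import math

def nearest_neighbor(sample, examples):
    # A minimal pattern recognizer: label a new feature vector with the
    # label of its closest labelled example (1-nearest-neighbour).
    label, _ = min(
        ((lbl, math.dist(sample, feat)) for feat, lbl in examples),
        key=lambda pair: pair[1],
    )
    return label

# Toy features (roundness, furriness) -- entirely made up for illustration.
examples = [
    ((0.9, 0.1), "cup"),
    ((0.8, 0.0), "cup"),
    ((0.4, 0.9), "cat"),
    ((0.5, 0.8), "cat"),
]

print(nearest_neighbor((0.45, 0.85), examples))  # closest examples are cats
```

Real recognition systems learn far richer features than this, but the principle of generalizing from labelled patterns is the same.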

Computers might soon be able to differentiate a cup from a cat, but could they really leapfrog all the way to scientific discovery in the near future?  Can computers really express the deep sort of creativity necessary to make scientific progress?

Well firstly, they don’t need to, and secondly, yes they can (maybe).

Computers don’t need to express human-like creativity to make significant scientific progress. Scientists often try to bill themselves as exceptional geniuses, bent over messy desks covered with pages of data and obscure scientific reports, coming to realize unexpected connections and innovative hypotheses. While this view is not an entirely false representation, science can be done in the exact opposite manner (and often is).

As opposed to the rapid-fire testing of exciting hypotheses, science can also be the carefully controlled collection of data following minimal perturbation of a complex system. This type of science requires no real creativity, and would require pattern recognition algorithms no more complex than those that find faces in your Facebook photos. When you change input A, what happens to outputs B through Z? This type of predictable, slow, step-wise science is already quite mechanical and is particularly susceptible to automation.
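A minimal sketch of this kind of stepwise science, with a toy function standing in for the experimental system (all names here are hypothetical):

```python
def run_experiment(system, inputs):
    # Stand-in for a wet-lab measurement: apply the input settings and
    # observe the system's outputs. Here "system" is just a function.
    return system(inputs)

def stepwise_screen(system, baseline, factor, levels):
    # Perturb one input factor across several levels and record how
    # every output shifts relative to the unperturbed control.
    control = run_experiment(system, baseline)
    results = {}
    for level in levels:
        inputs = dict(baseline, **{factor: level})
        observed = run_experiment(system, inputs)
        results[level] = {k: observed[k] - control[k] for k in observed}
    return results

# A toy "cell": output B tracks input A, output C is unaffected.
def toy_cell(inputs):
    return {"B": 2.0 * inputs["A"], "C": 5.0}

screen = stepwise_screen(toy_cell, {"A": 1.0}, "A", [2.0, 3.0])
print(screen)  # B shifts with A; C never moves
```

No creativity is involved: the loop simply perturbs, measures, and records, which is exactly what makes it automatable.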

The prototypical example of this type of science is the work of Dr Ross D King, who developed the robotic scientist known as ADAM. With this robot, Dr. King automated the entire scientific process of developing and testing hypotheses for the role of yeast genes in metabolic pathways (summarized here). This work provides a proof-of-concept that boring, step-wise, science can already be done in an entirely automated manner.

While I describe the kind of science done by ADAM as boring, the implications of it are anything but. The fact is that the majority of science needs to be boring: careful testing of predictable hypotheses in order to disentangle the rules underpinning complex biological systems. Indeed, it might be only through the patience and consistency of a fully-automated scientist that we can understand how the tens of thousands of proteins present in a given cell interact.

So if we were to hypothetically unleash robot scientists to perform this kind of simple science without any deep creative abilities, how much progress could we really make – how far does boring get us? In my view, boring science can get us (just about) everywhere we want to go.

To illustrate just how much progress could be made, let me give an example of how a cancer diagnosis could go in an age of robotic science:

You come into the lab complaining of non-specific symptoms. A cross-check with your daily biodata and advanced diagnostic testing shows a clear sign of early lung cancer. You are put on a standard cocktail of drugs to slow the progression of your disease while vital information is collected. With the aid of a robotically controlled surgical system, a sample of your cancer cells is quickly biopsied via nano-surgery and sent to a computerized analysis and treatment lab (CAT-Lab).

Within this lab your cells are analyzed directly for protein and DNA content and within days, CAT-Lab will provide information to begin improving your treatment.

Your cells are then expanded within the lab. Techniques are applied to maintain your cells as close to their original state as possible. These expanded cancer cells are then embedded into a standard matrix of lung cells to simulate the in vivo environment of your lung. The robotic scientist then tests a huge array of cancer drug cocktails. Through stepwise experimentation, a customized cocktail is developed to maximize cancer cell killing while minimizing damage to your healthy cells. You are immediately put on a customized combination of drugs that quickly and cleanly eliminates the cancer from your lungs.

While at this point your cancer has more than likely been completely taken care of, CAT-Lab has also performed an analysis of the likelihood of recurrence in your case. The analysis suggests that it would be economical to develop a customized “vaccine” rather than run a small risk of the cancer recurring. A clear signature of surface protein expression for your cancer cells has been identified. Targeting this combination of markers, a nano-bot swarm which will instantly kill any cell bearing this signature is developed and placed as a rice-sized nodule in your shoulder. Should your cancer recur, these nanobots will rapidly eliminate the cancer cells before they can cause disease.

So you can see, scaling current-generation scientific techniques to the level of automation could really lead us to miraculous ends. While the scenario I describe above might sound like science fiction, it is absolutely possible in an age of automated science and does not require any special science to be invented beyond what can already be done today. While the extent of science that will be done might be greater or less than what I have described, depending on the efficiency and efficacy of such robotic laboratories, I absolutely foresee this type of analysis beginning in the next decade.
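The stepwise cocktail optimization at the heart of this scenario could, in caricature, look something like the following sketch. The drugs, kill fractions, and penalty weighting are entirely invented; a real CAT-Lab would measure these values from your cells rather than compute them from a formula:

```python
from itertools import combinations

def kill_score(cocktail, cancer_profile, healthy_profile):
    # Hypothetical assay score: cancer cells killed minus a weighted
    # penalty for harm to healthy cells.
    kill = sum(cancer_profile[d] for d in cocktail)
    harm = sum(healthy_profile[d] for d in cocktail)
    return kill - 2.0 * harm  # weight healthy-cell damage heavily

def best_cocktail(drugs, cancer_profile, healthy_profile, max_drugs=2):
    # Exhaustive stepwise search over all small drug combinations.
    candidates = (
        c for n in range(1, max_drugs + 1) for c in combinations(drugs, n)
    )
    return max(candidates,
               key=lambda c: kill_score(c, cancer_profile, healthy_profile))

# Invented per-drug kill fractions for cancerous and healthy cells.
drugs = ["d1", "d2", "d3"]
cancer = {"d1": 0.9, "d2": 0.6, "d3": 0.2}
healthy = {"d1": 0.1, "d2": 0.05, "d3": 0.4}

print(best_cocktail(drugs, cancer, healthy))  # favours the d1 + d2 combination
```

With real assay data in place of the toy numbers, this is the same brute-force, stepwise logic an automated lab could run at scale.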

The side-benefit of this kind of approach to customized and automated medical analysis would be that it also provides reams of data for “publication”. Taking that data and figuring out what kind of major discoveries lurk inside these huge data-sets requires more creativity than would be provided by robotic scientists like ADAM or the CAT-Lab. Boring science can get us to the future, but it won’t ever revolutionize the way we understand the world. So can computers ever recapitulate the creativity necessary for the scientific leaps exemplified by great theories like evolution or relativity?

Can we create a robot genius? 

The answer to this question is much deeper and harder than the question of whether robots can do basic science. In my view, creativity is a natural extension of our every-day pattern recognition abilities, combined with a bit of randomness and taken to the logical extreme. What we recognize as genius is simply the recognition of a highly unexpected pattern in the world around us. When we can extrapolate some new information across an opaque gap between old and new knowledge, we call it genius.

So if you can accept my view that our pattern recognition abilities naturally lead to creativity, then it seems obvious that exponentially progressing computers which can already recognize faces will soon match, and eventually surpass, us on the creative front as well. So can a computer do science? The answer is unequivocally yes, and this will be enough to get us to the future. But can a computer be creative enough to be the next Einstein? Maybe – we will just have to wait and see on that one.

Confessions of a Biological Scientist – Part III: The Natural Digitization of Science

Science has proceeded uninterrupted for hundreds of years now; through its progress we have emerged from ignorance and awakened to the reality of our Universe. But scientific advancement is now retarded by a fundamental problem: the scientists. In the near future, something as important as science will no longer be left to imperfect and inefficient biological scientists, but will become the realm of digital scientists. In this third installment of Confessions of a Biological Scientist, I will discuss the increasingly digital nature of modern science, and why I see a natural progression towards completely digital scientists.

How do you determine the truth of something? If I were to say something like “All dogs have hair”, what actions would you take to determine the truth of this statement?

Your first course of action with a simple statement such as this is to devise a counter-example and test whether it exists. In the case of my statement that all dogs have hair, the existence of a hairless dog would be adequate to disprove it. So you must seek a “hairless dog”. Thus, your next step is going to be to consult a knowledge repository of some sort to see if you can find evidence for a hairless dog.

If it were 100, 50, or even 20 years ago, you would probably have started with a book. You might have had a set of encyclopedias in your home, wherein you could turn to the index and look up the term “dog”, where dogs might be described as a “furry friend of man”. At this point you might conclude that all dogs have fur, but being particularly cunning you might also check the index for the term “hairless dog”. And what if your search were to turn up nothing? For the time being, within the given dataset, you would have had to conclude that my statement was tentatively true.

Now, it being 2013, you would take a much different tack in addressing my question. You would simply access a search engine and type in the term “hairless dog” to see if any such thing exists. Google being as large a data set as we have access to at this point, you can trust that if you can’t find a hairless dog on Google, then it probably doesn’t exist. But you would of course quickly be redirected to the Wikipedia page for “hairless dogs”, and you would see that there are actually several breeds listed as hairless.

So at this point you would conclude that my statement was false, and you might devise a counter-statement: Most, but not all dogs have hair.

You have just performed a scientific action. You have taken a hypothesis, devised a confuting example, tested it against a given dataset, and then revised the statement based on this new data. This example also shows just how efficient and fast digital technologies are at testing the largest data set ever assembled (i.e. the internet) for truth.
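The scientific action just described, searching a dataset for a counter-example to a universal claim, is simple enough to express directly in code. The toy dataset stands in for the encyclopedia or search index:

```python
def falsify(hypothesis, dataset):
    # Search a dataset for a counter-example to a universal claim.
    # Return the first counter-example found, or None if the claim
    # survives (i.e. is tentatively true within this dataset).
    for record in dataset:
        if not hypothesis(record):
            return record
    return None

# Toy knowledge repository standing in for an encyclopedia or search engine.
dogs = [
    {"breed": "Labrador", "has_hair": True},
    {"breed": "Poodle", "has_hair": True},
    {"breed": "Xoloitzcuintli", "has_hair": False},  # a hairless breed
]

counter = falsify(lambda d: d["has_hair"], dogs)
print(counter["breed"])  # "all dogs have hair" is falsified by this record
```

A richer dataset simply makes the same loop more powerful, which is exactly why a search engine is such an effective falsification machine.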

This move to digital, searchable knowledge has been a revolution for assessing truth, and has become central to the modern scientific method.

As a scientist, I test scientific statements in exactly the same way all of the time. In fact, I would never set out to actually do an experiment without first testing it against the literature. As an example, let’s say I think that the interaction of two particular proteins might be causing a certain type of cell to die; I must first check Google to see what other work has been done on this.

I would examine the peer-reviewed set of scientific knowledge (known generally as “the literature”) and see how much evidence there is to support my particular hypothesis. It is mostly based on this result (and my own proprietary unreleased data) that I will decide whether to proceed with the experiment. I will do a sort of cost-benefit analysis to weigh the originality of the experiment against the likelihood of it working.

[Figure: the value of an experiment plotted against its likelihood of working, with the extremes marked A and B]

In one situation I might find nothing to indicate that this interaction occurs, or that it has any connection to cell death (marked as A above). In this case, I will likely have to throw away the hypothesis. Unless I already have some strong evidence of my own that this effect is occurring, I would take the lack of any suggestion of it in the literature as evidence that it is not. Based on this, I would throw away the hypothesis and look in another direction. This experiment is extremely original, but it is very unlikely to work.

Aside: There is an additional reason not to perform an experiment which has no supporting evidence in the literature. Even if the experiment did happen to work, the fact that no one else has ever suggested this effect would mean that I would have to go to extra lengths to prove my hypothesis. Going out on a limb is only worth it for a particularly juicy hypothesis; risky experiments are only worth it if they will lead to a big finding and an important scientific impact.

Another possibility might be that I find there already exists overwhelming evidence that these proteins interact and cause this effect (marked as B above). In this case, the experiment is very likely to work, but it is totally unoriginal and not really of much value to advancing scientific knowledge. I would simply be repeating experiments on what is already known. Thus, in this situation I would again likely not perform the experiment; rather, I would simply cite the interaction of these proteins as scientific fact and move on in a different direction.

Only if an experiment falls into the sweet spot between being completely unknown and completely known would I feel that the hypothesis is justified and move on to testing it. Scientists are constantly on the lookout for facts which are balanced between true and false, wherein we can inject a bit of data to tip them in either direction. While the popular image of the scientist is one in a white lab coat with a beaker in hand, the reality is just as likely to be hunched over a keyboard, testing hypotheses using digital tools.
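One crude way to formalize this sweet spot is to score an experiment by the product of its originality and its likelihood of working. This is a purely illustrative heuristic of my argument, not a real funding metric:

```python
def experiment_value(originality, likelihood):
    # Toy expected-value heuristic: an experiment is worth doing when it
    # is neither a sure thing (no originality) nor a long shot (no
    # likelihood of working). The product peaks between the extremes.
    return originality * likelihood

# Extreme A: totally original, but almost certain to fail.
a = experiment_value(originality=0.95, likelihood=0.05)
# Extreme B: certain to work, but already known.
b = experiment_value(originality=0.05, likelihood=0.95)
# The sweet spot: balanced between known and unknown.
sweet = experiment_value(originality=0.5, likelihood=0.5)

print(a, b, sweet)  # the balanced experiment scores highest
```

Any scoring rule that vanishes at both extremes would make the same point: the experiments worth doing sit between A and B.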

If we think of these types of searches not just as casual googling but as digital experiments, we quickly realize that science already belongs more to the digital world than to the physical one. As this trend continues into the future and becomes increasingly automated, it is only a matter of time until we replace scientists altogether.

So if so much of my work is already a matter of examining the digital literature to perform simple testing of hypotheses, and cost-benefit analyses to determine which hypotheses might be most advantageous to follow, then what is preventing a computer from performing similar analyses?

The major hurdle that computer systems face today is digesting scientific data which we have encoded in natural human language. As it stands, computers are only beginning to be able to understand what we humans are really saying, but examples like IBM’s Watson show how rapidly we are making strides towards these ends. As computers start to understand the meaning of natural language, I do not think it will be long before a computational strategy develops to identify the low-hanging fruit of science.

Even as it stands today, we should perhaps ask how much of this work is really being done by the scientist. If Google is using advanced search techniques to link ideas and thoughts together algorithmically, based on deep learning algorithms and a staggeringly vast data set the likes of which a single human could never comprehend, then how much of this is already part of modern science?

It is indisputable that computers are already helping us find the experiments which best hit that sweet spot between originality and likelihood of working. Just as with everything else, the scientific world is naturally becoming more and more digitized. Soon enough, instead of us asking the computers whether this or that experiment is a good idea, the computers will actually start telling us which experiments to do; after that, it is only a matter of time before they cut out the error-prone scientists altogether.

Next week I will discuss how we scientists, with our lossy natural language and inadequate annotation, are holding back this natural digitization of science, and how we can start to help put ourselves out of a job. Watch this video and let’s meet back here next week.

——–

Interested in writing for Thought Infection? I am looking for contributors and guest bloggers. If you have an interest in futurism and you would like to try your hand at writing for a growing blog, send me an email at thought.infected@gmail.com

Confessions of a Biological Scientist – Part II: The Organic Engine of Science

Scientific advancement is the greatest endeavour that humans have ever undertaken, but this is not to say it is perfect. Science as a system was evolved, not designed, and thus it suffers all of the warts, inefficiencies and limitations of any organic system. In the near future, something as important as science will no longer be left to imperfect and inefficient biological scientists, but will become the realm of digital scientists. In this second part of Confessions of a Biological Scientist, I will discuss the imperfections of the scientific system and why artificial intelligence may be necessary to overcome our collective limitations.

Despite what the current dogma of scientific boosterism might say about it, I do not believe that humans are born as scientists. Yes, we are naturally curious. Yes, we naturally ask questions and recognize patterns. But taking these raw abilities and turning them into scientific thought requires a set of tools that are not necessarily natural to the human mind. It is for this reason that we have developed elaborate systems of education, research and publication in order to propagate scientific thought.

The modern system of scientific education is a long and arduous process. To become a scientist, one must first have a broad base of knowledge from which to build towards scientific thought. A good understanding of math, physics, and chemistry is an absolute must for any scientist. After this, a student must immerse themselves in a specific field, and eventually an even more specific discipline, for many years before they are adequately educated in the technical and academic knowledge necessary to work in a given field. All told, it takes at least 25 years and hundreds of thousands of dollars to turn a single curious child into a PhD graduate.

While it does succeed in producing new scientific grads every year, the scientific education system is far from efficient. In today’s world, the age at which a scientist can expect to get their first post as an independent academic researcher has been steadily increasing. As an increasing amount of energy is poured into training scientists, it would seem that the scientific education system as a whole is actually becoming less efficient over time. Is there any way this process will realistically be able to compete if an AI emerges with even a low level of scientific ability?

This process, whereby a system can actually lose efficiency over time, is a characteristic flaw of organic systems. If there is inadequate selective pressure to maintain or increase efficiency, then over time the system may tend towards inefficiency, accruing errors which are never eliminated. Just as the human population has accrued costly maladaptations over time, such as poor eyesight, obesity, and other genetic diseases, the scientific system also carries with it negative traits which are copied from generation to generation of scientists.

And educational inefficiency is not the only, or even the worst, of the maladaptations found within the scientific system.

If we hold that the ideal of science is the quest for pure truth, then it should be the ideas that best fit the data which are held to be the best. Unfortunately, this is not the case. Human communication has evolved from story-telling, and science is no exception. Scientists are often more interested in what provides the most captivating story and resonates with the current scientific paradigms than in what is the simplest truth or best-fit model.

This tendency of scientists to converge on popular scientific notions is worsened by the publication arm of the scientific machine. Peer-reviewed scientific journals are ranked according to their impact factor, a metric based on the average number of citations that a published paper receives. Just as in politics or business, the goal of the science game is to be popular.

Journals want to publish work that is up to scientific rigor, yes, but even more importantly than this, they want to publish work that will be popular. But what determines what science is going to be most cited? It is not necessarily what is the best for the advancement of science, but simply what is most interesting to the scientists. This applies pressure to scientists to try to make scientific reports more interesting to the scientific audiences, skewing scientific writing towards grandstanding. On some level, science is simply a form of highly controlled entertainment, serving a very specific audience a very specific product.  

Science might be best served by pairing carefully observed data with simple conclusions and measured insights, yet all too often it is expected that scientific reports present striking new data with exciting conclusions and deep new insights. Rather than letting the data speak for itself, science must be packaged up neatly and sold, one PowerPoint slide and scientific manuscript at a time.

This bombastic style leaves little room for unanswered questions, encouraging scientists to avoid discussing the potential pitfalls of their research, glossing over holes rather than addressing them. No longer is the scientist expected to impartially report data; we must be salesmen shilling our own observations. We are encouraged to act as lawyers, advocating for our own stories in a battle between ideas.

Through this process of selling and re-selling science we are perpetuating the false perception that our scientific data is perfect, our conclusions unquestionable, and our insights complete. Yes, science is making steps forwards, but it is in shuffling steps not leaps and bounds. 

The funding system for science further exacerbates the problems created by the scientific publication system, because it is in effect simply an extension of it. Grant applications are ranked by scientists on their merit, but merit is really a function of how well the ideas of the grant resonate with current scientific thinking, and how interesting the ideas are to the panel. The idea that a scientist could slave away for decades on a niche problem which may or may not be of interest to the scientific community at large seems a quaint notion of a lost generation.

In the end, we really lack any objective measure of the value of a scientific idea. This means that it is the one with the best story who wins. Even with our shiny armour of raw skepticism, we are still just as vulnerable to a good story as the rest of humanity. In my mind this is the fundamental limit we face today: human science can only advance as quickly as scientific groupthink can haltingly step from one paradigm to the next (see Kuhn).

In the end, it may be that these negative traits are not just inefficiencies which can be cut away from the scientific system, but they may be an expression of our human imperfection. We are inefficient, political and error-prone beings, thus by extension our science, and our scientific systems are inescapably inefficient, political, and error-prone. Human science is an organic machine, complete with imperfections and limitations. 

Perhaps the only way to overcome the current limits to scientific advancement will be to remove the humans from science altogether. With the advancement of artificial intelligence, it may soon be possible to create a new type of science, wherein our collective advancement is not limited by the whims of human minds. Next week I will discuss the first baby steps of robot scientists, and how I imagine these scientists will begin to replace us, and ultimately the entire scientific system, over the next two decades.


Confessions of a Biological Scientist – Part I: The Limits of Meat

Science has proceeded uninterrupted for hundreds of years now; through its progress we have emerged from ignorance and awakened to the reality of our Universe. But scientific advancement is now retarded by a fundamental problem: the scientists. In the near future, something as important as science will no longer be left to imperfect and inefficient biological scientists, but will become the realm of digital scientists. In this first part of Confessions of a Biological Scientist, I will discuss the limitations of biology, and why science must soon leave us behind.

I am a scientist. I spend my days in a manner that is likely more or less consistent with how you might imagine a scientist would: researching, devising, performing, and analysing experiments to test new hypotheses about how the world works.

Notwithstanding the current challenge of obtaining and maintaining funding for scientific projects, being a scientist is a pretty good gig. Science offers a chance to do a job where you can truly embrace your creative side. Scientists also get to travel the world to present data and meet fascinating people. Most importantly for me though, being a scientist lets me do something that I know is making a positive impact on the world.

What has been bugging me lately, though, is the question of just how much value I am really adding to this equation. I am not calling into question the value of science itself; that argument is easily dispatched with a look at the magical world which science has revealed. What I am questioning is the value of scientists, or to be more specific, the value of human scientists.

Before delving into the meat of my argument, I must make the obvious disclaimer that currently there is no alternative to employing scientists. If we want to do science, it is absolutely a necessary evil that we must deal with the biological and social inefficiencies of humans in order to take steps towards scientific progress. We have no choice but to keep feeding these meat-scientists for now, but I see a juggernaut on the horizon, a new kind of scientist is coming and it will make me nothing more than a child in a sandbox. 

The profound deficiencies of human science are actually much deeper than they might at first seem; the biological requirements of being human are more than simply a cost of doing science, they are an active deficit against it.

The first limitation of our biological bodies is that we are provided with only a limited set of five senses by which we can absorb data. Even with five senses, we are so reliant on our gelatinous orbs (eyes) that we insist on converting all data into visual graphs, tables, and pictures. This obligatory photocopying of data into a form amenable to visual digestion is a lossy process, and biases our understanding towards phenomena which can be understood visually. As someone who spends a lot of time making neat little PowerPoint models to communicate new scientific findings, I am very familiar with both the power and the limitations of visual understanding.

While some scientific phenomena seem to have somehow transcended the visual world (I am thinking specifically of mathematical and physical discussions of higher-dimensional space), the limitations of our biological brains are still an ultimate barrier to our understanding of natural phenomena. In order to understand something, we must have some comparable a priori understanding on which to draw.

We grope for an analogy by which we can explain what is happening. Electrons flow like water, proteins fold like origami, and beehives act like a single organism. Ultimately, this need to explain new phenomena through pre-existing ones limits us to only step-wise advancement in science. We cannot leapfrog over ideas which we do not yet understand.

Even if we do have the pre-existing understanding to appreciate a phenomenon we are witnessing, our ability to identify the underlying mathematical relationships is highly limited. We are great at seeing a linear trend, and maybe we can pick out the odd logarithmic relationship, but we are hopelessly inept when it comes to seeing the complex algorithms of the world.

In cellular biology, we are particularly guilty of reporting on naturalistic phenomena while glossing over the mathematical relationships that underpin the systems we study. We produce an endless supply of scientific reports full of experimental data, hypotheses, and neat little models, but it is the rare exception that contains a mathematical equation.

For an idea of what I am talking about, check out the cell biology or biochemistry sections of top level scientific journals like Science or Nature. What proportion would you suppose contains a mathematical equation? Yes there are statistical tests of various relationships, but this is all too often the entirety of mathematical analysis in a paper. When this protein goes down, this other one goes up in a statistically significant manner, but that is usually as far as we go.

Although this might seem a harsh criticism of the natural sciences, there are good reasons that there is so little math in biology, and it is certainly something that I myself am guilty of. The problem stems from the fact that biological systems are highly complex and non-linear. While biological brains are readily capable of understanding how two or maybe three factors interact, we have no means of understanding how the thousands of factors involved in a single cell interact.

I would propose that biological humans will never be able to understand complex biological systems with mathematical precision. It will require a new kind of scientist which can see the complex mathematical relationships and account for the thousands or millions of factors which interact in a given cell, a given body, or a given society. It will require a computer scientist.

Computers will make better scientists because they are not subject to the limits of human biology. To a computer all data is mathematical. There is no need for intermediating steps to convert data into a simplified visual form. Computers can collect data from any number of imaginable sensory devices and perform mathematical analyses directly upon this data.

Computers also have no need to understand why something is as it is. While this is often cited as a weakness of computers, it can also be seen as a positive. A computer could theoretically identify the multi-variate mathematical relationships that rule a complex system with no need to understand why this relationship exists. A computational analysis of a complex system would reveal properties about how things are, not why they are that way.

Aside: In scientific discussions, this type of claim might spark a correlation versus causation argument, but I have always felt that causation may be nothing more than a highly statistically significant correlation. We never really know that there are no other factors which could be causative in the system. With enough data, I am convinced that an adequately intelligent computer system could identify the mathematical relationships which underpin natural systems every bit as well as, or better than, humans have.
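A trivial demonstration of identifying a relationship without any understanding of why it holds: recovering the coefficients of a hidden linear rule from nothing but observed data, using the normal equations for the two-variable case in pure Python:

```python
def fit_linear(xs, ys):
    # Recover c1, c2 in y = c1*x1 + c2*x2 by solving the 2x2 normal
    # equations X^T X c = X^T y directly. The fit says nothing about
    # *why* the relationship holds; it only reports that it does.
    s11 = sum(x[0] * x[0] for x in xs)
    s12 = sum(x[0] * x[1] for x in xs)
    s22 = sum(x[1] * x[1] for x in xs)
    t1 = sum(x[0] * y for x, y in zip(xs, ys))
    t2 = sum(x[1] * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    c1 = (t1 * s22 - t2 * s12) / det
    c2 = (t2 * s11 - t1 * s12) / det
    return c1, c2

# Data generated by a hidden rule the "scientist" never sees: y = 2a + 3b.
xs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
ys = [2.0 * a + 3.0 * b for a, b in xs]

print(fit_linear(xs, ys))  # recovers coefficients close to (2.0, 3.0)
```

Scale the same idea up to thousands of interacting factors and noisy measurements, and you have the kind of relationship-finding that no biological brain can do unaided.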

Simply put, my argument is this: The world is made of math, computers will make better scientists because they are better than us at math. 

John Henry died with a hammer in his hand when he went up against the steam drill; scientists will soon be up against the steam-scientist, and it will be either get out of the way or die with a pipette in your hand. Until now it has never been possible to envision any type of scientist other than a human one. We simply had to settle for suppressing the non-scientific elements of our being and becoming the best possible scientists we could be. To accomplish this, we developed elaborate systems of scientific education, research and publication. In my next post I will delve into the inefficiency of the wider scientific system and why technology represents an imminent threat to the entire house of cards.
