Why I don’t buy Super Intelligence

Aidan Cunniffe
Spare Thoughts By Aidan Cunniffe
6 min read · Aug 23, 2017


Musk, Zuckerberg, Hawking, Bostrom, Kurzweil. All great minds, and all have taken a side: supporting, refuting, or, in Bostrom’s case, minting the idea of a runaway Super Intelligence.

The standard explanation goes like this: once we build a real human level AI, that AI will improve itself exponentially over a short period of time. The outcome of this is an intelligence so much more advanced than ours that we look like ants in comparison to its prowess.

Here’s why I don’t buy it:

Power

The first approach any AI would take to increase its abilities is to add more horsepower. Herein lies the first flaw: there’s no reason to believe that more computing power alone means a smarter AI. The elephant brain contains around 3x more neurons than the average human brain. Algorithms, if you even dare to call them that, seem to matter more than raw power.

Similarly, the latency and bandwidth limitations inherent in our current architectures, even those of GPUs, compound the more units you try to cram together. It’s likely that adding computing power will help up to a point, but eventually an apex will be reached and the exponential increase will flatten out.
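To make that flattening concrete, here’s a back-of-the-envelope sketch using Amdahl’s law, which bounds the speedup from adding more compute units when some fraction of the work can’t be parallelized. The 5% serial fraction is an arbitrary assumption for illustration, not a measurement of any real system:

```python
# Toy illustration of why piling on hardware flattens out.
# Amdahl's law: speedup(n) = 1 / (serial + (1 - serial) / n)
# The 5% serial fraction is an assumption chosen only for illustration.

def speedup(n_units: int, serial_fraction: float = 0.05) -> float:
    """Ideal speedup from n compute units when part of the work can't be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

for n in [1, 10, 100, 1_000, 10_000, 1_000_000]:
    print(f"{n:>9} units -> {speedup(n):6.1f}x")
# Gains climb quickly at first, then crawl toward the 1 / 0.05 = 20x ceiling.
```

Whatever the real serial fraction turns out to be, the shape is the same: large early gains, then a ceiling.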

Algorithms

The second approach an AI would resort to is optimizing its algorithms. What structural changes can be made to its neural networks (if we’re even using those) to increase its performance? The reductionist in me says that if we understand human-level intelligence well enough to build an AI, surely that AI can make some optimizations. Evolution does not seek the optimum, just the good enough, so there may be some real optimizations to be made, but only in limited numbers. Does that get us the exponential, compounding growth the prognosticators promise (and warn of)? I’m not so sure.

Intelligent objects seem to mirror the complexity of the things they are able to understand. The more complicated the cognition, the more complicated the neural circuits and the neurons themselves. The elephant brain may be bigger, but its frontal regions are far less complicated and developed than those of a human. The roundworm, on the other hand, has such a simple nervous system that there’s a chance we will actually understand exactly how it works this decade.

So to become super intelligent, our AI would have to learn not just the principles of its own intelligence, but those needed to engineer one far greater. At each step, the ant must dissect the mind of god, then implement its secrets. Each round, a smarter AI faces a dramatically more complex problem. This Promethean cycle is possible (I can’t rule it out), but it’s going to take centuries, not years, to play out.

The ‘Problem’ Problem

Then comes the training data. What lifetime of information would you feed into this AI to train it for Super Intelligence? Human interactions, the most complicated problem class we know of, have finite limits because the players all have finite brains. Would you train it to find the answers to the scientific questions we know of today? What about the questions of tomorrow, the hard questions we can’t begin to know?

I’m trying to understand what sort of environment you could put an AI in that would drive it to become Super Intelligent and I can’t think of how we’d pull that off. If you have some ideas, I’d love to hear them.

Brute Force

Well, let’s just randomize all the hyper-parameters and network types and see what happens…

I don’t think this gets us very far at all, at least not on current substrates. Our best AIs, the ones we are trying to make better, already consume enormous computing resources. Changing a few parameters and then playing out an entire simulation of life to train the AI is probably not going to be feasible. It took billions of years for life to emerge on Earth; even if we could train these AI instances in a fraction of a human life, and even if we had millions running at once, progress would be linear.
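For a sense of what that brute-force loop looks like, here is a minimal sketch of random search over architectures and hyper-parameters. The search space, the evaluate() stub, and the trial count are all made-up placeholders; in reality, each call to evaluate() hides an enormously expensive training run, which is exactly the problem:

```python
import random

# A minimal sketch of "randomize the hyper-parameters and see what happens".
# The search space, the evaluate() stub, and the trial count are illustrative
# assumptions, not measurements of any real system.

SEARCH_SPACE = {
    "network_type": ["feedforward", "convolutional", "recurrent"],
    "layers": range(1, 65),
    "learning_rate": [10 ** -i for i in range(1, 7)],
}

def sample_config() -> dict:
    """Draw one random configuration from the search space."""
    return {name: random.choice(list(options)) for name, options in SEARCH_SPACE.items()}

def evaluate(config: dict) -> float:
    """Stand-in for a full training run, which is where the real cost lives."""
    return random.random()  # pretend score

best_score, trials = 0.0, 10_000
for _ in range(trials):
    best_score = max(best_score, evaluate(sample_config()))

# Even with a million instances running in parallel, every additional trial
# still costs one full training run: the wins add up linearly, they don't compound.
print(f"best score after {trials} random trials: {best_score:.3f}")
```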

Semantics

Intelligence isn’t a linear spectrum. Psychologists have been trying to dispel that rumor, one which they admittedly started, for years now. Unfortunately, measures like IQ and EQ, which reside on a linear scale, have stuck, and most people think of intelligence as one thing that is higher or lower in each individual.

We’ll probably spend more time twiddling our thumbs and fighting over what smarter really means than building Super Intelligence. The directors in most boardrooms will be perfectly happy defining smarter as "does our tasks better." This will lead to advances in narrow fields, the trend that has held throughout the entire history of AI. Self-actualizing AIs that think about improving themselves are about as useful for solving the practical problems of industry as a room of self-actualizing philosophers.

In summary, Super Intelligence is hype. The march onward will be slower than forecast.

Engineered vs Happening

One final aside. Right now AIs, even the ones we dream about having in the future, are all engineered. We imagine there being some logical program that runs, and we improve the intelligence by improving that program.

On the other hand, intelligent animals just happen. And in addition to happening, they happen to improve. The distinction is this: no logical rules define the behavior of an animal brain. Sometimes we can interpret what we see and form logical models to describe it, but the things those models describe are not adhering to them. At the most fundamental levels, quantum mechanics plays a role that defies all logic (logic alone can’t produce randomness).

What makes the brain so powerfully interesting is all the edge cases. The fact that foreign molecules (drugs) can alter the way the brain works for short and long periods of time is fascinating. Biology allows all kinds of similar molecules and proteins to interact in ways that are difficult to predict. On top of that, epigenetic changes occur at scales from individual neurons all the way up to the entire brain.

The sheer variety inherent in this system defies all attempts to perfectly describe it. DNA narrowly, and the information-processing aspects of biology broadly, are what enable evolution to work over time. This is the part of biology we should be trying to replicate; if we can, intelligence will emerge through simulation.

Logic is a way of thinking about the world, but logic and the real world are not perfectly interoperable. This uncomfortable fact is often brushed aside when talking about "recreating the brain in computers."

My gut tells me the substrate or the program has to be wrong. Either we need to create a substrate more akin to a brain, with the ability to surprise us with mutations from cell to cell and generation to generation, or we need to stop trying to model real-world events in logic world (the processor).

Maybe instead of working on creating an AI, we should try to create a new form of life, one that is native to the world of logic. Instead of moving around and exploring its environment, it would read from memory addresses. Its motor system would be writes and running commands. It’d be the first OS not designed for human use. Instead, it would exist to harbor programs that live and reproduce entirely in silicon, described by their own binary DNA.
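Here is a toy sketch of what such substrate-native "life" might look like, in the spirit of classic digital-evolution experiments: genomes are just byte strings in a shared memory soup that copy themselves with occasional bit flips. Every detail (the soup size, the mutation rate, the stand-in fitness rule) is an assumption invented for illustration:

```python
import random

# A toy sketch of "life native to the world of logic": genomes are byte
# strings living in a shared memory "soup"; reproduction is copying yourself
# forward with occasional bit-flip mutations. Soup size, mutation rate, and
# the fitness rule are all assumptions made up for illustration.

SOUP_SIZE = 1_000
MUTATION_RATE = 0.01

def fitness(genome: bytes) -> int:
    """Crude stand-in for 'doing something useful': count the 1-bits."""
    return sum(bin(b).count("1") for b in genome)

def mutate(genome: bytes) -> bytes:
    """Copy a genome, flipping the occasional bit."""
    return bytes(
        b ^ (1 << random.randrange(8)) if random.random() < MUTATION_RATE else b
        for b in genome
    )

def step(soup: list) -> list:
    """One generation: the fitter half survives and reproduces with variation."""
    survivors = sorted(soup, key=fitness, reverse=True)[: SOUP_SIZE // 2]
    children = [mutate(g) for g in survivors]
    return survivors + children

# Seed the memory "soup" with random 16-byte genomes and let it run.
soup = [bytes(random.randrange(256) for _ in range(16)) for _ in range(SOUP_SIZE)]
for _ in range(100):
    soup = step(soup)
print("best fitness after 100 generations:", fitness(soup[0]))
```

A real version would need the genomes to be executable programs whose reads and writes determine whether they get to copy themselves; that environment, not the evolutionary loop, is where the hard work lives.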

It’s hard to say exactly what would happen if we built this system. Providing a rich environment in which to slowly mature from base complexity to something intelligent would be the hardest task. But this logical intelligence, this life native to the substrate, would be able to evolve in ways we will never be able to engineer. I’m a betting man; in a horse race to super intelligence, I think this approach would win, especially if coupled with some selective breeding and computational eugenics. Stand by for all sorts of moral questions…
