IBM has a really interesting – and just slightly scary – plan. In cooperation with Switzerland’s École Polytechnique Fédérale de Lausanne, they want to simulate the human brain.
They’re building a computer model. This is not the same as Artificial Intelligence (AI), programming a machine to act human. That would be a ‘top down’ approach: trying to understand how the mind works by looking at what it does. Instead, this is ‘bottom up’, simulating the nuts and bolts of the brain – its biological wiring, its cells, even its molecules.
Which is quite an undertaking – in fact it is hard to exaggerate how big the task is. The brain is often described as the most complex thing in the known universe. Complexity is difficult to define but easy to perceive. Looking into the back of a TV, you’re instantly aware that it’s more complex than, say, a food mixer – it simply looks trickier to fix. The parts are small, numerous, and connected in many different ways. Perhaps that’s the most intuitive shorthand measure of complexity: the number of different ways the parts of something interconnect. The human brain has far more connected parts than anything else known, certainly more than any computer. Even Japan’s Earth Simulator, built to model the climate of the entire planet, is nothing compared to the brain of an average person.
It’s no surprise, therefore, that they aren’t trying to do the whole thing at once, or anything approaching that. They are starting with the best bit, though: the neocortex, the outer layer of the cerebrum and the most recent part of the brain in evolutionary terms. It’s not unique to us, but it is far more developed in humans than in any other animal, and appears to be responsible for what we experience as thought.
Even alone, though, this is still far too complex for current technology to tackle. All they’re hoping to simulate right now is what’s known as a neocortical column. This can be described as a single ‘circuit’ of the brain, one of its processing units. The whole neocortex contains about a million of them. And for the moment at least, they only plan to model it at the level of its cells; to get down to the molecules that make up those cells will take vastly more computational power again. Yet even this is an immensely ambitious target. To model just one circuit of the brain in this (relatively) simple way will require four whole racks of Blue Gene – the technology IBM used to take the title of world’s fastest supercomputer back from the Earth Simulator.
So how far are we, then, from modelling the whole brain? Well, assuming this first stage succeeds – and it won’t be easy – all they really need to do is scale it up. Vastly. Those four Blue Gene racks would fit in a normal kitchen. Four million? They would take up a golf course, and require the energy of five medium-sized power stations.
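For readers who like to check the arithmetic, the scaling above is just a multiplication of the two figures the article quotes (one column needing four racks, and a million columns in the neocortex); the golf-course and power-station comparisons are the author’s estimates, not computed here:

```python
# Back-of-envelope scaling, using only the figures quoted in the article.
racks_per_column = 4              # Blue Gene racks needed to model one neocortical column
columns_in_neocortex = 1_000_000  # roughly a million columns in the whole neocortex

racks_for_whole_neocortex = racks_per_column * columns_in_neocortex
print(racks_for_whole_neocortex)  # 4000000 racks, up from the four that fit in a kitchen
```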
When you consider that your actual brain fits inside your head and runs reasonably well on sandwiches and cups of tea, you realise what a gap there is between nature’s technology and our own.
What’s the point then in going to all this trouble when a brain can be made much more cheaply using just two humans? If the object were to create machines that think, this would clearly be a madly inefficient way to go about it. But that’s not the object. The fact is we know amazingly little about how our own brains work. Simulating a part of one, even a solitary neocortical circuit, will teach us so much about what is really going on in there. Modelling allows you to find out why something is the way it is, because it can show you what would happen if it were different. The beneficial applications of that are obvious; as we see how it works, we gain greater insight into why it fails – what causes schizophrenia, Alzheimer’s, autism, the things that plague our minds.
But though it’s always good when research has palpable benefits, I think we need no such excuse when it comes to researching the structure and function of the brain. To know one’s own mind – that is surely a philosophical imperative.