How Computers Can Learn: For Starters, Chuck the Silicon

The power of every computer you've ever owned is also its weakness: the digital microchip

Just how stupid is your computer? The short answer is that it’s really, really stupid. The longer answer is that it’s stupider than a slime mold. The protoplasmic organism known as Physarum polycephalum can move from place to place by what’s known as shuttle streaming, which is a very fancy way of saying, well, oozing — and one of the places it can ooze with surprising ease is through a maze. Put a food source at the beginning and end of a small maze and, after eight hours of trial and error, the organism can change its shape so its front and back ends can both reach the goodies — and by the shortest route possible too. In its own primitive way, through mindless chemical signals that respond only to the presence of nutrients, the slime mold learns — something your computer will never, ever do.

Learning has always been what separates the nimble carbon-based information-processing system from the rigid, if powerful, silicon one, and it is what has made the long-dreamed-of concept of artificial intelligence so elusive. The central strength of computer intelligence is also its weakness: its binary architecture. No matter how big or powerful the system, all information is stored as nothing more than a series of on-off signals on microscopic transistors. There is no high charge, low charge, sort of off, sort of on: a universe of nuance that is forever lost on the machine.

This is in sharp contrast to the way the synapses behave in the human or animal brain. Each neuron is synaptically connected to thousands of others around it, and the signals that run through them can vary in unlimited ways. They can be sudden and powerful (the house-shaking bam! of the first clap of thunder you hear as a baby) or they can be subtle and repetitive (the signature footfalls of the adults outside your bedroom that you must hear again and again before you distinguish mom’s from dad’s from the sitter’s). We learn instantly or in tiny increments, indelibly or forgettably, and all of this is encoded by electrochemical signals that run through our synaptic networks in an infinite variety of strengths and directions, changing the brain in the process — which is what we mean by learning in the first place.

But the computer’s woeful lack of learning skills might be slowly changing if, as engineers hope, the transistor can be replaced by what is known as the memristor, or memory resistor. The concept of a memristor, a circuit element that behaves much like a synapse in the brain, has been around for decades, but as a new paper in the Journal of Physics D: Applied Physics by physicist Andy Thomas of Germany’s Bielefeld University points out, we might actually be getting close to putting theory into practice.

Last year, Thomas and his colleagues built a memristor just to satisfy themselves that the thing could, in its primitive way, behave like a synapse in the brain. Like all memristors, it consisted of a thin layer of resistive nanomaterial sandwiched between two electrodes, and that was pretty much it. But there’s magic in that resistance.

While traditional computers can do things with astonishing speed, every time they repeat a task it is as if they were doing it for the first time. That’s because when transistors are done with the task, their on-off, binary circuits are, essentially, wiped clean. A memristor does things differently. When current flows through it in one direction, its resistance increases; current flowing the other way causes the resistance to decrease. And when the current goes off, the last level of resistance is preserved. The memristor, essentially, remembers where it left off.
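For readers who think in code, here is a minimal sketch of that behavior: a toy Python model, not tied to any real device physics, in which resistance drifts with the direction of current flow and simply stays put when the current stops. The class, its parameters, and the numbers are invented for illustration.

```python
class Memristor:
    """Toy model of a memristive element: resistance drifts with the
    direction of current flow and persists after the current stops.
    (Illustrative only; real device physics is far more involved.)"""

    def __init__(self, r_min=100.0, r_max=10_000.0, r_init=5_000.0, rate=50.0):
        self.r_min, self.r_max = r_min, r_max   # resistance bounds (ohms)
        self.resistance = r_init                # current state, in ohms
        self.rate = rate                        # how fast resistance drifts

    def apply_current(self, current, duration):
        """Positive current raises resistance, negative current lowers it."""
        self.resistance += self.rate * current * duration
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))
        return self.resistance

m = Memristor()
m.apply_current(+1.0, duration=10)   # drive current one way: resistance rises
m.apply_current(-1.0, duration=4)    # reverse the current: resistance falls
print(m.resistance)                  # switch off, read later: the state persists
```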

“A memristor can store information more precisely,” Thomas said in a press release. “[It delivers] the basis for the gradual learning and forgetting of an artificial brain.”

The new paper does not so much break new ground in engineering (the memristor Thomas and his colleagues describe this year is the same one they built last year) as explain the audacious claim that a web of the things could eventually operate like a brain. For starters, there’s a flexibility that allows memristors to learn in different ways. A charge of a particular intensity applied for a particular time will produce a particular level of resistance, and a charge of half the intensity applied for twice the time will produce the same level. This is a very brain-like way to operate. You can study distractedly for four hours to get ready for tomorrow’s test, or you can concentrate twice as hard and need just two hours to learn the same material.
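Reusing the toy Memristor class from the sketch above (with the same invented numbers), that equivalence falls out directly, since what matters in the toy model is the product of intensity and time:

```python
# The change in resistance depends on the product of current and time
# (total charge), so half the intensity for twice the time lands in the
# same state.
a, b = Memristor(), Memristor()
a.apply_current(1.0, duration=4)   # concentrate hard for a short session
b.apply_current(0.5, duration=8)   # study half as intently for twice as long
assert abs(a.resistance - b.resistance) < 1e-9   # same "learned" state
```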

A network of interconnected memristors can practice localized learning as well, which also mimics the brain. Every nerve cell in every lobe of your brain might ultimately be connected to every other one, if only via very circuitous routes, but that doesn’t mean that the whole massive network lights up when a charge goes through a single area. One set of circuits can have you humming a tune while other circuits are letting you draw a picture or work in the garden or do nothing at all. This kind of so-called input specificity, Thomas reports, has also been observed in memristor systems, with only target pathways activated while adjacent ones remain still.
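A cartoon of that input specificity, again built on the toy class above rather than on any real circuit: drive one element in a small grid and only that element’s state changes.

```python
# Toy "crossbar" of memristive elements: strengthening one pathway leaves
# its neighbors untouched. (A cartoon, not a circuit simulation.)
crossbar = [[Memristor() for _ in range(3)] for _ in range(3)]
crossbar[1][2].apply_current(1.0, duration=10)      # activate one pathway
others_unchanged = all(crossbar[r][c].resistance == 5_000.0  # initial value
                       for r in range(3) for c in range(3) if (r, c) != (1, 2))
print(others_unchanged)                             # True: neighbors stayed put
```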

What’s more, memristors have the power to ignore, allowing current to pass only when a certain voltage threshold is achieved and blocking it if the level falls too low. That’s the key to the selective attention that allows you to read or think or watch a movie and either not notice or soon tune out distracting thoughts or sounds or smells around you.
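In the same spirit, a crude threshold can be bolted onto the toy model. The cutoff value here is made up, but the behavior mirrors the description above: sub-threshold inputs are simply ignored.

```python
# Threshold behavior on the toy model: inputs below the (invented) cutoff
# voltage leave the device untouched, much as background noise gets tuned out.
def apply_voltage(memristor, voltage, duration, threshold=0.3):
    if abs(voltage) < threshold:
        return memristor.resistance            # ignored: no change of state
    current = voltage / memristor.resistance   # Ohm's law, crudely applied
    return memristor.apply_current(current, duration)

m = Memristor()
apply_voltage(m, 0.1, duration=5)   # distraction-level input: state unchanged
apply_voltage(m, 1.0, duration=5)   # salient input: the resistance shifts
```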

Finally, memristors are energy efficient. A big computer brain with lots and lots of chips requires lots and lots of power — since every transistor on every chip may eventually require a charge. But when the whole point of your system is to resist a charge, you run small and cool instead of big and hot. “The need for less power is particularly obvious,” writes Thomas, “if we compare the performance of the brain of even an invertebrate with a CPU and contrast power consumption.”

Silicon computers aren’t going anywhere soon — or maybe even ever. The fact is, they do steam-shovel work like data processing and complex calculations infinitely better and faster than humans do. The subtler stuff — the learning and creating and even imagining — is so far limited to us. But it’s that so-far part that might be the key.

4 comments
j.kenton.pate

This is a really unfortunate article. One useful way to analyze information processing systems, called "Marr's levels of analysis," is to focus on the computational level, the algorithmic/representational level, or the implementational level. This approach is popular in computational cognitive science, and applies naturally to learning problems.

For example, the computational-level goal for the slime mold is to find a short path through a maze. This goal can be achieved through different kinds of algorithms that rely on different kinds of representations; for example, Dijkstra's algorithm formalizes an environment as a set of nodes with edges representing potential paths, and finds the shortest path. Another potential algorithm for this computational goal would be a local greedy search using heuristics based on the strength of the chemical signal. Finally, a given algorithm can be implemented, that is, physically realized, in many different ways. It's well known that (formalizations of) multi-layer neural networks can implement arbitrary functions, given enough neurons, and we also know that collections of completely ordinary transistors (yes, operating on a binary representation!) can implement arbitrary functions, given enough memory.
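To make the algorithmic-level point concrete, here is a minimal Dijkstra sketch in Python; the maze-as-graph and its edge weights are invented purely for illustration.

```python
import heapq

# Minimal Dijkstra's algorithm over a maze expressed as a weighted graph
# (nodes are junctions, edge weights are corridor lengths).
def dijkstra(graph, start, goal):
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor], prev[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

maze = {
    "entrance": [("A", 2), ("B", 5)],
    "A": [("food", 6), ("B", 1)],
    "B": [("food", 2)],
    "food": [],
}
print(dijkstra(maze, "entrance", "food"))  # (['entrance', 'A', 'B', 'food'], 5)
```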

These "memristors" look like another potentially useful tool for thinking about implementation-level analyses. However, this is not a the fundamental step forward that the article makes it out to be. It's obvious that brains and slime molds are not physically made out of memristors, so any advances with memristors will necessarily be mediated by algorithmic and computational-level analyses, just as advances with transistors are. As DBritt mentioned, it would have been nice to see at least some acknowledgement of the enormous efforts in machine learning and computational cognitive science towards computational and algorithmic-level analyses.



DBritt

Frankly, this is an example of really really bad science reporting.  The premise of this article is completely wrong.  There are whole fields of computer science dedicated to machine learning, not discussed at all.  Of course no one has made anything as complex as a brain, but to say that it is "impossible" to model learning using binary representation is completely specious.  You can model the behavior of a memristor using binary representation, so anything you can do with a memristor you can do in binary.  Please look up "computational universality" on Wikipedia.

The argument about efficiency is equally bad.. the description of how a memristor "blocks" current could apply equally to a transistor.  The differences between transistors and memristors are important, but the description completely misses them.

Good science reporting.. hell even mediocre science reporting.. quotes more than one scientist. It's easy for a scientist to overstate the implications of his/her work to a journalist, but another voice in the field allows you to evaluate those claims. There is no such attempt here.

Overall very poor.


luscusrex

"the slime mold learns — something your computer will never, ever do."

And with what authority do you dismiss the technological singularity?




michael_gautier

Unworkable. A gap of a million years or more exists between biology and silicon. A better goal for computing is to augment our abilities rather than to emulate or replace them.