by Stephen Thornquist
figures by Michael Gerhardt
A computer has finally beaten the Go world champion, a challenge that was considered the “holy grail” of artificial intelligence for nearly twenty years. Go’s impenetrability comes from the fact that a computer’s usual strength, methodically analyzing every possible outcome, is useless in Go. Every turn has around 300 possible moves (see Figure 1), so looking only two turns into the future requires thinking about 90,000 possibilities. Seeing three turns ahead means analyzing 27 million, and going up to six means considering well over a trillion outcomes.
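To get a feel for how quickly that number grows, here is a back-of-the-envelope sketch in Python, assuming a rough branching factor of 300 moves per turn (the true number varies over the course of a game):

```python
# Back-of-the-envelope count of outcomes when looking several turns ahead,
# assuming roughly 300 legal moves are available on every turn.
BRANCHING_FACTOR = 300

for turns_ahead in (2, 3, 6):
    outcomes = BRANCHING_FACTOR ** turns_ahead
    print(f"{turns_ahead} turns ahead: about {outcomes:,} outcomes")

# 2 turns ahead: about 90,000 outcomes
# 3 turns ahead: about 27,000,000 outcomes
# 6 turns ahead: about 729,000,000,000,000 outcomes
```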
Humans are so good at Go precisely because we don’t use this strategy; instead, professional Go players have an intuition about how a game will play out. That way, they only have to think about the moves that another player would realistically make, reducing a huge number of possible outcomes to only a few. For a computer to compete with humans at Go, it needs to be able to intuitively evaluate the board, and it’s this idea of programming in “intuition” that captured the attention of computer scientists and made Go such an attractive challenge.
Writing a program that has intuition is a lot easier said than done, of course. Almost by definition, intuition involves some amount of je ne sais quoi, but computer programming is all about providing strict instructions. So computer scientists have decided that if you can’t beat them, join them. Since brains seem to be so incredibly good at developing intuition, a recently emerged branch of computer science has dedicated itself to studying an idea known as the “artificial neural network” (informally, a neural network or just a neural net). The basic idea of a neural network is to use structures and processes that resemble the ones our brains use to perform computations. So to understand neural networks, we need to understand how brains compute, a field of study called “computational neuroscience.”
Computation in the brain
Imagine that you’re driving a car, but you can’t see out any of the windows. Instead, you have a friend who can see through the windows (but can’t drive the car) and she’s describing what she can see to you. It’s going to be very hard to get anything done; maybe she can’t see something important, or maybe you have a hard time figuring out how fast you’re going. Sometimes it might be hard to hear her. It could even be that there’s so much to see that it would take forever to describe it all.
This is the problem the brain faces: the parts of your brain making decisions are not the same parts that receive information about the world around you. Light, for example, has to go through many cells between the eye and any part of the brain that makes decisions based on the light, so it’s important to keep track of what’s going on without bogging the rest of the brain down with unnecessary information.
This scenario captures the two major questions of computational neuroscience: the encoding problem, or how neurons in the brain represent the world around them (your friend, trying to come up with an efficient way to describe what she sees), and the decoding problem, or how to get that information back out if you know what the neurons are doing (you, trying to understand what she said). The vast majority of work in the field is directed toward understanding the nature of the information processing that happens in one of these two contexts.
The primary visual cortex (V1), located at the rear of the human brain, presents a straightforward example of this information processing because its input is very intuitive: the image that your eye sees. How V1 functions is illustrated in Figure 2. There are certain cells, called “simple cells,” in V1 that look at specific regions in your vision and detect only edges and lines that go in a specific direction (the cells in the bottom-left of Figure 2). There are other cells in V1 called “complex cells” (bottom-right) that, like the simple cells, respond only to lines oriented in a particular direction, but will respond if these lines are located anywhere in the image.
Each of these cells “encodes” a different feature: the simple cells encode the presence of a line in a particular area, while the complex cells encode the presence of such a line anywhere at all. A complex cell can figure out whether there is a line anywhere in the image because it listens to all of the simple cells and “decodes” the presence of a line from their activity. So in one bout of encoding and decoding, the brain goes from having almost no information about the image to knowing that a line with a particular orientation is present somewhere in it. While this example seems very simple, it is only the first of many steps just like it, in which neurons decode the signals they receive (top-left) and use that information to encode whether another feature is present (top-right). Eventually, you reach neurons capable of incredible tasks, like telling you whether a specific person’s face or name is present in a picture, or whether the picture should elicit fear.
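As a rough illustration of that encode/decode step, here is a toy sketch in Python. The tiny image, the 2×2 edge filter, and the simple max-pooling rule are all stand-ins chosen for illustration, not a faithful model of real V1:

```python
import numpy as np

# Toy sketch: "simple cells" respond to a vertical edge at one specific location;
# a "complex cell" pools over all of them, signaling a vertical edge anywhere.
# (Hypothetical filters and sizes; real V1 is far more elaborate.)
rng = np.random.default_rng(0)
image = rng.random((8, 8))          # a tiny 8x8 "image" of random clutter
image[:, 3] = 1.0                   # draw a bright vertical stripe
image[:, 4] = 0.0                   # next to a dark one, making a vertical edge

vertical_filter = np.array([[1.0, -1.0],
                            [1.0, -1.0]])   # prefers light-to-dark vertical edges

def simple_cell(img, row, col):
    """Respond to a vertical edge in one specific 2x2 patch (rectified)."""
    patch = img[row:row + 2, col:col + 2]
    return max(0.0, float(np.sum(patch * vertical_filter)))

# One simple cell per patch location: each encodes "an edge is in MY patch"...
simple_responses = [simple_cell(image, r, c) for r in range(7) for c in range(7)]

# ...and one complex cell listens to all of them, decoding whether a vertical
# edge is present anywhere in the image, regardless of position.
complex_response = max(simple_responses)
print(f"complex cell response: {complex_response:.2f}")
```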
Neural networks and developing artificial intelligence
This method of solving a problem serves as the inspiration for neural networks. In a neural network, the computer processes information by making model neurons (often hundreds or thousands), and allowing each of them to listen to particular aspects of the input. These model neurons are then used as the inputs to another set of model neurons, just like in the brain, each of which selects certain inputs to listen to. The process is repeated up to a final layer, which the computer interprets as the output.
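A bare-bones sketch of that layered structure might look like the following; the layer sizes, random weights, and simple thresholding rule are arbitrary choices for illustration, not the architecture of any particular system:

```python
import numpy as np

# A minimal layered "neural network": each layer of model neurons listens to
# the outputs of the layer below through a set of weights.
# Layer sizes and random weights here are arbitrary, purely for illustration.
rng = np.random.default_rng(1)
layer_sizes = [10, 16, 16, 3]            # input -> two middle layers -> output
weights = [rng.normal(0.0, 0.5, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer; the final layer is read out as the answer."""
    for w in weights:
        x = np.maximum(0.0, w @ x)       # each model neuron sums its inputs, then "fires"
    return x

print(forward(rng.random(10)))           # ten input values in, three output values out
```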
The computer is shown many examples where the correct output is known and uses its neural network (which is initially set up mostly randomly) to guess what its output should be. When it guesses incorrectly, it modifies which inputs each neuron listens to (how they “decode,” and thus what they themselves “encode”), adjusting itself so that it reproduces the correct output more often. Ultimately, the neural network arrives at a configuration in which it can take in arbitrary inputs and generate reasonable outputs. What’s really remarkable about the process is how little you actually have to program in: the neural network figures out most of the solution to the problem on its own. The abstractions it comes up with to understand the information it observes form a sort of intuition about how specific types of inputs interact.
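The learning step can be sketched just as minimally: show the model an example, compare its guess with the known answer, and nudge the weights to shrink the error. The single-layer model and made-up data below are only meant to show the flavor of that loop, not how any real system like AlphaGo was trained:

```python
import numpy as np

# Toy learning loop: a single layer of weights, adjusted to reduce its errors
# on examples where the correct output is known. All data here are made up.
rng = np.random.default_rng(2)
inputs = rng.random((100, 5))                  # 100 examples, 5 numbers each
true_weights = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
targets = inputs @ true_weights                # the known "correct outputs"

weights = np.zeros(5)                          # start off knowing nothing
learning_rate = 0.1

for step in range(1000):
    guesses = inputs @ weights                 # what the model currently predicts
    errors = guesses - targets                 # how wrong each guess is
    gradient = inputs.T @ errors / len(inputs)
    weights -= learning_rate * gradient        # nudge the weights to shrink the error

print(weights)                                 # ends up very close to true_weights
```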
The whole process ends up working almost astonishingly well. Neural networks underlie the algorithms used by Facebook to recognize you in pictures, by Netflix to recommend films, and, of course, by Google to play Go “intuitively.” In the case of AlphaGo, all we know is what we’ve put in: the state of the board and a few descriptive variables, like the number of white and black stones. How to get from there to the right move is buried deep in AlphaGo’s “mind,” in the wiring and responses that the neural network has learned. What’s really incredible about these technologies is that they often find efficient representations of the world that closely mimic the ones our own brains use, like the preference for oriented edges in V1 described above when classifying images. Though neural networks are only a superficial approximation of the way computation is actually done in the brain, they are starting to be used by neuroscientists as well, as they try to model and understand the way real brains compute. This work is beginning to give us some of the first real insights into how simple computations like those performed by single neurons can come together to generate the richness we ourselves experience.
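Returning to AlphaGo for a moment, the kind of raw input described above could be encoded along the following lines. The feature choices here are hypothetical and much simpler than AlphaGo’s actual input representation; they only show how a board position becomes numbers a network can listen to:

```python
import numpy as np

# Hypothetical encoding of a Go position as a vector of numbers that a
# network's first layer could listen to. AlphaGo's real features are richer.
EMPTY, BLACK, WHITE = 0, 1, 2
board = np.zeros((19, 19), dtype=int)          # an empty 19x19 Go board
board[3, 3], board[15, 15] = BLACK, WHITE      # place a couple of example stones

features = np.concatenate([
    (board == BLACK).astype(float).ravel(),    # where the black stones are
    (board == WHITE).astype(float).ravel(),    # where the white stones are
    [float(np.sum(board == BLACK)),            # descriptive variables: stone counts
     float(np.sum(board == WHITE))],
])
print(features.shape)                          # one long input vector: (724,)
```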
So now that we’ve beaten Go, have we achieved “artificial intelligence”? Well, not quite. The field of AI is notorious for moving the goalposts: twenty years ago, beating the world chess champion was considered the true test of artificial intelligence. After IBM’s Deep Blue defeated Garry Kasparov in 1997, suddenly chess was considered too simple a problem. Douglas Hofstadter famously quipped that “AI is whatever hasn’t been done yet.” And just as they have after every AI milestone we’ve passed, experts are quite sure that AlphaGo does not capture what we usually mean when we say “AI.” But what we have learned is that neural networks are powerful tools for building systems that seem to behave intelligently, and emulating the way brains perform computations is likely to play a part in solving whatever problem we decide to call the next “holy grail” of AI.
Stephen Thornquist is a 3rd year PhD student in the Program in Neuroscience at Harvard.
This article is part of the April 2016 Special Edition on Neurotechnology.
Further reading:
For more information about AlphaGo, check out the website for DeepMind (the organization within Google that developed AlphaGo).
The original paper describing how AlphaGo works can be found in Nature.