There’s a long way to go in understanding the brain
Neuroscientists speculate on ways to plug gaps in current knowledge about neural circuitry
Scientists pour a lot of brainpower into understanding how their experimental equipment works.
You don’t want to be fooled into thinking you’ve made a great discovery because of some quirk in the apparatus you didn’t know about. Just the other day, a new paper published online suggested that the instruments used to detect gravitational waves exhibited such a quirk, tricking scientists into claiming the detection of waves that maybe weren’t really there.
It appears that gravity wave fans can relax, though. A response to the challenge pretty much establishes that the new criticism doesn’t undermine the wave discoveries. Of course, you never know — supposedly well-established results sometimes do fade away. Often that’s because scientists have neglected to understand the most important part of the entire experimental apparatus — their own brains.
It’s the brain, after all, that devises experiments and interprets their results. How the brain perceives, how it makes decisions and judgments, and how those judgments can go awry are at least as important to science as knowing the intricacies of nonbiotic experimental machinery. And as any brain scientist will tell you, there’s still a long way to go before understanding the brain will get crossed off science’s to-do list. But there has been progress. A recent special issue of the journal Neuron offers a convenient set of “perspective” papers exploring the current state of understanding of the brain’s inner workings. Those papers show that a lot is known. But at the same time they emphasize that there’s a lot we don’t know.
Glancing at the table of contents reveals the first lesson about understanding the brain: It’s a complex problem that needs to be approached from multiple perspectives.
On one level, there are the dynamics of the electrical currents that constitute the main signaling method of the brain’s nerve cells. On a higher level, there’s the need to figure out the rules by which nerve cells make connections (synapses) and create the neural circuitry for processing sensory input, learning and behaving. Another challenge is understanding how nerve cell networks represent memories and how you recall what you’ve learned. And it’s essential to understand how neurobiological processing conducted by molecules, cells and electrical signaling gets translated into behaviors, from simple bodily movements to complex social interactions.
Nerve cells in the brain, or neurons, are known to communicate among themselves by transmitting electrical signals, aided by chemical signaling at the synapses connecting the neurons. But there are gaps in understanding how that process takes the brain from perceptions to thoughts to actions. Each of Neuron’s perspective papers both describes what’s already known about how the brain works and offers speculations where scientists lack full knowledge about how the brain does its jobs.
Much of the effort to explain the brain involves mapping the electrical signaling throughout the entire network of nerve cell connections. Per Roland of the University of Copenhagen, for instance, discusses how those signals vary in space and time. He emphasizes the important balance between signaling that incites neurons to send signals and the messaging that inhibits signaling, keeping some neurons quiet.
Sophie Denève and colleagues of the Ecole Normale Supérieure in Paris also emphasize the balance between excitation and inhibition in neural circuitry. That balance is important, they say, for understanding how the whole brain can learn to do things based on changes in the connections between individual neurons. Somehow the rules governing synaptic connections between cells enable such “local” activity to modify the “global” neural circuitry that carries out the brain’s many functions. Excitation-inhibition balance, plus feedback from the global network influencing synapse strength, “can ensure that global functions can be learned with local learning rules,” Denève and colleagues write.
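The stabilizing role of that excitation-inhibition balance can be illustrated with a toy firing-rate model (my illustration, not a model from the Neuron papers): a neuron driven by excitatory input, with inhibition that grows in proportion to the neuron’s own activity. When inhibition tracks excitation, the firing rate settles at a modest level instead of running away.

```python
# Toy rate model (illustrative only): dr/dt = -r + max(0, E - g*r),
# where E is excitatory drive and g scales activity-dependent inhibition.

def simulate_rate(excitation, inhibition_gain, steps=1000, dt=0.01):
    """Integrate the rate equation with simple Euler steps."""
    r = 0.0
    for _ in range(steps):
        drive = max(0.0, excitation - inhibition_gain * r)  # net input after inhibition
        r += dt * (-r + drive)
    return r

# Same excitatory drive, with and without inhibition tracking activity.
balanced = simulate_rate(excitation=10.0, inhibition_gain=4.0)    # settles near 2
unbalanced = simulate_rate(excitation=10.0, inhibition_gain=0.0)  # climbs toward 10
```

The balanced case converges to the fixed point E / (1 + g) = 2, a fifth of the unbalanced rate, which is the qualitative point: inhibition that scales with activity keeps the circuit responsive without letting signaling saturate.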
Almost all these approaches to figuring out the brain involve how it manipulates information. In a sense, the ultimate key question is how the brain conducts the mysterious process by which it absorbs information in the form of lights and colors, sounds, smells and tactile inputs and transforms it into physical actions — ideally behaviors that are appropriate responses to the inputs. Just (OK, not “just,” but sort of) as in a computer, the brain transforms input into output; information about the external world is manipulated to produce information about how to react to it.
But because sensory input has its limits, and some of it is ambiguous, the informational variables of the external world cannot be gauged with certainty, Xaq Pitkow and Dora Angelaki of Baylor College of Medicine and Rice University in Houston point out in their perspective. So the brain’s behavioral choices must be based on some method of computing probabilities to infer the likely state of the world — and then choosing the wisest (probably) actions in response.
“It is widely accepted that the brain somehow approximates probabilistic inference,” Pitkow and Angelaki write. But nobody really knows how the brain does it. Pitkow and Angelaki propose that multiple populations of the brain’s neurons perform various computations to make appropriate behavioral decisions.
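A textbook example of the kind of probabilistic inference they mean (a standard illustration, not the specific model in Pitkow and Angelaki’s paper) is combining two noisy sensory cues about the same quantity. The statistically optimal strategy weights each cue by its reliability, the inverse of its variance, and the combined estimate is more certain than either cue alone:

```python
# Reliability-weighted fusion of two independent Gaussian estimates —
# the classic cue-combination example of probabilistic inference.

def combine_cues(mean1, var1, mean2, var2):
    """Bayesian fusion: weight each cue by its inverse variance."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)   # reliability weight of cue 1
    combined_mean = w1 * mean1 + (1 - w1) * mean2
    combined_var = 1 / (1 / var1 + 1 / var2)  # smaller than either input variance
    return combined_mean, combined_var

# Hypothetical numbers: a reliable visual cue says the heading is 10 degrees,
# a noisier vestibular cue says 20 degrees.
mean, var = combine_cues(10.0, 1.0, 20.0, 4.0)
```

The fused estimate (12 degrees here) sits closer to the more reliable cue, and its variance (0.8) is below that of either input, which is why this kind of weighting counts as approximating optimal inference.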
Patterns of electrical signaling by these neurons must represent the original sensory stimuli — that is, the patterns in the stimuli are encoded in the patterns of electrical signaling among the neurons. Those neural signaling patterns, in Pitkow and Angelaki’s description, are then recoded into another set of patterns; that process sorts out the important variables in the environment from those that don’t matter. Those patterns are then decoded in the process of generating behavioral actions.
In sum, the brain appears to implement algorithms for collecting and assessing information about the environment and encoding that information in messages that tell the body what to do. Somehow those algorithms allow the brain to conduct statistical computations that combine beliefs about the environment with the expected outcome of different behaviors.
Pitkow and Angelaki present sophisticated speculation about the possible ways the brain could accomplish this task. It’s clearly an unimaginably complicated process, and figuring out how the brain does it will require more sophisticated experiments than neuroscientists have so far imagined. Much research on brain function in animals, for instance, offers the animal a choice of two options, given various external conditions. But tasks of that nature are vastly simpler than the jobs that evolution optimized brains for.
“The real benefit of complex inferences like weighing uncertainty may not be apparent unless the uncertainty has complex structure,” Pitkow and Angelaki argue. “Overly simple tasks” are “ill-suited to expose the inferential computations that make the brain special.”
And so truly understanding the brain, it seems, will require better experiments — using apparatus that is more fully understood than the brain now is — of sufficient complexity to be worthy of probing the brain’s abilities.
Follow me on Twitter: @tom_siegfried