New Brain Maps With Unmatched Detail May Change Neuroscience

Sitting at the desk in his lower-campus office at Cold Spring Harbor Laboratory, the neuroscientist Tony Zador turned his computer monitor toward me to show off a complicated matrix-style graph. Imagine something that looks like a spreadsheet but instead of numbers it’s filled with colors of varying hues and gradations. Casually, he said: “When I tell people I figured out the connectivity of tens of thousands of neurons and show them this, they just go ‘huh?’ But when I show this to people …” He clicked a button onscreen and a transparent 3-D model of the brain popped up, spinning on its axis, filled with nodes and lines too numerous to count. “They go ‘What the _____!’”

What Zador showed me was a map of 50,000 neurons in the cerebral cortex of a mouse. It indicated where the cell bodies of every neuron sat and where they sent their long axon branches. A neural map of this size and detail has never been made before. Forgoing the traditional method of brain mapping, which involves marking neurons with fluorescence, Zador had taken an unusual approach that drew on the long tradition of molecular biology research at Cold Spring Harbor, on Long Island. He used bits of genomic information to tag each individual neuron with a unique RNA sequence, or “bar code.” He then dissected the brain into cubes like a sheet cake and fed the pieces into a DNA sequencer. The result: a 3-D rendering of 50,000 neurons in the mouse cortex (with as many more to be added soon), mapped with single-cell resolution.

This work, Zador’s magnum opus, is still being refined for publication. But in a paper recently published by Nature, he and his colleagues showed that the technique, called MAPseq (Multiplexed Analysis of Projections by Sequencing), can be used to find new cell types and projection patterns never before observed. The paper also demonstrated that this new high-throughput mapping method is strongly competitive in accuracy with the fluorescent technique, which is the current gold standard but works best with small numbers of neurons.

Tony Zador, a neurophysiologist at Cold Spring Harbor Laboratory, realized that genome sequencing techniques could scale up to tame the astronomical numbers of neurons and interconnections in the brain.
jeansweep/Quanta Magazine

The project was born from Zador’s frustration during his “day job” as a neurophysiologist, as he wryly referred to it. He studies auditory decision-making in rodents: how their brains hear sounds, process the audio information and determine a behavioral output or action. Electrophysiological recordings and the other traditional tools for addressing such questions left the mathematically inclined scientist unsatisfied. The problem, according to Zador, is that we don’t understand enough about the circuitry of the neurons, which is why he pursues his “second job” of creating tools for imaging the brain.

The current state of the art for brain mapping is embodied by the Allen Brain Atlas, which was compiled from work in many laboratories over several years at a cost upward of $25 million. The Allen Atlas is what’s known as a bulk connectivity atlas because it traces known subpopulations of neurons and their projections as groups. It has been highly useful for researchers, but it cannot distinguish subtle differences within the groups or neuron subpopulations.

If we ever want to know how a mouse hears a high-pitched trill, processes that the sound means a refreshing drink reward is available and lays down new memories to recall the treat later, we will need to start with a map or wiring diagram for the brain. In Zador’s view, lack of knowledge about that kind of neural circuitry is partly to blame for why more progress has not been made in the treatment of psychiatric disorders, and why artificial intelligence is still not all that intelligent.

Justus Kebschull, a Stanford University neuroscientist, an author of the new Nature paper and a former graduate student in Zador’s lab, remarked that doing neuroscience without knowing about the circuitry is like “trying to understand how a computer works by looking at it from the outside, sticking an electrode in and probing what we can find. … Without ever knowing the hard drive is connected to the processor and the USB port provides input to the whole system, it’s difficult to understand what’s happening.”

Inspiration for MAPseq struck Zador when he learned of another brain mapping technique called Brainbow. Hailing from the lab of Jeff Lichtman at Harvard University, this method was remarkable in that it genetically labeled up to 200 individual neurons simultaneously using different combinations of fluorescent dyes. The result was a tantalizing tableau of neon-colored neurons that displayed, in detail, the complex intermingling of axons and neuron cell bodies. The groundbreaking work gave hope that mapping the connectome—the complete plan of neural connections in the brain—was soon to be a reality. Unfortunately, a limitation of the technique in practice is that, through a microscope, experimenters could resolve only about five to 10 distinct colors, which was not enough to penetrate the tangle of neurons in the cortex and map many neurons at once.

That’s when the lightbulb went on in Zador’s head. He realized that the challenge of the connectome’s huge complexity might be tamed if researchers could harness the increasing speed and dwindling costs of high-throughput genomic sequencing techniques. “It’s what mathematicians call reducing it to a previously solved problem,” he explained.

In MAPseq, researchers inject an animal with genetically modified viruses that carry a variety of known RNA sequences, or “bar codes.” For a week or more, the viruses multiply inside the animal, filling each neuron with some distinctive combination of those bar codes. When the researchers then cut the brain into sections, the RNA bar codes can help them track individual neurons from slide to slide.
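Conceptually, the readout is a bookkeeping exercise over barcode counts: a barcode that shows up abundantly in the cube containing a neuron’s cell body, and again at lower levels in distant cubes, marks that neuron as projecting to those areas. The short Python sketch below illustrates that idea with invented numbers; the region names, read counts and threshold are hypothetical and are not taken from the MAPseq pipeline itself.

```python
# Hypothetical sketch of the barcode bookkeeping behind a projection map:
# rows = neuron barcodes, columns = dissected brain regions,
# values = sequencing read counts for that barcode in that region.
# All numbers and names are invented for illustration.
import numpy as np

regions = ["V1 (soma site)", "area A", "area B", "area C"]
barcodes = ["ACGTTGCA", "TTGACCGA", "GGCATACT"]

counts = np.array([
    [9500, 320,  12, 410],   # barcode 1: strong soma signal, axons in A and C
    [8700,   5, 290,   8],   # barcode 2: axons mainly in B
    [9100, 150, 180, 200],   # barcode 3: broad projections
])

# Normalize by each neuron's soma-site count so differently labeled
# neurons can be compared on the same scale.
normalized = counts / counts[:, [0]]

# Call a projection wherever the normalized count clears an
# arbitrary demonstration threshold.
projects_to = normalized[:, 1:] > 0.01

for bc, row in zip(barcodes, projects_to):
    targets = [r for r, hit in zip(regions[1:], row) if hit]
    print(f"{bc}: projects to {', '.join(targets) or 'no detected targets'}")
```

Stack enough of these rows together and you get something like the matrix-style graph Zador displayed on his monitor: one row per barcoded neuron, one column per piece of brain.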

Zador’s insight led to the new Nature paper, in which his lab and a team at University College London led by the neuroscientist Thomas Mrsic-Flogel used MAPseq to trace the projections of almost 600 neurons in the mouse visual system. (Editor’s note: Zador and Mrsic-Flogel both receive funding from the Simons Foundation, which publishes Quanta.)

Six hundred neurons is a modest start compared with the tens of millions in the brain of a mouse. But it was ample for the specific purpose the researchers had in mind: They were looking to discern whether there is a structure to the brain’s wiring pattern that might be informative about its function. A currently popular theory is that in the visual cortex, an individual neuron gathers a specific bit of information from the eye—about the edge of an object in the field of view, or a type of movement or spatial orientation, for example. The neuron then sends a signal to a single corresponding area in the brain that specializes in processing that type of information.

To test this theory, the team first mapped a handful of neurons in mice in the traditional way by inserting a genetically encoded fluorescent dye into the individual cells. Then, with a microscope, they traced how the cells stretched from the primary visual cortex (the brain area that receives input from the eyes) to their endpoints elsewhere in the brain. They found that the neurons’ axons branched out and sent information to many areas simultaneously, overturning the one-to-one mapping theory.

Next, they asked if there were any patterns to these projections. They used MAPseq to trace the projections of 591 neurons as they branched out and innervated multiple targets. What the team observed was that the distribution of axons was structured: Some neurons always sent axons to areas A, B and C but never to D and E, for example.

These results suggest the visual system contains a dizzying level of cross-connectivity and that the pattern of those connections is more complicated than a one-to-one mapping. “Higher visual areas don’t just get information that is specifically tailored to them,” Kebschull said. Instead, they share many of the same inputs, “so their computations might be tied to each other.”

Nevertheless, the fact that certain cells do project to specific areas also means that within the visual cortex there are specialized cells that have not yet been identified. Kebschull said this map is like a blueprint that will enable later researchers to understand what these cells are doing. “MAPseq allows you to map out the hardware. … Once we know the hardware we can start to look at the software, or how the computations happen,” he said.

MAPseq’s competitive edge in speed and cost for such investigations is considerable: According to Zador, the technique should be able to scale up to handle 100,000 neurons within a week or two for only $10,000 — far faster than traditional mapping would be, at a fraction of the cost.

Such advantages will make it more feasible to map and compare the neural pathways of large numbers of brains. Studies of conditions such as schizophrenia and autism that are thought to arise from differences in brain wiring have often frustrated researchers because the available tools don’t capture enough details of the neural interconnections. It’s conceivable that researchers will be able to map mouse models of these conditions and compare them with more typical brains, sparking new rounds of research. “A lot of psychiatric disorders are caused by problems at the circuit level,” said Hongkui Zeng, executive director of the structured science division at the Allen Institute for Brain Science. “Connectivity information will tell you where to look.”

High-throughput mapping also allows scientists to gather lots of neurological data and look for patterns that reflect general principles of how the brain works. “What Tony is doing is looking at the brain in an unbiased way,” said Sreekanth Chalasani, a molecular neurobiologist at the Salk Institute. “Just as the human genome map has provided a scaffolding to test hypotheses and look for patterns in [gene] sequence and function, Tony’s method could do the same” for brain architecture.

The detailed map of the human genome didn’t immediately explain all the mysteries of how biology works, but it did provide a biomolecular parts list and open the way for a flood of transformative research. Similarly, in its present state of development, MAPseq cannot provide any information about the function or location of the cells it is tagging or show which cells are talking to one another. Yet Zador plans to add this functionality soon. He is also collaborating with scientists studying various parts of the brain, such as the neural circuits that underlie fear conditioning.

“I think there are insights to be derived from connectivity. But just like genomes themselves aren’t interesting, it’s what they enable that is transformative. And that’s why I’m excited,” Zador said. “I’m hopeful it’s going to provide the scaffolding for the next generation of work in the field.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/new-brain-maps-with-unmatched-detail-may-change-neuroscience/

A revolution in our sense of self | Nick Chater

Psychologists have tried to plumb the depths of human motivation to make sense of our behaviour. But our inner mental world is a fiction, sustained by constant improvisation

At the climax of Anna Karenina, the heroine throws herself under a train as it moves out of a station on the edge of Moscow. But did she really want to die? Had the ennui of Russian aristocratic life and the fear of losing her lover, Vronsky, become so intolerable that death seemed the only escape? Or was her final act mere capriciousness, a theatrical gesture of despair, not seriously imagined even moments before the opportunity arose?

We ask such questions, but can they possibly have answers? If Tolstoy says that Anna has dark hair, then Anna has dark hair. But if Tolstoy doesn’t tell us why Anna jumped to her death, then Anna’s motives are surely a void. We can attempt to fill this void with our own interpretations and debate their plausibility. But there is no hidden truth about what Anna really wanted, because, of course, Anna is a fictional character.

Suppose instead that Anna were a historical figure and Tolstoy’s masterpiece a journalistic reconstruction. Now Anna’s motivation becomes a matter of history, rather than a literary interpretation. Yet our method of inquiry remains the same: the very same text would now be viewed as providing (perhaps unreliable) clues about the mental state of a real person, not a fictional character. Historians, rather than literary scholars, might debate competing interpretations.

Now imagine that we could ask Anna herself. Suppose the great train slammed on its brakes just in time. Anna, apparently mortally injured, is conveyed in anonymity to a Moscow hospital and, against the odds, pulls through. We catch up with Anna convalescing in a Swiss sanatorium. But, as likely as not, Anna will be as unsure as anyone else about her true motivations. After all, she too has to engage in a process of interpretation as she attempts to account for her behaviour. To be sure, she may have data unavailable to an outsider: she may, for example, remember the despairing words “Vronsky has left me forever” running through her mind as she approached the edge of the platform. However, any such advantage may be more than outweighed by the distorting lens of self-perception. In truth, autobiography always deserves a measure of scepticism.

What motivates Anna to take her own life? Keira Knightley in Anna Karenina, directed by Joe Wright. Photograph: Universal Pictures/Sportsphoto/Allstar

There are two opposing conclusions that one might draw from this vignette. One is that our minds have dark and unfathomable hidden depths. From this viewpoint, we cannot expect people to look reliably within themselves and compile a complete and true account of their beliefs and motives. Psychologists, psychiatrists and neuroscientists have long debated how best to plumb the deep waters of human motivation. Word associations, the interpretation of dreams, hours of intensive psychotherapy, behavioural experiments, physiological recordings and brain imaging have been popular options.

I believe, though, that our reflections should lead us to a different conclusion: that the interpretation of real people is no different from the interpretation of fictional characters. If Tolstoy’s novel had been reportage, and Anna a living, breathing member of the 19th-century Russian aristocracy, then, of course, there would be a truth about whether Anna was born on a Tuesday. But, I argue, there would still be no truths about the real Anna’s motives. No amount of therapy, dream analysis, word association, experiment or brain scanning can recover a person’s true motives, not because they are difficult to find, but because there is nothing to find.

Evidence of a hoax

This is not a conclusion I have come to lightly. As a psychologist, I want to understand how people think and decide. It would be awfully convenient if the rich stories we tell about our own thoughts were at least roughly on the right track; if they just needed to be tidied, pruned and generally knocked into shape to get a true picture. It would be convenient, but utterly wrong. The weight of evidence against the reality of mental depth is simply overwhelming. Having resisted the evidence for years, I’ve finally admitted defeat.

Now you see it: the 12 dots illusion. Photograph: Jacques Ninio

Perception provides some ominous clues. Consider Jacques Ninio’s wonderful 12 dots illusion. Twelve black dots are arranged in three rows of four dots each. The dots are large enough to be seen clearly and simultaneously against a white background. But when arranged on the grid, they seem only to appear when you are paying attention to them. Dots we are not attending to are somehow swallowed up into the diagonal grey lines. Interestingly, we can pay attention to adjacent pairs of dots, to lines of dots, to triangles and even squares, although these are highly unstable. But our attention is in short supply and, where we are not attending, the dots disappear.

Remarkably, the limits of attention apply just as well as we scan our everyday environment: we can attend to just one object at a time; the other objects are effectively invisible. Our sense that we can grasp the entire visual world in full detail and colour is, then, a hoax. Instead, we see through a remarkably narrow window of attention, grasping just one object, word or face at a time. But the hoax is sustained because, as soon as we wonder about, say, the colour of a vase or the identity of a word, our eyes and our attention can, almost instantly, flick into action, lock on the target and answer our question. And the answer is created so fluently that we imagine that it was there all along.

By extension, then, we may begin to doubt our phenomenology of a rich inner world, teeming with ideas and feelings. Indeed, it turns out that here, too, our brains have been inventing wildly. To pick one particularly striking example, let us consider the remarkable classic studies of cognitive neuroscientist Michael Gazzaniga on patients with split brains, whose left and right brain hemispheres have been surgically severed.

Our brains have crossover wiring: the left hemisphere sees the right half of the visual world and controls the right hand, and vice versa. This means that, for split-brain patients, the right and left hemispheres can be shown entirely different stimuli and make wholly independent responses. In a famous demonstration, Gazzaniga shows a snowy scene to the right hemisphere and a chicken’s foot to the left. The right hemisphere has to find a picture that matches what it sees (the snowy scene) and naturally enough chooses a picture of a shovel (with the left hand). How does the left hemisphere (the seat of language) explain this choice? It should be baffled: it knows nothing about the real cause of the right hemisphere’s choice, because it can’t see the snowy scene. Yet, quick as a flash, it has a ready answer: the chicken’s foot is associated with a chicken, and you need a shovel to clean out the chicken shed. Elegant, but entirely wrong.

Our language system is continually generating a flow of plausible-sounding explanations of the reasons behind our actions but, suspiciously, the flow continues with the same speed and confidence when our language system cannot possibly know the truth. And it continues without balking. It was confabulating all along.

Our inner, mental world is a work of the imagination. We invent interpretations of ourselves and other people in the flow of experience, just as we conjure up those of fictional characters from a flow of written text. Returning to Anna, we can wonder whether she despaired primarily of her precipitous social fall, the future of her son or the meaninglessness of aristocratic life, rather than being tormented by love. There is no ground truth about the right interpretation, though some are more compelling and better evidenced in Tolstoy’s text than others. But Tolstoy, the journalist, would have nothing more than interpretations of the real Anna’s behaviour; Anna herself could only venture one more interpretation of her own.

The unfolding of a life is not so different to that of a novel. We generate our beliefs, values and actions in the moment. Thoughts, like fiction, come into existence in the instant that they are invented and not a moment before. The sense that behaviour is merely the surface of a vast sea, immeasurably deep and teeming with inner motives, beliefs and desires is a conjuring trick played by our own minds. The truth is not that the depths are empty, or even shallow, but that the mind is flat: the surface is all there is.

The improvised mind has an answer for everything. Each choice, preference or belief, small and large, can, when challenged, yield an easy flow of rationalisation. Why this sofa? Why Bach, not Brahms? Why this choice of career? Why children or not? Why evolution, not creationism? How does a bicycle work, or a violin, or a currency? And each justification can be buttressed with further justifications, caveats and clarifications, and each of these can be defended further, seemingly without end. Our creative powers are so great, and so effortless, that we can fancy we must be consulting an inner oracle, which can look up preformed answers to each question.

Bach or Brahms? The improvised mind can justify any preference. Photograph: Stock Montage/Getty Images

One crucial clue that the inner oracle is an illusion comes, on closer analysis, from the fact that our explanations are less than watertight. Indeed, they are systematically and spectacularly leaky. Now it is hardly controversial that our thoughts seem fragmentary and contradictory. I can’t quite tell you how a fridge works or how electricity flows around the house. I continually fall into confusion and contradiction when struggling to explain rules of English grammar, how quantitative easing works or the difference between a fruit and a vegetable.

But can’t the gaps be filled in and the contradictions somehow resolved? The only way to find out is to try. And try we have. Two thousand years of philosophy have been devoted to the problem of clarifying many of our commonsense ideas: causality, the good, space, time, knowledge, mind and many more; clarity has, needless to say, not been achieved. Moreover, science and mathematics began with our commonsense ideas, but ended up having to distort them so drastically, whether discussing heat, weight, force, energy and many more, that they were refashioned into entirely new, sophisticated concepts, with often counterintuitive consequences. This is one reason why real physics took centuries to discover and presents a fresh challenge to each generation of students.

Philosophers and scientists have found that beliefs, desires and similar everyday psychological concepts turn out to be especially puzzling and confused. We project them liberally: we say that ants know where the food is and want to bring it back to the nest; cows believe it is about to rain; Tamagotchis want to be fed; autocomplete thinks I meant to type “gristle” when I really wanted “grist”. We project beliefs and desires just as wildly on ourselves and others; since Freud, we even create multiple inner selves (id, ego, superego), each with its own motives and agendas. But such rationalisations are never more than convenient fictions. Indeed, psychoanalysis is projection at its apogee: stories of the greatest possible complexity can be spun from the barest fragments of behaviours or snippets of dreams.

An experiment in artificial intelligence

Yet perhaps our thoughts and actions may be guided by commonsense theories that, though different from scientific theories, could be coherent nonetheless. This is a seductive idea. Starting in the 1950s, decades of intellectual effort were poured into a particularly sophisticated and concerted attempt to crystallise some of our commonsense theories. The goal was to systematise and organise human thought to replicate it and create machines that think like people.

Early attempts to create artificial intelligence followed this approach. Hopes were high. Over successive decades, leading researchers forecast that human-level intelligence would be achieved within 20 to 30 years. By the 1970s, serious doubts began to set in. By the 1980s, the programme of mining and systematising knowledge started to grind to a halt. Indeed, the project of coaxing the theories from our inner oracle failed in a particularly instructive way. Drawing out the knowledge, beliefs, motives and so on that underpinned people’s behaviour turned out to be hopelessly difficult.

Chess grandmasters, it turns out, can’t really explain how they play chess, doctors can’t explain how they diagnose patients and none of us can remotely explain how we understand the everyday world of people and objects. What we say sounds like explanation, but really it is a barely coherent jumble. Perhaps the single most important discovery from the first decades of artificial intelligence is just how profound and irremediable this problem is.

Intelligence not required: a car assembled by robots at Jaguar Land Rover. Photograph: Matt Crossick/Empics

The project of modelling artificial on human intelligence has since been quietly abandoned. Instead, over recent decades, AI researchers have made advances by building machines that learn not from people but from direct confrontation with huge quantities of data: images, speech waves, linguistic corpora, chess games and so on. Much of AI has mutated into a distinct but related field: machine learning. This has been possible because of advances on a number of fronts: computers have become faster, data-sets larger and learning methods cleverer. But at no stage have human beliefs been mined or commonsense theories reconstructed.

The spectacular improvisation of the human mind is, I believe, the core of human intelligence and the ability that allows us to deal so successfully with the complex, open-ended challenges thrown at us by our physical environment and the social world. AI and robotics have succeeded precisely where those improvisational abilities are not required: in the pristine worlds of chess, Go and car assembly plants, for example. Don’t be fooled: the rise of the robots is no more than super-sophisticated automation. The amazing creativity of your brain, as it helps you improvise your way through daily life, won’t be replicated in silicon in the near future, and perhaps never.

Inventing our future selves

Don’t despair. This does not mean there isn’t something we can define as a self. Our brains are relentless and compelling improvisers, creating the mind, moment by moment. But, as with any improvisation, in dance, music or storytelling, each fresh thought is not created out of nothing, but built from the fragments of past improvisations. So each of us is a unique history, together with a wonderfully creative machine for redeploying that history to create new perceptions, thoughts, emotions and stories. The layering of that history makes some patterns of thought natural for us, others awkward or uncomfortable. While drawing on our past, we are continually reinventing ourselves, and by directing that reinvention, we can shape who we are and who we will become.

So we are not driven by hidden, inexorable forces from a dark and subterranean mental world. Instead, our thoughts and actions are transformations of past thoughts and actions, and we often have considerable latitude, a certain judicial discretion, regarding which precedents we consider, which transformations we allow. As today’s thoughts and actions are tomorrow’s precedents, we are reshaping ourselves, moment by moment.

This viewpoint contradicts the Freudian picture of hidden inner depths, but it meshes naturally with the cognitive behavioural therapy (CBT) for which there is the best clinical backing. Reshaping our thoughts and actions is hard and requires establishing new patterns of thought and behaviour that overwrite the old, opening up productive channels along which our thoughts may more happily and productively flow. CBT aims to do precisely this: to establish new behaviours (to approach, rather than avoid, a phobic object) and thoughts (shifting thoughts away from negative ruminations) and to create new precedents that may, slowly, come to dominate the old. Therapies of all kinds can help us rewrite the story of our past, to create traditions of thought and action that more constructively address the future. What therapy does not, and cannot, do is to reveal pathologies lurking in our innermost depths: not because those depths are murky, but because they are nonexistent.

This is all very well, you may say. But surely we need beliefs and motives to explain why our thoughts and behaviour make sense, rather than being a completely incoherent jumble. Surely there are crucial inner facts about us, large and small, that set the course of our actions: the things we value, the ideals we believe in, the passions that move us. But if the mind is flat, despite the stories we tell about ourselves and each other, beliefs and motives cannot be driving our behaviour because they are a projection rather than a reality.

But layers of precedents, the successive adaptation and transformation of previous thoughts and actions to create new thoughts and actions, can provide a different, and more compelling, explanation for the orderly (and, on occasions, the disorderly) nature of thought. In particular, our culture can be viewed as a shared canon of precedents: things we do, want, say or think that create order in society as well as within each individual.

By laying down new precedents, we incrementally and collectively create our culture, but our new precedents are based on old, shared precedents, so that our culture also creates us. Considered in isolation, our selves turn out to be partial, fragmentary and alarmingly fragile; we are only the most lightly sketched of literary creations. Yet, collectively, we can construct lives, organisations and societies, which can be remarkably stable and coherent.

This is, I believe, a liberating thought. We are not driven by hidden motives, bound by unconscious forces or hopelessly imprisoned by our past. Each new thought and action is a chance to reshape ourselves, if only slightly. Our freedom has its limits, of course. Amateur saxophonists can’t freely choose to play like Charlie Parker, new learners of English can’t spontaneously emulate Sylvia Plath and physics students can’t spontaneously reason like Albert Einstein.

Freedom has its limits: we can’t choose to play like Charlie Parker. Photograph: Hulton Getty

New actions, skills and thoughts require building a rich, deep mental tradition; there is no shortcut to the thousands of hours needed to lay down the traces on which expertise is based. Each of us is a unique tradition from which our new thoughts and actions are created. So each of us will play music, write and think in our own way. Yet the same points arise in our everyday lives, our fears and worries, our sometimes bumpy interactions with other people. Our freedom consists not in the ability to transform ourselves magically in a single jump, but to reshape our thoughts and behaviours, one step at a time. Our current thoughts and actions are continually, if slowly, reprogramming our minds.

Does this viewpoint imply that we are blank slates, on which any mental patterns can be written? Not at all. Musical traditions build on the rhythmic pattern generators in our nervous systems, the way our brain groups sounds as voices and much more. Linguistic traditions are shaped by our vocal apparatus, how our brains generate and recognise complex sequences and so on. Human music and language can take many forms but not any form. Traditions of thought are no different; they, too, will be profoundly shaped by the biases and predilections of our brains and our genes.

So our thoughts and behaviour are influenced by, but not determined by, biology; and neither are we hemmed in by occult psychic forces within us. Any prisons of thought are of our own invention and can be dismantled just as they have been constructed. If the mind is flat, if we imagine our minds, lives and culture, we have the power to imagine an inspiring future and to make it real.

Nick Chater is the author of The Mind Is Flat (Allen Lane), professor of behavioural science at Warwick Business School and co-founder of Decision Technology Ltd.

Read more: https://www.theguardian.com/commentisfree/2018/apr/01/revolution-in-our-sense-of-self-sunday-essay

Brainless Embryos Suggest Bioelectricity Guides Growth

The tiny tadpole embryo looked like a bean. One day old, it didn’t even have a heart yet. The researcher in a white coat and gloves who hovered over it made a precise surgical incision where its head would form. Moments later, the brain was gone, but the embryo was still alive.

The brief procedure took Celia Herrera-Rincon, a neuroscience postdoc at the Allen Discovery Center at Tufts University, back to the country house in Spain where she had grown up, in the mountains near Madrid. When she was 11 years old, while walking her dogs in the woods, she found a snake, Vipera latastei. It was beautiful but dead. “I realized I wanted to see what was inside the head,” she recalled. She performed her first “lab test” using kitchen knives and tweezers, and she has been fascinated by the many shapes and evolutionary morphologies of the brain ever since. Her collection now holds about 1,000 brains from all kinds of creatures.

This time, however, she was not interested in the brain itself, but in how an African clawed frog would develop without one. She and her supervisor, Michael Levin, a software engineer turned developmental biologist, are investigating whether the brain and nervous system play a crucial role in laying out the patterns that dictate the shapes and identities of emerging organs, limbs and other structures.

For the past 65 years, the focus of developmental biology has been on DNA as the carrier of biological information. Researchers have typically assumed that genetic expression patterns alone are enough to determine embryonic development.

To Levin, however, that explanation is unsatisfying. “Where does shape come from? What makes an elephant different from a snake?” he asked. DNA can make proteins inside cells, he said, but “there is nothing in the genome that directly specifies anatomy.” To develop properly, he maintains, tissues need spatial cues that must come from other sources in the embryo. At least some of that guidance, he and his team believe, is electrical.

In recent years, by working on tadpoles and other simple creatures, Levin’s laboratory has amassed evidence that the embryo is molded by bioelectrical signals, particularly ones that emanate from the young brain long before it is even a functional organ. Those results, if replicated in other organisms, may change our understanding of the roles of electrical phenomena and the nervous system in development, and perhaps more widely in biology.

“Levin’s findings will shake some rigid orthodoxy in the field,” said Sui Huang, a molecular biologist at the Institute for Systems Biology. If Levin’s work holds up, Huang continued, “I think many developmental biologists will be stunned to see that the construction of the body plan is not due to local regulation of cells … but is centrally orchestrated by the brain.”

Bioelectrical Influences in Development

The Spanish neuroscientist and Nobel laureate Santiago Ramón y Cajal once called the brain and neurons, the electrically active cells that process and transmit nerve signals, the “butterflies of the soul.” The brain is a center for information processing, memory, decision making and behavior, and electricity figures into its performance of all of those activities.

But it’s not just the brain that uses bioelectric signaling—the whole body does. All cell membranes have embedded ion channels, protein pores that act as pathways for charged molecules, or ions. Differences between the number of ions inside and outside a cell result in an electric gradient—the cell’s resting potential. Vary this potential by opening or blocking the ion channels, and you change the signals transmitted to, from and among the cells all around. Neurons do this as well, but even faster: To communicate among themselves, they use molecules called neurotransmitters that are released at synapses in response to voltage spikes, and they send ultra-rapid electrical pulses over long distances along their axons, encoding information in the pulses’ pattern, to control muscle activity.
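For readers who want the textbook arithmetic behind that resting potential, the standard relation is the Nernst equation; the formula and the worked potassium example below are general electrophysiology background, not figures taken from Levin’s work or from this article.

```latex
% Nernst potential for a single ion species (standard textbook relation).
% R: gas constant, T: absolute temperature, z: ion charge, F: Faraday constant.
\[
  E_{\mathrm{ion}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}
\]
% With typical mammalian K+ concentrations (about 5 mM outside, 140 mM inside)
% at body temperature, where RT/F is roughly 26.7 mV:
\[
  E_{\mathrm{K^{+}}} \approx 26.7\,\mathrm{mV} \times \ln\frac{5}{140} \approx -89\,\mathrm{mV}
\]
```

A real cell’s resting potential reflects several ion species at once, which is why opening or blocking particular channels, as described above, shifts the voltage that neighboring cells can read.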

Levin has thought about hacking networks of neurons since the mid-1980s, when he was a high school student in the suburbs near Boston, writing software for pocket money. One day, while browsing a small bookstore in Vancouver at Expo 86 with his father, he spotted a volume called The Body Electric, by Robert O. Becker and Gary Selden. He learned that scientists had been investigating bioelectricity for centuries, ever since Luigi Galvani discovered in the 1780s that nerves are animated by what he called “animal electricity.”

However, as Levin continued to read up on the subject, he realized that, even though the brain uses electricity for information processing, no one seemed to be seriously investigating the role of bioelectricity in carrying information about a body’s development. Wouldn’t it be cool, he thought, if we could comprehend “how the tissues process information and what tissues were ‘thinking about’ before they evolved nervous systems and brains?”

He started digging deeper and ended up getting a biology doctorate at Harvard University in morphogenesis—the study of the development of shapes in living things. He worked in the tradition of scientists like Emil du Bois-Reymond, a 19th-century German physician who discovered the action potential of nerves. In the 1930s and ’40s, the American biologists Harold Burr and Elmer Lund measured electric properties of various organisms during their embryonic development and studied connections between bioelectricity and the shapes animals take. They were not able to prove a link, but they were moving in the right direction, Levin said.

Before Genes Reigned Supreme

The work of Burr and Lund occurred during a time of widespread interest in embryology. Even the English mathematician Alan Turing, famed for cracking the Enigma code, was fascinated by embryology. In 1952 he published a paper suggesting that body patterns like pigmented spots and zebra stripes arise from the chemical reactions of diffusing substances, which he called morphogens.
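Turing’s proposal can be made concrete with a few lines of simulation: two chemicals that diffuse at different rates while reacting with each other settle into stable spots or stripes. The sketch below uses the Gray-Scott model, a later and widely used reaction-diffusion system in the same spirit as Turing’s 1952 paper; the parameter values are conventional demonstration numbers, not figures from Turing’s work.

```python
# Minimal Gray-Scott reaction-diffusion simulation, a standard demonstration
# of Turing-style pattern formation (two diffusing, reacting chemicals
# self-organize into spots or stripes). Parameters are common demo values.
import numpy as np

def laplacian(Z):
    # Five-point stencil with periodic (wrap-around) boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))

# Seed a small patch of the second chemical in the center.
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion rates, feed rate, kill rate

for _ in range(10000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a spotted/striped field; view it with, e.g.,
# matplotlib.pyplot.imshow(V).
print("pattern value range:", float(V.min()), float(V.max()))
```

The point is not the specific chemistry but the principle Turing identified: simple local rules of reaction and diffusion, with no central blueprint, are enough to generate large-scale pattern.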

"This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration."

Masayuki Yamashita

But organic explanations like morphogens and bioelectricity didn’t stay in the limelight for long. In 1953, James Watson and Francis Crick published the double helical structure of DNA, and in the decades since “the focus of developmental biology has been on DNA as the carrier of biological information, with cells thought to follow their own internal genetic programs, prompted by cues from their local environment and neighboring cells,” Huang said.

The rationale, according to Richard Nuccitelli, chief science officer at Pulse Biosciences and a former professor of molecular biology at the University of California, Davis, was that “since DNA is what is inherited, information stored in the genes must specify all that is needed to develop.” Tissues are told how to develop at the local level by neighboring tissues, it was thought, and each region patterns itself from information in the genomes of its cells.

The extreme form of this view is “to explain everything by saying ‘it is in the genes,’ or DNA, and this trend has been reinforced by the increasingly powerful and affordable DNA sequencing technologies,” Huang said. “But we need to zoom out: Before molecular biology imposed our myopic tunnel vision, biologists were much more open to organism-level principles.”

The tide now seems to be turning, according to Herrera-Rincon and others. “It’s too simplistic to consider the genome as the only source of biological information,” she said. Researchers continue to study morphogens as a source of developmental information in the nervous system, for example. Last November, Levin and Chris Fields, an independent scientist who works in the area where biology, physics and computing overlap, published a paper arguing that cells’ cytoplasm, cytoskeleton and both internal and external membranes also encode important patterning data—and serve as systems of inheritance alongside DNA.

And, crucially, bioelectricity has made a comeback as well. In the 1980s and ’90s, Nuccitelli, along with the late Lionel Jaffe at the Marine Biological Laboratory, Colin McCaig at the University of Aberdeen, and others, used applied electric fields to show that many cells are sensitive to bioelectric signals and that electricity can induce limb regeneration in nonregenerative species.

According to Masayuki Yamashita of the International University of Health and Welfare in Japan, many researchers forget that every living cell, not just neurons, generates electric potentials across the cell membrane. “This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration,” he said.

However, no one was really sure why or how this bioelectric signaling worked, said Levin, and most still believe that the flow of information is very local. “Applied electricity in earlier experiments directly interacts with something in cells, triggering their responses,” he said. But what it was interacting with and how the responses were triggered were mysteries.

That’s what led Levin and his colleagues to start tinkering with the resting potential of cells. By changing the voltage of cells in flatworms, over the last few years they produced worms with two heads, or with tails in unexpected places. In tadpoles, they reprogrammed the identity of large groups of cells at the level of entire organs, making frogs with extra legs and changing gut tissue into eyes—simply by hacking the local bioelectric activity that provides patterning information.

And because the brain and nervous system are so conspicuously active electrically, the researchers also began to probe their involvement in long-distance patterns of bioelectric information affecting development. In 2015, Levin, his postdoc Vaibhav Pai, and other collaborators showed experimentally that bioelectric signals from the body shape the development and patterning of the brain in its earliest stages. By changing the resting potential in the cells of tadpoles as far from the head as the gut, they appeared to disrupt the body’s “blueprint” for brain development. The resulting tadpoles’ brains were smaller or even nonexistent, and brain tissue grew where it shouldn’t.

Unlike previous experiments with applied electricity that simply provided directional cues to cells, “in our work, we know what we have modified—resting potential—and we know how it triggers responses: by changing how small signaling molecules enter and leave cells,” Levin said. The right electrical potential lets neurotransmitters go in and out of voltage-powered gates (transporters) in the membrane. Once in, they can trigger specific receptors and initiate further cellular activity, allowing researchers to reprogram identity at the level of entire organs.

Lucy Reading-Ikkanda/Quanta Magazine

This work also showed that bioelectricity works over long distances, mediated by the neurotransmitter serotonin, Levin said. (Later experiments implicated the neurotransmitter butyrate as well.) The researchers started by altering the voltage of cells near the brain, but then they went farther and farther out, “because our data from the prior papers showed that tumors could be controlled by electric properties of cells very far away,” he said. “We showed that cells at a distance mattered for brain development too.”

Then Levin and his colleagues decided to flip the experiment. Might the brain hold, if not an entire blueprint, then at least some patterning information for the rest of the body, Levin asked—and if so, might the nervous system disseminate this information bioelectrically during the earliest stages of a body’s development? He invited Herrera-Rincon to get her scalpel ready.

Making Up for a Missing Brain

Herrera-Rincon’s brainless Xenopus laevis tadpoles grew, but within just a few days they all developed highly characteristic defects—and not just near the brain, but as far away as the very end of their tails. Their muscle fibers were also shorter and their nervous systems, especially the peripheral nerves, were growing chaotically. It’s not surprising that nervous system abnormalities that impair movement can affect a developing body. But according to Levin, the changes seen in their experiment showed that the brain helps to shape the body’s development well before the nervous system is even fully developed, and long before any movement starts.

The body of a tadpole normally develops with a predictable structure (A). Removing a tadpole’s brain early in development, however, leads to abnormalities in tissues far from the head (B).

That such defects could be seen so early in the development of the tadpoles was intriguing, said Gil Carvalho, a neuroscientist at the University of Southern California. “An intense dialogue between the nervous system and the body is something we see very prominently post-development, of course,” he said. Yet the new data “show that this cross-talk starts from the very beginning. It’s a window into the inception of the brain-body dialogue, which is so central to most vertebrate life as we know it, and it’s quite beautiful.” The results also raise the possibility that these neurotransmitters may be acting at a distance, he added—by diffusing through the extracellular space, or going from cell to cell in relay fashion, after they have been triggered by a cell’s voltage changes.

Herrera-Rincon and the rest of the team didn’t stop there. They wanted to see whether they could “rescue” the developing body from these defects by using bioelectricity to mimic the effect of a brain. They decided to express a specific ion channel called HCN2, which acts differently in various cells but is sensitive to their resting potential. Levin likens the ion channel’s effect to a sharpening filter in photo-editing software, in that “it can strengthen voltage differences between adjacent tissues that help you maintain correct boundaries. It really strengthens the abilities of the embryos to set up the correct boundaries for where tissues are supposed to go.”

To make embryos express it, the researchers injected messenger RNA for HCN2 into some frog egg cells just a couple of hours after they were fertilized. A day later they removed the embryos’ brains, and over the next few days, the cells of the embryo acquired novel electrical activity from the HCN2 in their membranes.

The scientists found that this procedure rescued the brainless tadpoles from most of the usual defects. Because of the HCN2 it was as if the brain was still present, telling the body how to develop normally. It was amazing, Levin said, “to see how much rescue you can get just from very simple expression of this channel.” It was also, he added, the first clear evidence that the brain controls the development of the embryo via bioelectric cues.

As with Levin’s previous experiments with bioelectricity and regeneration, many biologists and neuroscientists hailed the findings, calling them “refreshing” and “novel.” “One cannot say that this is really a step forward because this work veers off the common path,” Huang said. But a single experiment with tadpoles’ brains is not enough, he added — it’s crucial to repeat the experiment in other organisms, including mammals, for the findings “to be considered an advance in a field and establish generality.” Still, the results open “an entire new domain of investigation and new way of thinking,” he said.

Experiments on tadpoles reveal the influence of the immature brain on other developing tissues, which appears to be electrical, according to Levin and his colleagues. Photo A shows the appearance of normal muscle in young tadpoles. In tadpoles that lack brains, the muscles fail to develop the correct form (B). But if the cells of brainless tadpoles are made to express ion channels that can restore the right voltage to the cells, the muscles develop more normally (C).
Celia Herrera-Rincon and Michael Levin

Levin’s research demonstrates that the nervous system plays a much more important role in how organisms build themselves than previously thought, said Min Zhao, a biologist at the University of California, Davis, and an expert on the biomedical application and molecular biophysics of electric-field effects in living tissues. Despite earlier experimental and clinical evidence, “this paper is the first one to demonstrate convincingly that this also happens in [the] developing embryo.”

“The results of Mike’s lab abolish the frontier, by demonstrating that electrical signaling from the central nervous system shapes early development,” said Olivier Soriani of the Institut de Biologie de Valrose CNRS. “The bioelectrical activity can now be considered as a new type of input encoding organ patterning, allowing large range control from the central nervous system.”

Carvalho observed that the work has obvious implications for the treatment and prevention of developmental malformations and birth defects—especially since the findings suggest that interfering with the function of a single neurotransmitter may sometimes be enough to prevent developmental issues. “This indicates that a therapeutic approach to these defects may be, at least in some cases, simpler than anticipated,” he said.

Levin speculates that in the future, we may not need to micromanage multitudes of cell-signaling events; instead, we may be able to manipulate how cells communicate with each other electrically and let them fix various problems.

Another recent experiment hinted at just how significant the developing brain’s bioelectric signal might be. Herrera-Rincon soaked frog embryos in common drugs that are normally harmless and then removed their brains. The drugged, brainless embryos developed severe birth defects, such as crooked tails and spinal cords. According to Levin, these results show that the brain protects the developing body against drugs that otherwise might be dangerous teratogens (compounds that cause birth defects). “The paradigm of thinking about teratogens was that each chemical is either a teratogen or is not,” Levin said. “Now we know that this depends on how the brain is working.”

These findings are impressive, but many questions remain, said Adam Cohen, a biophysicist at Harvard who studies bioelectrical signaling in bacteria. “It is still unclear precisely how the brain is affecting developmental patterning under normal conditions, meaning when the brain is intact.” To get those answers, researchers need to design more targeted experiments; for instance, they could silence specific neurons in the brain or block the release of specific neurotransmitters during development.

Although Levin’s work is gaining recognition, the emphasis he puts on electricity in development is far from universally accepted. Epigenetics and bioelectricity are important, but so are other layers of biology, Zhao said. “They work together to produce the biology we see.” More evidence is needed to shift the paradigm, he added. “We saw some amazing and mind-blowing results in this bioelectricity field, but the fundamental mechanisms are yet to be fully understood. I do not think we are there yet.”

But Nuccitelli says that for many biologists, Levin is on to something. For example, he said, Levin’s success in inducing the growth of misplaced eyes in tadpoles simply by altering the ion flux through the local tissues “is an amazing demonstration of the power of biophysics to control pattern formation.” The fact that Levin’s more than 300 papers have been cited upward of 10,000 times in almost 8,000 articles is also “a great indicator that his work is making a difference.”

The passage of time and the efforts of others carrying on Levin’s work will help his cause, suggested David Stocum, a developmental biologist and dean emeritus at Indiana University-Purdue University Indianapolis. “In my view, his ideas will eventually be shown to be correct and generally accepted as an important part of the framework of developmental biology.”

“We have demonstrated a proof of principle,” Herrera-Rincon said as she finished preparing another petri dish full of beanlike embryos. “Now we are working on understanding the underlying mechanisms, especially the meaning: What is the information content of the brain-specific information, and how much morphogenetic guidance does it provide?” She washed off the scalpel and took off her gloves and lab coat. “I have a million experiments in my mind.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/brainless-embryos-suggest-bioelectricity-guides-growth/