Brainless Embryos Suggest Bioelectricity Guides Growth

The tiny tadpole embryo looked like a bean. One day old, it didn’t even have a heart yet. The researcher in a white coat and gloves who hovered over it made a precise surgical incision where its head would form. Moments later, the brain was gone, but the embryo was still alive.

The brief procedure took Celia Herrera-Rincon, a neuroscience postdoc at the Allen Discovery Center at Tufts University, back to the country house in Spain where she had grown up, in the mountains near Madrid. When she was 11 years old, while walking her dogs in the woods, she found a snake, Vipera latastei. It was beautiful but dead. “I realized I wanted to see what was inside the head,” she recalled. She performed her first “lab test” using kitchen knives and tweezers, and she has been fascinated by the many shapes and evolutionary morphologies of the brain ever since. Her collection now holds about 1,000 brains from all kinds of creatures.

This time, however, she was not interested in the brain itself, but in how an African clawed frog would develop without one. She and her supervisor, Michael Levin, a software engineer turned developmental biologist, are investigating whether the brain and nervous system play a crucial role in laying out the patterns that dictate the shapes and identities of emerging organs, limbs and other structures.

For the past 65 years, the focus of developmental biology has been on DNA as the carrier of biological information. Researchers have typically assumed that genetic expression patterns alone are enough to determine embryonic development.

To Levin, however, that explanation is unsatisfying. “Where does shape come from? What makes an elephant different from a snake?” he asked. DNA can make proteins inside cells, he said, but “there is nothing in the genome that directly specifies anatomy.” To develop properly, he maintains, tissues need spatial cues that must come from other sources in the embryo. At least some of that guidance, he and his team believe, is electrical.

In recent years, by working on tadpoles and other simple creatures, Levin’s laboratory has amassed evidence that the embryo is molded by bioelectrical signals, particularly ones that emanate from the young brain long before it is even a functional organ. Those results, if replicated in other organisms, may change our understanding of the roles of electrical phenomena and the nervous system in development, and perhaps more widely in biology.

“Levin’s findings will shake some rigid orthodoxy in the field,” said Sui Huang, a molecular biologist at the Institute for Systems Biology. If Levin’s work holds up, Huang continued, “I think many developmental biologists will be stunned to see that the construction of the body plan is not due to local regulation of cells … but is centrally orchestrated by the brain.”

Bioelectrical Influences in Development

The Spanish neuroscientist and Nobel laureate Santiago Ramón y Cajal once called the brain and neurons, the electrically active cells that process and transmit nerve signals, the “butterflies of the soul.” The brain is a center for information processing, memory, decision making and behavior, and electricity figures into its performance of all of those activities.

But it’s not just the brain that uses bioelectric signaling—the whole body does. All cell membranes have embedded ion channels, protein pores that act as pathways for charged molecules, or ions. Differences between the number of ions inside and outside a cell result in an electric gradient—the cell’s resting potential. Vary this potential by opening or blocking the ion channels, and you change the signals transmitted to, from and among the cells all around. Neurons do this as well, but even faster: To communicate among themselves, they use molecules called neurotransmitters that are released at synapses in response to voltage spikes, and they send ultra-rapid electrical pulses over long distances along their axons, encoding information in the pulses’ pattern, to control muscle activity.
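
For readers who want to attach a number to “resting potential”: the voltage that a single ion species would impose on the membrane is given by the textbook Nernst equation (a standard relation, not something discussed in the article):

\[ E_X = \frac{RT}{zF}\,\ln\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}} \]

For potassium at body temperature (RT/F is about 26.7 millivolts and z = 1), typical concentrations of roughly 5 mM outside and 140 mM inside a cell give E_K ≈ 26.7 mV × ln(5/140) ≈ -89 mV, which is why the resting potential of many cells sits in the neighborhood of -70 to -90 mV. Opening or blocking other ion channels pulls the membrane voltage away from that value, and those shifts are the kind of signal Levin’s group manipulates.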

Levin has thought about hacking networks of neurons since the mid-1980s, when he was a high school student in the suburbs near Boston, writing software for pocket money. One day, while browsing a small bookstore in Vancouver at Expo 86 with his father, he spotted a volume called The Body Electric, by Robert O. Becker and Gary Selden. He learned that scientists had been investigating bioelectricity for centuries, ever since Luigi Galvani discovered in the 1780s that nerves are animated by what he called “animal electricity.”

However, as Levin continued to read up on the subject, he realized that, even though the brain uses electricity for information processing, no one seemed to be seriously investigating the role of bioelectricity in carrying information about a body’s development. Wouldn’t it be cool, he thought, if we could comprehend “how the tissues process information and what tissues were ‘thinking about’ before they evolved nervous systems and brains?”

He started digging deeper and ended up getting a biology doctorate at Harvard University in morphogenesis—the study of the development of shapes in living things. He worked in the tradition of scientists like Emil du Bois-Reymond, a 19th-century German physician who discovered the action potential of nerves. In the 1930s and ’40s, the American biologists Harold Burr and Elmer Lund measured electric properties of various organisms during their embryonic development and studied connections between bioelectricity and the shapes animals take. They were not able to prove a link, but they were moving in the right direction, Levin said.

Before Genes Reigned Supreme

The work of Burr and Lund occurred during a time of widespread interest in embryology. Even the English mathematician Alan Turing, famed for cracking the Enigma code, was fascinated by embryology. In 1952 he published a paper suggesting that body patterns like pigmented spots and zebra stripes arise from the chemical reactions of diffusing substances, which he called morphogens.

"This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration."

Masayuki Yamashita

But organic explanations like morphogens and bioelectricity didn’t stay in the limelight for long. In 1953, James Watson and Francis Crick published the double helical structure of DNA, and in the decades since “the focus of developmental biology has been on DNA as the carrier of biological information, with cells thought to follow their own internal genetic programs, prompted by cues from their local environment and neighboring cells,” Huang said.

The rationale, according to Richard Nuccitelli, chief science officer at Pulse Biosciences and a former professor of molecular biology at the University of California, Davis, was that “since DNA is what is inherited, information stored in the genes must specify all that is needed to develop.” Tissues are told how to develop at the local level by neighboring tissues, it was thought, and each region patterns itself from information in the genomes of its cells.

The extreme form of this view is “to explain everything by saying ‘it is in the genes,’ or DNA, and this trend has been reinforced by the increasingly powerful and affordable DNA sequencing technologies,” Huang said. “But we need to zoom out: Before molecular biology imposed our myopic tunnel vision, biologists were much more open to organism-level principles.”

The tide now seems to be turning, according to Herrera-Rincon and others. “It’s too simplistic to consider the genome as the only source of biological information,” she said. Researchers continue to study morphogens as a source of developmental information in the nervous system, for example. Last November, Levin and Chris Fields, an independent scientist who works in the area where biology, physics and computing overlap, published a paper arguing that cells’ cytoplasm, cytoskeleton and both internal and external membranes also encode important patterning data—and serve as systems of inheritance alongside DNA.

And, crucially, bioelectricity has made a comeback as well. In the 1980s and ’90s, Nuccitelli, along with the late Lionel Jaffe at the Marine Biological Laboratory, Colin McCaig at the University of Aberdeen, and others, used applied electric fields to show that many cells are sensitive to bioelectric signals and that electricity can induce limb regeneration in nonregenerative species.

According to Masayuki Yamashita of the International University of Health and Welfare in Japan, many researchers forget that every living cell, not just neurons, generates electric potentials across the cell membrane. “This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration,” he said.

However, no one was really sure why or how this bioelectric signaling worked, said Levin, and most still believe that the flow of information is very local. “Applied electricity in earlier experiments directly interacts with something in cells, triggering their responses,” he said. But what it was interacting with and how the responses were triggered were mysteries.

That’s what led Levin and his colleagues to start tinkering with the resting potential of cells. By changing the voltage of cells in flatworms, over the last few years they produced worms with two heads, or with tails in unexpected places. In tadpoles, they reprogrammed the identity of large groups of cells at the level of entire organs, making frogs with extra legs and changing gut tissue into eyes—simply by hacking the local bioelectric activity that provides patterning information.

And because the brain and nervous system are so conspicuously active electrically, the researchers also began to probe their involvement in long-distance patterns of bioelectric information affecting development. In 2015, Levin, his postdoc Vaibhav Pai, and other collaborators showed experimentally that bioelectric signals from the body shape the development and patterning of the brain in its earliest stages. By changing the resting potential in the cells of tadpoles as far from the head as the gut, they appeared to disrupt the body’s “blueprint” for brain development. The resulting tadpoles’ brains were smaller or even nonexistent, and brain tissue grew where it shouldn’t.

Unlike previous experiments with applied electricity that simply provided directional cues to cells, “in our work, we know what we have modified—resting potential—and we know how it triggers responses: by changing how small signaling molecules enter and leave cells,” Levin said. The right electrical potential lets neurotransmitters go in and out of voltage-powered gates (transporters) in the membrane. Once in, they can trigger specific receptors and initiate further cellular activity, allowing researchers to reprogram identity at the level of entire organs.

This work also showed that bioelectricity works over long distances, mediated by the neurotransmitter serotonin, Levin said. (Later experiments implicated the neurotransmitter butyrate as well.) The researchers started by altering the voltage of cells near the brain, but then they went farther and farther out, “because our data from the prior papers showed that tumors could be controlled by electric properties of cells very far away,” he said. “We showed that cells at a distance mattered for brain development too.”

Then Levin and his colleagues decided to flip the experiment. Might the brain hold, if not an entire blueprint, then at least some patterning information for the rest of the body, Levin asked—and if so, might the nervous system disseminate this information bioelectrically during the earliest stages of a body’s development? He invited Herrera-Rincon to get her scalpel ready.

Making Up for a Missing Brain

Herrera-Rincon’s brainless Xenopus laevis tadpoles grew, but within just a few days they all developed highly characteristic defects—and not just near the brain, but as far away as the very end of their tails. Their muscle fibers were also shorter and their nervous systems, especially the peripheral nerves, were growing chaotically. It’s not surprising that nervous system abnormalities that impair movement can affect a developing body. But according to Levin, the changes seen in their experiment showed that the brain helps to shape the body’s development well before the nervous system is even fully developed, and long before any movement starts.

The body of a tadpole normally develops with a predictable structure (A). Removing a tadpole’s brain early in development, however, leads to abnormalities in tissues far from the head (B).

That such defects could be seen so early in the development of the tadpoles was intriguing, said Gil Carvalho, a neuroscientist at the University of Southern California. “An intense dialogue between the nervous system and the body is something we see very prominently post-development, of course,” he said. Yet the new data “show that this cross-talk starts from the very beginning. It’s a window into the inception of the brain-body dialogue, which is so central to most vertebrate life as we know it, and it’s quite beautiful.” The results also raise the possibility that these neurotransmitters may be acting at a distance, he added—by diffusing through the extracellular space, or going from cell to cell in relay fashion, after they have been triggered by a cell’s voltage changes.

Herrera-Rincon and the rest of the team didn’t stop there. They wanted to see whether they could “rescue” the developing body from these defects by using bioelectricity to mimic the effect of a brain. They decided to express a specific ion channel called HCN2, which acts differently in various cells but is sensitive to their resting potential. Levin likens the ion channel’s effect to a sharpening filter in photo-editing software, in that “it can strengthen voltage differences between adjacent tissues that help you maintain correct boundaries. It really strengthens the abilities of the embryos to set up the correct boundaries for where tissues are supposed to go.”

To make embryos express it, the researchers injected messenger RNA for HCN2 into some frog egg cells just a couple of hours after they were fertilized. A day later they removed the embryos’ brains, and over the next few days, the cells of the embryo acquired novel electrical activity from the HCN2 in their membranes.

The scientists found that this procedure rescued the brainless tadpoles from most of the usual defects. Because of the HCN2 it was as if the brain was still present, telling the body how to develop normally. It was amazing, Levin said, “to see how much rescue you can get just from very simple expression of this channel.” It was also, he added, the first clear evidence that the brain controls the development of the embryo via bioelectric cues.

As with Levin’s previous experiments with bioelectricity and regeneration, many biologists and neuroscientists hailed the findings, calling them “refreshing” and “novel.” “One cannot say that this is really a step forward because this work veers off the common path,” Huang said. But a single experiment with tadpoles’ brains is not enough, he added — it’s crucial to repeat the experiment in other organisms, including mammals, for the findings “to be considered an advance in a field and establish generality.” Still, the results open “an entire new domain of investigation and new way of thinking,” he said.

Experiments on tadpoles reveal the influence of the immature brain on other developing tissues, which appears to be electrical, according to Levin and his colleagues. Photo A shows the appearance of normal muscle in young tadpoles. In tadpoles that lack brains, the muscles fail to develop the correct form (B). But if the cells of brainless tadpoles are made to express ion channels that can restore the right voltage to the cells, the muscles develop more normally (C).
Celia Herrera-Rincon and Michael Levin

Levin’s research demonstrates that the nervous system plays a much more important role in how organisms build themselves than previously thought, said Min Zhao, a biologist at the University of California, Davis, and an expert on the biomedical application and molecular biophysics of electric-field effects in living tissues. Despite earlier experimental and clinical evidence, “this paper is the first one to demonstrate convincingly that this also happens in [the] developing embryo.”

“The results of Mike’s lab abolish the frontier, by demonstrating that electrical signaling from the central nervous system shapes early development,” said Olivier Soriani of the Institut de Biologie de Valrose CNRS. “The bioelectrical activity can now be considered as a new type of input encoding organ patterning, allowing large range control from the central nervous system.”

Carvalho observed that the work has obvious implications for the treatment and prevention of developmental malformations and birth defects—especially since the findings suggest that interfering with the function of a single neurotransmitter may sometimes be enough to prevent developmental issues. “This indicates that a therapeutic approach to these defects may be, at least in some cases, simpler than anticipated,” he said.

Levin speculates that in the future, we may not need to micromanage multitudes of cell-signaling events; instead, we may be able to manipulate how cells communicate with each other electrically and let them fix various problems.

Another recent experiment hinted at just how significant the developing brain’s bioelectric signal might be. Herrera-Rincon soaked frog embryos in common drugs that are normally harmless and then removed their brains. The drugged, brainless embryos developed severe birth defects, such as crooked tails and spinal cords. According to Levin, these results show that the brain protects the developing body against drugs that otherwise might be dangerous teratogens (compounds that cause birth defects). “The paradigm of thinking about teratogens was that each chemical is either a teratogen or is not,” Levin said. “Now we know that this depends on how the brain is working.”

These findings are impressive, but many questions remain, said Adam Cohen, a biophysicist at Harvard who studies bioelectrical signaling in bacteria. “It is still unclear precisely how the brain is affecting developmental patterning under normal conditions, meaning when the brain is intact.” To get those answers, researchers need to design more targeted experiments; for instance, they could silence specific neurons in the brain or block the release of specific neurotransmitters during development.

Although Levin’s work is gaining recognition, the emphasis he puts on electricity in development is far from universally accepted. Epigenetics and bioelectricity are important, but so are other layers of biology, Zhao said. “They work together to produce the biology we see.” More evidence is needed to shift the paradigm, he added. “We saw some amazing and mind-blowing results in this bioelectricity field, but the fundamental mechanisms are yet to be fully understood. I do not think we are there yet.”

But Nuccitelli says that for many biologists, Levin is on to something. For example, he said, Levin’s success in inducing the growth of misplaced eyes in tadpoles simply by altering the ion flux through the local tissues “is an amazing demonstration of the power of biophysics to control pattern formation.” The fact that Levin’s more than 300 papers have been cited more than 10,000 times in almost 8,000 articles in the scientific literature is also “a great indicator that his work is making a difference.”

The passage of time and the efforts of others carrying on Levin’s work will help his cause, suggested David Stocum, a developmental biologist and dean emeritus at Indiana University-Purdue University Indianapolis. “In my view, his ideas will eventually be shown to be correct and generally accepted as an important part of the framework of developmental biology.”

“We have demonstrated a proof of principle,” Herrera-Rincon said as she finished preparing another petri dish full of beanlike embryos. “Now we are working on understanding the underlying mechanisms, especially the meaning: What is the information content of the brain-specific information, and how much morphogenetic guidance does it provide?” She washed off the scalpel and took off her gloves and lab coat. “I have a million experiments in my mind.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/brainless-embryos-suggest-bioelectricity-guides-growth/

How Cells Pack Tangled DNA Into Neat Chromosomes

A human cell carries in its nucleus two meters of spiraling DNA, split up among the 46 slender, double-helical molecules that are its chromosomes. Most of the time, that DNA looks like a tangled ball of yarn—diffuse, disordered, chaotic. But that messiness poses a problem during mitosis, when the cell has to make a copy of its genetic material and divide in two. In preparation, it tidies up by packing the DNA into dense, sausagelike rods, the chromosomes’ most familiar form. Scientists have watched that process through a microscope for decades: The DNA condenses and organizes into discrete units that gradually shorten and widen. But how the genome gets folded inside that structure—it’s clear that it doesn’t simply contract—has remained a mystery. “It’s really at the heart of genetics,” said Job Dekker, a biochemist at the University of Massachusetts Medical School, “a fundamental aspect of heredity that’s always been such a great puzzle.”

To solve that puzzle, Dekker teamed up with Leonid Mirny, a biophysicist at the Massachusetts Institute of Technology, and William Earnshaw, a biologist at the University of Edinburgh in Scotland. They and their colleagues used a combination of imaging, modeling and genomic techniques to understand how the condensed chromosome forms during cell division. Their results, published recently in Science and confirmed in part by experimental evidence reported by a European team in this week's issue of the journal, paint a picture in which two protein complexes sequentially organize the DNA into tight arrays of loops along a helical spine.

The researchers collected minute-by-minute data on chromosomes—using a microscope to see how they changed, as well as a technology called Hi-C, which provides a map of how frequently pairs of sequences in the genome interact with one another. They then generated sophisticated computer simulations to match that data, allowing them to calculate the three-dimensional path the chromosomes traced as they condensed.

Their models determined that in the lead-up to mitosis, a ring-shaped protein molecule called condensin II, composed of two connected motors, lands on the DNA. The two motors move in opposite directions along the strand while remaining attached to one another, causing a loop to form; as the motors continue to move, that loop gets larger and larger. (Mirny demonstrated the process for me by clasping a piece of his computer’s power cord with both hands, held knuckles to knuckles, through which he then proceeded to push a loop of cord.) As tens of thousands of these protein molecules do their work, a series of loops emerges. The ringlike proteins, positioned at the base of each loop, create a central scaffolding from which the loops emanate, and the entire chromosome becomes shorter and stiffer.

Those results lent support to the idea of loop extrusion, a prior proposal about how DNA is packaged. (Loop extrusion is also responsible for preventing duplicated chromosomes from becoming knotted and entangled, according to Mirny. The mechanics of the looped structure cause sister chromatids to repel each other.) But what the scientists observed next came as more of a surprise and allowed them to build further detail into the loop extrusion hypothesis.

After about 10 minutes, the nuclear envelope keeping the chromosomes together broke down, giving a second ring-shaped motor protein, condensin I, access to the DNA. Those molecules performed loop extrusion on the loops that had already formed, splitting each into around five smaller loops on average. Nesting loops in this way enabled the chromosome to become narrower and prevented the initial loops from growing large enough to mix or interact.
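
To make the two-stage choreography concrete, here is a deliberately simplified sketch in Python (written to illustrate this description, not taken from the researchers’ actual polymer simulations) of how a first wave of extruders might partition a stretch of chromatin into large loops and a second wave might then carve each of those into roughly five nested loops. The extruder counts and the 1-megabase region are arbitrary toy numbers, and real condensins act on a three-dimensional polymer rather than a line of coordinates.

import random

def extrude_loops(region_length, n_extruders, rng):
    """Place extruders at random positions and let each claim the interval out to
    the midpoints between it and its neighbors -- the end state of symmetric loop
    growth that stops when adjacent loops collide."""
    anchors = sorted(rng.sample(range(region_length), n_extruders))
    loops = []
    for i, anchor in enumerate(anchors):
        left = 0 if i == 0 else (anchors[i - 1] + anchor) // 2
        right = region_length if i == len(anchors) - 1 else (anchor + anchors[i + 1]) // 2
        loops.append((left, right))
    return loops

rng = random.Random(0)
region = 1_000_000  # a 1-megabase stretch of chromatin, in base pairs (toy scale)

# Stage 1: condensin II-like extruders lay down large outer loops.
outer = extrude_loops(region, n_extruders=10, rng=rng)

# Stage 2: condensin I-like extruders subdivide each outer loop into ~5 nested loops.
nested = []
for start, end in outer:
    for s, e in extrude_loops(end - start, n_extruders=5, rng=rng):
        nested.append((start + s, start + e))

print("outer loops:", len(outer), "mean size:", sum(e - s for s, e in outer) // len(outer))
print("nested loops:", len(nested), "mean size:", sum(e - s for s, e in nested) // len(nested))

The only bookkeeping point of the toy is the one the models make: after the second stage, every large loop has been split into smaller nested loops whose bases, in the real chromosome, would line up along the condensin scaffold.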

According to the researchers’ models, one major aspect of the chromosome’s folding process is the formation of nested loops. First, a ring-shaped motor protein (red) lands on DNA and extrudes a loop. Later, a second protein (blue) extrudes loops on top of that one. When many such molecules across the entire length of the DNA do this, the chromosome compacts.
Dr. Anton Goloborodko

After approximately 15 minutes, as these loops were forming, the Hi-C data showed something that the researchers found even more unexpected. Typically, sequences located close together along the string of DNA were most likely to interact, while those farther apart were less likely to do so. But the team’s measurements showed that “things [then] kind of came back again in a circle,” Mirny said. That is, once the distance between sequences had grown even further, they again had a higher probability of interacting. “It was obvious from the first glance at this data that we’d never seen something like this before,” he said. His model suggested that condensin II molecules assembled into a helical scaffold, as in the famous Leonardo staircase found in the Chambord Castle in France. The nested loops of DNA radiated out like steps from that spiraling scaffold, packing snugly into the cylindrical configuration that characterizes the chromosome.
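
One rough way to see why that pattern points to a spiral (an illustration of the logic, not the team’s actual analysis): if the loop anchors wind around a helical scaffold that consumes some genomic length P of DNA per turn, then two loci separated along the chromosome by a distance s close to P sit one full turn apart and end up physically adjacent again. The Hi-C contact probability, which ordinarily just decays as s grows, therefore picks up a second peak near s ≈ P, and the position of that peak reads out the pitch of the helix.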

“So this single process immediately solves three problems,” Mirny said. “It creates a scaffold. It linearly orders the chromosome. And it compacts it in such a way that it becomes an elongated object.”

“That was really surprising to us,” Dekker said—not only because they’d never observed the rotation of loops along a helical axis, but because the finding taps into a more fundamental debate. Namely, are chromosomes just a series of loops, or do they spiral? And if they do spiral, is it that the entire chromosome twists into a coil, or that only the internal scaffolding does? (The new study points to the latter; the researchers attribute the former helix-related hypothesis to experimental artifacts, the result of isolating chromosomes in a way that promoted excessive spiraling.) “Our work unifies many, many observations that people have collected over the years,” Dekker said.

“This [analysis] provides a revolutionary degree of clarity,” said Nancy Kleckner, a molecular biologist at Harvard University. “It takes us into another era of understanding how chromosomes are organized at these late stages.”

This series of images illustrates how a compacted chromosome takes shape. Ring-shaped motor proteins (red) form a helical scaffold. Folded loops of DNA emanate from that spiraling axis so that they can be packed tightly into a cylindrical rod.
Dr. Anton Goloborodko

Other experts in the field found those results less surprising, instead deeming the study more noteworthy for the details it provided. Hints of the general chromosomal assembly the researchers described were already “in the air,” according to Julien Mozziconacci, a biophysicist at Sorbonne University in France. The more novel aspects of the work, he said, lay in the researchers’ collection of Hi-C data as a function of time, which allowed them to pinpoint specific constraints, such as the sizes of the loops and helical turns. “I think this is a technical tour de force that allows us to see for the first time what people have been thinking,” he said.

Still, Dekker cautioned that, although it’s been known for some time that condensins are involved in this process—and despite the fact that his group has now identified more specific roles for those “molecular hands that cells use to fold chromosomes”—scientists still don’t understand exactly how they do it.

“If condensin is organizing mitotic chromosomes in this manner, how does it do so?” said Kim Nasmyth, a biochemist at the University of Oxford and a pioneer of the loop extrusion hypothesis. “Until we know the molecular mechanism, we can’t say for sure whether condensin is indeed the one driving all this.”

That’s where Christian Häring, a biochemist at the European Molecular Biology Laboratory in Germany, and Cees Dekker, a biophysicist (unrelated to Job Dekker) at Delft University of Technology in the Netherlands, enter the picture. Last year, they and their colleagues directly demonstrated for the first time that condensin does move along DNA in a test tube—a prerequisite for loop extrusion to be true. And in this week's issue of Science, they reported witnessing an isolated condensin molecule extruding a loop of DNA in yeast, in real time. “We finally have visual proof of this happening,” Häring said.

And it happened almost exactly as Mirny and his team predicted it would for the formation of their larger loops—except that in the in vitro experiment, the loops formed asymmetrically: The condensin landed on the DNA and reeled it in from only one side, rather than in both directions as Mirny initially assumed. (Since the experiments involved condensin from yeast, and only examined a single molecule at a time, they could neither confirm nor refute the other aspects of Mirny’s models, namely the nested loops and helical scaffold.)

Once researchers have completely unpacked that biochemistry—and conducted similar studies on how chromosomes unwind themselves—Job Dekker and Mirny think their work can lend itself to a range of practical and theoretical applications. For one, the research could inform potential cancer treatments. Cancer cells divide quickly and frequently, “so anything we know about that process can help specifically target those kinds of cells,” Dekker said.

It could also provide a window into what goes on in the chromosomes of cells that aren’t dividing. “It has wider implications for, I believe, any other thing the cell does with chromosomes,” Job Dekker said. The condensins he and his colleagues are studying have close relatives, called cohesins, that help with organizing the genome and creating loops even when the DNA isn’t getting compacted. That folding process could affect gene expression. Loop extrusion basically brings pairs of loci together, however briefly, at the base of the growing or shrinking loop—something that could very well be happening during gene regulation, when a gene has to be in physical contact with a regulatory element that may be located quite a distance away along the chromosome. “We now have such a powerful system to study this process,” Dekker said.

“I think there’s an incredible amount of synergy between the things we can learn at different parts of the cell cycle,” added Geoff Fudenberg, a postdoctoral researcher at the University of California, San Francisco, who previously worked in Mirny’s lab. Understanding how chromosomes undergo such a “dramatic transition” during mitosis, he said, could also reveal a lot about what they are doing “below the surface” when cells are not dividing and certain activities and behaviors are less clear.

Mirny points out that this type of folding could also provide insights into other processes in cells that involve active changes in shape or structure. Proteins get folded largely by interactions, while motor processes create the cytoskeleton in the cytoplasm. “Now we came to realize that chromosomes may be something in between,” Mirny said. “We need to gain a better understanding of how these types of active systems self-organize to create complex patterns and vital structures.”

Before that’s possible, the researchers will have to confirm and flesh out the solution they’ve proposed to what Job Dekker called a “great puzzle.” Kleckner has high hopes as well. “This work sets the foundation for a whole new way of thinking about what might be going on,” she said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/how-cells-pack-tangled-dna-into-neat-chromosomes/

3.5 Billion-Year-Old Fossils Challenge Ideas About Earth’s Start

In the arid, sun-soaked northwest corner of Australia, along the Tropic of Capricorn, the oldest face of Earth is exposed to the sky. Drive through the northern outback for a while, south of Port Hedland on the coast, and you will come upon hills softened by time. They are part of a region called the Pilbara Craton, which formed about 3.5 billion years ago, when Earth was in its youth.

Look closer. From a seam in one of these hills, a jumble of ancient, orange-Creamsicle rock spills forth: a deposit called the Apex Chert. Within this rock, viewable only through a microscope, there are tiny tubes. Some look like petroglyphs depicting a tornado; others resemble flattened worms. They are among the most controversial rock samples ever collected on this planet, and they might represent some of the oldest forms of life ever found.

Last month, researchers lobbed another salvo in the decades-long debate about the nature of these forms. They are indeed fossil life, and they date to 3.465 billion years ago, according to John Valley, a geochemist at the University of Wisconsin. If Valley and his team are right, the fossils imply that life diversified remarkably early in the planet’s tumultuous youth.

The fossils add to a wave of discoveries that point to a new story of ancient Earth. In the past year, separate teams of researchers have dug up, pulverized and laser-blasted pieces of rock that may contain life dating to 3.7, 3.95 and maybe even 4.28 billion years ago. All of these microfossils—or the chemical evidence associated with them—are hotly debated. But they all cast doubt on the traditional tale.

A sliver of a nearly 3.5-billion-year-old rock from the Apex Chert deposit in Western Australia (top). An example of one of the microfossils discovered in a sample of rock from the Apex Chert (bottom).
Jeff Miller (Epoxy mount); J. William Schopf, UCLA (Microfossil)

As that story goes, in the half-billion years after it formed, Earth was hellish and hot. The infant world would have been rent by volcanism and bombarded by other planetary crumbs, making for an environment so horrible, and so inhospitable to life, that the geologic era is named the Hadean, for the Greek underworld. Not until a particularly violent asteroid barrage ended some 3.8 billion years ago could life have evolved.

But this story is increasingly under fire. Many geologists now think Earth may have been tepid and watery from the outset. The oldest rocks in the record suggest parts of the planet’s crust had cooled and solidified by 4.4 billion years ago. Oxygen isotopes in those ancient rocks suggest the planet had water as far back as 4.3 billion years ago. And instead of an epochal, final bombardment, meteorite strikes might have slowly tapered off as the solar system settled into its current configuration.

“Things were actually looking a lot more like the modern world, in some respects, early on. There was water, potentially some stable crust. It’s not completely out of the question that there would have been a habitable world and life of some kind,” said Elizabeth Bell, a geochemist at the University of California, Los Angeles.

Taken together, the latest evidence from the ancient Earth and from the moon is painting a picture of a very different Hadean Earth: a stoutly solid, temperate, meteorite-clear and watery world, an Eden from the very beginning.

Ancient Clues

About 4.54 billion years ago, Earth was forming out of dust and rocks left over from the sun’s birth. Smaller solar leftovers continually pelted baby Earth, heating it up and endowing it with radioactive materials, which further warmed it from within. Oceans of magma covered Earth’s surface. Back then, Earth was not so much a rocky planet as an incandescent ball of lava.

Not long after Earth coalesced, a wayward planet whacked into it with incredible force, possibly vaporizing Earth anew and forming the moon. The meteorite strikes continued, some excavating craters 1,000 kilometers across. In the standard paradigm of the Hadean eon, these strikes culminated in an assault dubbed the Late Heavy Bombardment, also known as the lunar cataclysm, in which asteroids migrated into the inner solar system and pounded the rocky planets. Throughout this early era, ending about 3.8 billion years ago, Earth was molten and couldn’t support a crust of solid rock, let alone life.

But starting around a decade ago, this story started to change, thanks largely to tiny crystals called zircons. The gems, which are often about the size of the period at the end of this sentence, told of a cooler, wetter and maybe livable world as far back as 4.3 billion years ago. In recent years, fossils in ancient rock bolstered the zircons’ story of calmer climes. The tornadic microfossils of the Pilbara Craton are the latest example.

Today, the oldest evidence for possible life—which many scientists doubt or outright reject—is at least 3.77 billion years old and may be a stunningly ancient 4.28 billion years old.

In March 2017, Dominic Papineau, a geochemist at University College London, and his student Matthew Dodd described tubelike fossils in an outcrop in Quebec that dates to the basement of Earth’s history. The formation, called the Nuvvuagittuq (noo-voo-wog-it-tuck) Greenstone Belt, is a fragment of Earth’s primitive ocean floor. The fossils, about half the width of a human hair and just half a millimeter long, were buried within. They are made from an iron oxide called hematite and may be fossilized cities built by microbial communities up to 4.28 billion years ago, Dodd said.

The bright red rock in the Nuvvuagittuq Greenstone Belt appears to contain tube-shaped microfossils dating to at least 3.77 billion years ago.
Dominic Papineau

“They would have formed these gelatinous, rusty-red-colored mats on the rocks around the vents,” he said. Similar structures exist in today’s oceans, where communities of microbes and bloody-looking tube worms blossom around sunless, black-smoking chimneys.

Dodd found the tubes near graphite and with carbonate “rosettes,” tiny carbon rings that contain organic materials. The rosettes can form through various nonbiological processes, but Dodd also found a mineral called apatite, which he said is diagnostic of biological activity. The researchers also analyzed the variants, or isotopes, of carbon within the graphite. Generally, living things like to use the lighter isotopes, so an abundance of carbon 12 over carbon 13 can be used to infer past biological activity. The graphite near the rosettes also suggested the presence of life. Taken together, the tubes and their surrounding chemistry suggest they are remnants of a microbial community that lived near a deep-ocean hydrothermal vent, Dodd said.
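
The isotope comparison described here is conventionally reported as a delta value relative to a reference standard, a notation worth spelling out (standard geochemistry, not a detail given in the article):

\[ \delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000 \text{ per mil} \]

Because the enzymes of living things preferentially take up the lighter carbon-12, biologically processed carbon generally carries a distinctly more negative δ13C, often by a few tens of per mil, than carbon of purely geological origin, and that depletion is the signal these teams look for in ancient graphite.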

Geologists debate the exact age of the rock belt where they were found, but they agree it includes one of the oldest, if not the oldest, iron formations on Earth. This suggests the fossils are that old, too—much older than anything found previously and much older than many scientists had thought possible.

The microfossils resemble sea life that grows near deep-sea hydrothermal vents.
Matt Dodd

Then in September 2017, researchers in Japan published an examination of graphite flakes from a 3.95-billion-year-old sedimentary rock called the Saglek Block in Labrador, Canada. Yuji Sano and Tsuyoshi Komiya of the University of Tokyo argued their graphite’s carbon-isotope ratio indicates it, too, was made by life. But the graphite flakes were not accompanied by any feature that looked like a fossil; what’s more, the history of the surrounding rock is murky, suggesting the carbon may be younger than it appears.

Farther to the east, in southwestern Greenland, another team had also found evidence of ancient life. In August 2016, Allen Nutman of the University of Wollongong in Australia and colleagues reported finding stromatolites, fossil remains of microbes, from 3.7 billion years ago.

Allen Nutman prospecting for ancient microfossils in the Isua belt in southern Greenland.
Laure Gauthiez-Putallaz

Many geologists have been skeptical of each claim. Nutman’s fossils, for example, come from the Isua belt in southern Greenland, home to the oldest known sedimentary rocks on Earth. But the Isua belt is tough to interpret. Just as nonbiological processes can form Dodd’s carbon rosettes, basic chemistry can form plenty of layered structures without any help from life, suggesting they may not be stromatolites but lifeless pretenders.

In addition, both the Nuvvuagittuq Greenstone Belt and the Isua belt have been heated and squished over billions of years, a process that melts and recrystallizes the rocks, morphing them from their original sedimentary state.

“I don’t think any of those other studies are wrong, but I don’t think any of them are proof,” said Valley, the Wisconsin researcher. “All we can say is [Nutman’s rocks] look like stromatolites, and that’s very enticing.”

Regarding his work with the Pilbara Craton fossils, however, Valley is much less circumspect.

The stromatolites form small wavelike mounds in sedimentary rock. The vertical lines are cuts made by the researchers.
Allen Nutman/University of Wollongong

Signs of Life

The tornadic microfossils lay in the Pilbara Craton for 3.465 billion years before being separated from their natal rock, packed up in a box and shipped to California. Paleobiologist William Schopf of UCLA published his discovery of the strange squiggles in 1993 and identified 11 distinct microbial taxa in the samples. Critics said the forms could have been made in nonbiological processes, and geologists have argued back and forth in the years since. Last year, Schopf sent a sample to Valley, who is an expert with a super-sensitive instrument for measuring isotope ratios called a secondary ion mass spectrometer.

Valley’s team found that some of the apparent fossils had the same carbon-isotope ratio as modern photosynthetic bacteria. Three other types of fossils had the same ratios as methane-eating or methane-producing microbes. Moreover, the isotope ratios correlate to specific species that had already been identified by Schopf. The locations where these isotope ratios were measured corresponded to the shapes of the microfossils themselves, Valley said, adding they are the oldest samples that look like fossils both physically and chemically.

John Valley in his mass spectrometer laboratory at the University of Wisconsin, Madison.
Jeff Miller/UW-Madison

While they are not the oldest samples in the record—supposing you accept the provenance of the rocks described by Dodd, Komiya and Nutman—Schopf’s and Valley’s cyclonic miniatures do have an important distinction: They are diverse. The presence of so many different carbon isotope ratios suggests the rock represents a complex community of primitive organisms. The life-forms must have had time to evolve into endless iterations. This means they must have originated even earlier than 3.465 billion years ago. And that means our oldest ancestors are very, very old indeed.

Watery World

Fossils were not the first sign that early Earth might have been Edenic rather than hellish. The rocks themselves started providing that evidence as far back as 2001. That year, Valley found zircons that suggested the planet had a crust as far back as 4.4 billion years ago.

Zircons are crystalline minerals containing silicon, oxygen, zirconium and sometimes other elements. They form inside magma, and like some better-known carbon crystals, zircons are forever—they can outlast the rocks they form in and withstand eons of unspeakable pressure, erosion and deformation. As a result, they are the only rocks left over from the Hadean, making them invaluable time capsules.

Valley chipped some out of Western Australia’s Jack Hills and found oxygen isotopes that suggested the crystal formed from material that was altered by liquid water. This suggested part of Earth’s crust had cooled, solidified and harbored water at least 400 million years earlier than the earliest known sedimentary rocks. If there was liquid water, there were likely entire oceans, Valley said. Other zircons showed the same thing.

“The Hadean was not hell-like. That’s what we learned from the zircons. Sure, there were volcanoes, but they were probably surrounded by oceans. There would have been at least some dry land,” he said.

Zircons suggest there may even have been life.

In research published in 2015, Bell and her coauthors presented evidence for graphite embedded within a tiny, 4.1-billion-year-old zircon crystal from the same Jack Hills. The graphite’s blend of carbon isotopes hints at biological origins, although the finding is—once again—hotly debated.

“Are there other explanations than life? Yeah, there are,” Bell said. “But this is what I would consider the most secure evidence for some sort of fossil or biogenic structure.”

An X-ray of a 4.1-billion-year-old sample of zircon reveals dark spots made by carbon deposits.
Crystal Shi/Stanford University Department of Earth, Energy, and Environmental Sciences

If the signals in the ancient rocks are true, they are telling us that life was everywhere, always. In almost every place scientists look, they are finding evidence of life and its chemistry, whether it is in the form of fossils themselves or the remnants of life’s long-ago stirrings. Far from fussy and delicate, life may have taken hold in the worst conditions imaginable.

“Life was managing to do interesting things at the same time Earth was dealing with the worst impacts it’s ever had,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado.

Or maybe not. Maybe Earth was just fine. Maybe those impacts weren’t quite as rapid-fire as everyone thought.

Evidence for a Beating

We know Earth, and everything else, was bombarded by asteroids in the past. The moon, Mars, Venus and Mercury all bear witness to this primordial pummeling. The question is when, and for how long.

Based largely on Apollo samples toted home by moonwalking astronauts, scientists came to believe that in the Earth’s Hadean age, there were at least two distinct epochs of solar system billiards. The first was the inevitable side effect of planet making: It took some time for the planets to sweep up the biggest asteroids and for Jupiter to gather the rest into the main asteroid belt.

The second came later. It began sometime between 500 and 700 million years after the solar system was born and finally tapered off around 3.8 billion years ago. That one is called the Late Heavy Bombardment, or the lunar cataclysm.

As with most things in geochemistry, evidence for a world-rending blitz, an event on the hugest scales imaginable, is derived from the very, very small. Isotopes of potassium and argon in Apollo samples suggested bits of the moon suddenly melted some 500 million years after it formed. This was taken as evidence that it was blasted within an inch of its life.
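
The logic behind those potassium-argon ages is worth sketching (a standard geochronology relation, not spelled out in the article): potassium-40 decays partly into argon-40, a gas that escapes whenever rock melts, so melting resets the clock. The time since a sample last solidified is

\[ t = \frac{1}{\lambda} \ln\!\left( 1 + \frac{\lambda}{\lambda_{\mathrm{Ar}}} \cdot \frac{{}^{40}\mathrm{Ar}^{*}}{{}^{40}\mathrm{K}} \right) \]

where λ is the total decay constant of potassium-40, λ_Ar is the rate of the branch that yields argon, and ⁴⁰Ar* is the radiogenic argon still trapped in the sample. A cluster of Apollo rocks whose clocks all seemed to have been reset around 500 million years after the moon formed is what was read as evidence of a single cataclysmic pummeling.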

Zircons also provide tentative physical evidence of a late-era hellscape. Some zircons do contain “shocked” minerals, evidence for extreme heat and pressure that can be indicative of something horrendous. Many are younger than 3 billion years, but Bell found one zircon suggesting rapid, extreme heating around 3.9 billion years ago—a possible signature of the Late Heavy Bombardment. “All we know is there is a group of recrystallized zircons at this time period. Given the coincidence with the Late Heavy Bombardment, it was too hard not to say that maybe this is connected,” she said. “But to really establish that, we will need to look at zircon records at other localities around the planet.”

So far, there are no other signs, said Aaron Cavosie of Curtin University in Australia.

Craters on the moon have been taken as evidence for the Late Heavy Bombardment, but reassessments of the geological evidence from Apollo moon rocks cast doubt on whether the asteroid bombardments during the Hadean era were as severe as was thought.
NASA

Moon Rocks

In 2016 Patrick Boehnke, now at the University of Chicago, took another look at those original Apollo samples, which for decades have been the main evidence in favor of the Late Heavy Bombardment. He and UCLA’s Mark Harrison reanalyzed the argon isotopes and concluded that the Apollo rocks may have been walloped many times since they crystallized from the natal moon, which could make the rocks seem younger than they really are.

“Even if you solve the analytical problems,” said Boehnke, “then you still have the problem that the Apollo samples are all right next to each other.” There’s a chance that astronauts from the six Apollo missions sampled rocks from a single asteroid strike whose ejecta spread throughout the Earth-facing side of our satellite.

In addition, moon-orbiting probes like the Gravity Recovery and Interior Laboratory (GRAIL) spacecraft and the Lunar Reconnaissance Orbiter have found around 100 previously unknown craters, along with evidence of a spike in impacts as early as 4.3 billion years ago.

“This interesting confluence of orbital data and sample data, and all different kinds of sample data—lunar impact glass, Luna samples, Apollo samples, lunar meteorites—they are all coming together and pointing to something that is not a cataclysmic spike at 3.9 billion years ago,” said Nicolle Zellner, a planetary scientist at Albion College in Michigan.

Bottke, who studies asteroids and solar system dynamics, is one of several researchers coming up with modified explanations. He now favors a slow uptick in bombardment, followed by a gradual decline. Others think there was no late bombardment, and instead the craters on the moon and other rocky bodies are remnants from the first type of billiards, the natural process of planet building.

“We have a tiny sliver of data, and we’re trying to do something with it,” he said. “You try to build a story, and sometimes you are just chasing ghosts.”

Life Takes Hold

While that debate plays out, scientists will be wrestling with much bigger questions than early solar-system dynamics.

If some of the new evidence truly represents impressions of primeval life, then our ancestors may be much older than we thought. Life might have arisen the moment the planet was amenable to it—the moment it cooled enough to hold liquid water.

“I was taught when I was young that it would take billions and billions of years for life to form. But I have not been able to find any basis for those sorts of statements,” said Valley. “I think it’s quite possible that life emerged within a few million years of when conditions became habitable. From the point of view of a microbe, a million years is a really long time, yet that’s a blink of an eye in geologic time.”

“There is no reason life could not have emerged at 4.3 billion years ago,” he added. “There is no reason.”

If there was no mass sterilization at 3.9 billion years ago, or if a few massive asteroid strikes confined the destruction to a single hemisphere, then Earth’s oldest ancestors may have been here from the haziest days of the planet’s own birth. And that, in turn, makes the notion of life elsewhere in the cosmos seem less implausible. Life might be able to withstand horrendous conditions much more readily than we thought. It might not need much time at all to take hold. It might arise early and often and may pepper the universe yet. Its endless forms, from tubemaking microbes to hunkering slime, may be too small or simple to communicate the way life does on Earth—but they would be no less real and no less alive.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/35-billion-year-old-fossils-challenge-ideas-about-earths-start/

Researchers share $22m Breakthrough prize as science gets rock star treatment

Glitzy ceremony honours work including that on mapping post-big bang primordial light, cell biology, plant science and neurodegenerative diseases

The most glitzy event on the scientific calendar took place on Sunday night when the Breakthrough Foundation gave away $22m (£16.3m) in prizes to dozens of physicists, biologists and mathematicians at a ceremony in Silicon Valley.

The winners this year include five researchers who won $3m (£2.2m) each for their work on cell biology, plant science and neurodegenerative diseases, two mathematicians, and a team of 27 physicists who mapped the primordial light that warmed the universe moments after the big bang 13.8 billion years ago.

Now in their sixth year, the Breakthrough prizes are backed by Yuri Milner, a Silicon Valley tech investor, Mark Zuckerberg of Facebook and his wife Priscilla Chan, Anne Wojcicki from the DNA testing company 23andMe, and Google’s Sergey Brin. Launched by Milner in 2012, the awards aim to make rock stars of scientists and raise their profile in the public consciousness.

The annual ceremony at Nasa’s Ames Research Center in California provides a rare opportunity for some of the world’s leading minds to rub shoulders with celebrities, who this year included Morgan Freeman as host, fellow actors Kerry Washington and Mila Kunis, and Miss USA 2017 Kára McCullough. When Joe Polchinski at the University of California in Santa Barbara shared the physics prize last year, he conceded his nieces and nephews would know more about the A-list attendees than he would.

Oxford University geneticist Kim Nasmyth won for his work on chromosomes but said he had not worked out what to do with the windfall. “It’s a wonderful bonus, but not something you expect,” he said. “It’s a huge amount of money; I haven’t had time to think it through.” On being recognised for what amounts to his life’s work, he added: “You have to do science because you want to know, not because you want to get recognition. If you do what it takes to please other people, you’ll lose your moral compass.” Nasmyth has won lucrative awards before and channelled some of his winnings into Gregor Mendel’s former monastery in Brno.

Another life sciences prizewinner, Joanne Chory at the Salk Institute in San Diego, was honoured for three decades of painstaking research into the genetic programs that flip into action when plants find themselves plunged into shade. Her work revealed that plants can sense when a nearby competitor is about to steal their light, sparking a growth spurt in response. The plants detect threatening neighbours by sensing a surge in the particular wavelengths of red light that are given off by vegetation.

Chory now has ambitious plans to breed plants that can suck vast quantities of carbon dioxide out of the atmosphere in a bid to combat climate change. She believes that crops could be selected to absorb 20 times more of the greenhouse gas than they do today, and convert it into suberin, a waxy material found in roots and bark that breaks down incredibly slowly in soil. "If we can do this on 5% of the landmass people are growing crops on, we can take out 50% of global human emissions," she said.

Three other life sciences prizes went to Kazutoshi Mori at Kyoto University and Peter Walter for their work on quality control mechanisms that keep cells healthy, and to Don Cleveland at the University of California, San Diego, for his research on motor neurone disease.

The $3m Breakthrough prize in mathematics was shared by two British-born mathematicians, Christopher Hacon at the University of Utah and James McKernan at the University of California in San Diego. The pair made major contributions to a field of mathematics known as birational algebraic geometry, which sets the rules for projecting abstract objects with more than 1,000 dimensions onto lower-dimensional surfaces. "It gets very technical, very quickly," said McKernan.

Speaking before the ceremony, Hacon was feeling a little unnerved. "It's really not a mathematician kind of thing, but I'll probably survive," he said. "I've got a tux ready, but I'm not keen on wearing it." Asked what he might do with his share of the winnings, Hacon was nothing if not realistic. "I'll start by paying taxes," he said. "And I have six kids, so the rest will evaporate."

Chuck Bennett, an astrophysicist at Johns Hopkins University in Baltimore, led a Nasa mission known as the Wilkinson Microwave Anisotropy Probe (WMAP) to map the faint afterglow of the big bang's radiation that now permeates the universe. The achievement, now more than a decade old, won the 27-strong science team the $3m Breakthrough prize in fundamental physics. "When we made our first maps of the sky, I thought, these are beautiful," Bennett told the Guardian. "It is still absolutely amazing to me. We can look directly back in time."

Bennett believes that the prizes may help raise the profile of science at a time when it is sorely needed. "The point is not to make rock stars of us, but of the science itself," he said. "I don't think people realise how big a role science plays in their lives. In everything you do, from the moment you wake up to the moment you go to sleep, there's something about what you're doing that involves scientific advances. I don't think people think about that at all."

Read more: https://www.theguardian.com/science/2017/dec/04/researchers-share-22m-breakthrough-prize-as-science-gets-rock-star-treatment

Your animal life is over. Machine life has begun. The road to immortality

In California, radical scientists and billionaire backers think the technology to extend life by uploading minds to exist separately from the body is only a few years away

Here's what happens. You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider's legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can.

You're in pretty deep with this thing; there's no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer's hardware. As the work proceeds, another mechanical appendage, less delicate, less careful, removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing.

At some point, you become aware that you are no longer present in your body. You observe with sadness, or horror, or detached curiosity the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.

The animal life is over now. The machine life has begun.

This, more or less, is the scenario outlined by Hans Moravec, a professor of cognitive robotics at Carnegie Mellon, in his 1988 book Mind Children: The Future of Robot and Human Intelligence. It is Moravec's conviction that the future of the human species will involve a mass-scale desertion of our biological bodies, effected by procedures of this kind. It's a belief shared by many transhumanists, a movement whose aim is to improve our bodies and minds to the point where we become something other and better than the animals we are. Ray Kurzweil, for one, is a prominent advocate of the idea of mind-uploading. "An emulation of the human brain running on an electronic system," he writes in The Singularity Is Near, "would run much faster than our biological brains. Although human brains benefit from massive parallelism (on the order of 100 trillion interneuronal connections, all potentially operating simultaneously), the reset time of the connections is extremely slow compared to contemporary electronics." The technologies required for such an emulation (sufficiently powerful and capacious computers and sufficiently advanced brain-scanning techniques) will be available, he announces, by the early 2030s.
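For a rough sense of the gap Kurzweil is pointing at, here is a back-of-envelope comparison in Python. The 100 trillion connections come from the passage above; the neural firing rate and the chip figures are round illustrative numbers of my own, not Kurzweil's.

# Rough comparison of raw switching throughput. The connection count is from the
# passage above; the other figures are illustrative assumptions, not Kurzweil's.
synapses, neural_rate = 1e14, 1e2      # connections, and ~100 events per second each
transistors, clock_rate = 1e10, 1e9    # a modern chip: ~1e10 switches at ~1 GHz

brain_events = synapses * neural_rate      # ~1e16 events per second in total
chip_events = transistors * clock_rate     # ~1e19 switching events per second

print(f"brain ~{brain_events:.0e} events/s, chip ~{chip_events:.0e} events/s")
print(f"per connection, electronics switches ~{clock_rate / neural_rate:.0e} times faster")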

And this, obviously, is no small claim. We are talking about not just radically extended life spans, but also radically expanded cognitive abilities. We are talking about endless copies and iterations of the self. Having undergone a procedure like this, you would exist, to the extent you could meaningfully be said to exist at all, as an entity of unbounded possibilities.

I was introduced to Randal Koene at a Bay Area transhumanist conference. He wasn't speaking at the conference, but had come along out of personal interest. A cheerfully reserved man in his early 40s, he spoke in the punctilious staccato of a non-native English speaker who had long mastered the language. As we parted, he handed me his business card and much later that evening I removed it from my wallet and had a proper look at it. The card was illustrated with a picture of a laptop, on whose screen was displayed a stylised image of a brain. Underneath was printed what seemed to me an attractively mysterious message: "Carboncopies: Realistic Routes to Substrate Independent Minds. Randal A Koene, founder."

I took out my laptop and went to the website of Carboncopies, which I learned was a nonprofit organisation with a goal of "advancing the reverse engineering of neural tissue and complete brains, Whole Brain Emulation and development of neuroprostheses that reproduce functions of mind, creating what we call Substrate Independent Minds". This latter term, I read, was "the objective to be able to sustain person-specific functions of mind and experience in many different operational substrates besides the biological brain". And this, I further learned, was a process "analogous to that by which platform independent code can be compiled and run on many different computing platforms".

It seemed that I had met, without realising it, a person who was actively working toward the kind of brain-uploading scenario that Kurzweil had outlined in The Singularity Is Near. And this was a person I needed to get to know.

Randal
Randal Koene: "It wasn't like I was walking into labs, telling people I wanted to upload human minds to computers."

Koene was an affable and precisely eloquent man and his conversation was unusually engaging for someone so forbiddingly intelligent and who worked in so rarefied a field as computational neuroscience; so, in his company, I often found myself momentarily forgetting about the nearly unthinkable implications of the work he was doing, the profound metaphysical weirdness of the things he was explaining to me. He'd be talking about some tangential topic, his happily cordial relationship with his ex-wife, say, or the cultural differences between European and American scientific communities, and I'd remember with a slow, uncanny suffusion of unease that his work, were it to yield the kind of results he is aiming for, would amount to the most significant event since the evolution of Homo sapiens. The odds seemed pretty long from where I was standing, but then again, I reminded myself, the history of science was in many ways an almanac of highly unlikely victories.

One evening in early spring, Koene drove down to San Francisco from the North Bay, where he lived and worked in a rented ranch house surrounded by rabbits, to meet me for dinner in a small Argentinian restaurant on Columbus Avenue. The faint trace of an accent turned out to be Dutch. Koene was born in Groningen and had spent most of his early childhood in Haarlem. His father was a particle physicist and there were frequent moves, including a two-year stint in Winnipeg, as he followed his work from one experimental nuclear facility to the next.

Now a boyish 43, he had lived in California only for the past five years, but had come to think of it as home, or the closest thing to home he'd encountered in the course of a nomadic life. And much of this had to do with the culture of techno-progressivism that had spread outward from its concentrated origins in Silicon Valley and come to encompass the entire Bay Area, with its historically high turnover of radical ideas. It had been a while now, he said, since he'd described his work to someone, only for them to react as though he were making a misjudged joke or simply to walk off mid-conversation.

In his early teens, Koene began to conceive of the major problem with the human brain in computational terms: it was not, like a computer, readable and rewritable. You couldn't get in there and enhance it, make it run more efficiently, like you could with lines of code. You couldn't just speed up a neuron like you could with a computer processor.

Around this time, he read Arthur C Clarke's The City and the Stars, a novel set a billion years from now, in which the enclosed city of Diaspar is ruled by a superintelligent Central Computer, which creates bodies for the city's posthuman citizens and stores their minds in its memory banks at the end of their lives, for purposes of reincarnation. Koene saw nothing in this idea of reducing human beings to data that seemed to him implausible and felt nothing in himself that prevented him from working to bring it about. His parents encouraged him in this peculiar interest and the scientific prospect of preserving human minds in hardware became a regular topic of dinnertime conversation.

Computational neuroscience, which drew its practitioners not from biology but from the fields of mathematics and physics, seemed to offer the most promising approach to the problem of mapping and uploading the mind. It wasn't until he began using the internet in the mid-1990s, though, that he discovered a loose community of people with an interest in the same area.

As a PhD student in computational neuroscience at Montreals McGill University, Koene was initially cautious about revealing the underlying motivation for his studies, for fear of being taken for a fantasist or an eccentric.

"I didn't hide it, as such," he said, "but it wasn't like I was walking into labs, telling people I wanted to upload human minds to computers either. I'd work with people on some related area, like the encoding of memory, with a view to figuring out how that might fit into an overall road map for whole brain emulation."

Having worked for a while at Halcyon Molecular, a Silicon Valley gene-sequencing and nanotechnology startup funded by Peter Thiel, he decided to stay in the Bay Area and start his own nonprofit company aimed at advancing the cause to which he'd long been dedicated: Carboncopies.

Koene's decision was rooted in the very reason he began pursuing that work in the first place: an anxious awareness of the small and diminishing store of days that remained to him. If he'd gone the university route, he'd have had to devote most of his time, at least until securing tenure, to projects that were at best tangentially relevant to his central enterprise. The path he had chosen was a difficult one for a scientist and he lived and worked from one small infusion of private funding to the next.

But Silicon Valley's culture of radical techno-optimism had been its own sustaining force for him, and a source of financial backing for a project that took its place within the wildly aspirational ethic of that cultural context. There were people there, or thereabouts, wealthy and influential, for whom a future in which human minds might be uploaded to computers was one to be actively sought, a problem to be solved, disruptively innovated, by the application of money.

Transcendence
Brainchild of the movies: in Transcendence (2014), scientist Will Caster, played by Johnny Depp, uploads his mind to a computer program with dangerous results.

One such person was Dmitry Itskov, a 36-year-old Russian tech multimillionaire and founder of the 2045 Initiative, an organisation whose stated aim was to create technologies enabling "the transfer of an individual's personality to a more advanced nonbiological carrier, and extending life, including to the point of immortality". One of Itskov's projects was the creation of avatars: artificial humanoid bodies that would be controlled through brain-computer interface, technologies that would be complementary with uploaded minds. He had funded Koene's work with Carboncopies and in 2013 they organised a conference in New York called Global Futures 2045, aimed, according to its promotional blurb, at the discussion of "a new evolutionary strategy for humanity".

When we spoke, Koene was working with another tech entrepreneur named Bryan Johnson, who had sold his automated payment company to PayPal a couple of years back for $800m and who now controlled a venture capital concern called the OS Fund, which, I learned from its website, invests in entrepreneurs working towards "quantum leap discoveries that promise to rewrite the operating systems of life". This language struck me as strange and unsettling in a way that revealed something crucial about the attitude toward human experience that was spreading outward from its Bay Area centre: a cluster of software metaphors that had metastasised into a way of thinking about what it meant to be a human being.

And it was the same essential metaphor that lay at the heart of Koene's project: the mind as a piece of software, an application running on the platform of flesh. When he used the term emulation, he was using it explicitly to evoke the sense in which a PC's operating system could be emulated on a Mac, as what he called "platform independent code".

The relevant science for whole brain emulation is, as you'd expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: first, you scan the pertinent information in a person's brain (the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a byproduct) through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc). That scan then becomes a blueprint for the reconstruction of the subject brain's neural networks, which is then converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer or a humanoid machine designed to reproduce and extend the experience of embodiment, something, perhaps, like Natasha Vita-More's Primo Posthuman.
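To make the three-stage structure of that oversimplification easier to see, here is a purely illustrative sketch in Python. Every function and data structure in it is hypothetical; nothing here corresponds to an existing tool or technique.

# Purely illustrative sketch of the three stages described above: scan, reconstruct,
# emulate. All names and structures are hypothetical placeholders.
def scan_brain(subject):
    """Stage 1: capture connectivity and activity, by whatever technology comes first."""
    return {"neurons": [], "connections": []}   # placeholder for scan data

def build_model(scan_data):
    """Stage 2: turn the scanned blueprint into a computational model of the network."""
    return {"network": scan_data}

def emulate(model, substrate="supercomputer"):
    """Stage 3: run the model on a non-biological substrate, the 'platform-independent' step."""
    print(f"running emulation on {substrate}")

emulate(build_model(scan_brain("subject")))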

The whole point of substrate independence, as Koene pointed out to me whenever I asked him what it would be like to exist outside of a human body (and I asked him many times, in various ways) was that it would be like no one thing, because there would be no one substrate, no one medium of being. This was the concept transhumanists referred to as morphological freedom: the liberty to take any bodily form technology permits.

"You can be anything you like," as an article about uploading in Extropy magazine put it in the mid-90s. "You can be big or small; you can be lighter than air and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling."

What really interested me about this idea was not how strange and far-fetched it seemed (though it ticked those boxes resolutely enough), but rather how fundamentally identifiable it was, how universal. When talking to Koene, I was mostly trying to get to grips with the feasibility of the project and with what it was he envisioned as a desirable outcome. But then we would part company, I would hang up the call, or I would take my leave and start walking toward the nearest station, and I would find myself feeling strangely affected by the whole project, strangely moved.

Because there was something, in the end, paradoxically and definitively human in this desire for liberation from human form. I found myself thinking often of WB Yeats's Sailing to Byzantium, in which the ageing poet writes of his burning to be free of the weakening body, the sickening heart, to abandon the dying animal for the man-made and immortal form of a mechanical bird. "Once out of nature," he writes, "I shall never take / My bodily form from any natural thing / But such a form as Grecian goldsmiths make."

One evening, we were sitting outside a combination bar/laundromat/standup comedy venue in Folsom Street, a place with the fortuitous name of BrainWash, when I confessed that the idea of having my mind uploaded to some technological substrate was deeply unappealing to me, horrifying even. The effects of technology on my life, even now, were something about which I was profoundly ambivalent; for all I had gained in convenience and connectedness, I was increasingly aware of the extent to which my movements in the world were mediated and circumscribed by corporations whose only real interest was in reducing the lives of human beings to data, as a means to further reducing us to profit.

The content we consumed, the people with whom we had romantic encounters, the news we read about the outside world: all these movements were coming increasingly under the influence of unseen algorithms, the creations of these corporations, whose complicity with government, moreover, had come to seem like the great submerged narrative of our time. Given the world we were living in, where the fragile liberal ideal of the autonomous self was already receding like a half-remembered dream into the doubtful haze of history, wouldn't a radical fusion of ourselves with technology amount, in the end, to a final capitulation of the very idea of personhood?

Koene nodded again and took a sip of his beer.

"Hearing you say that," he said, "makes it clear that there's a major hurdle there for people. I'm more comfortable than you are with the idea, but that's because I've been exposed to it for so long that I've just got used to it."

Dmitry
Russian billionaire Dmitry Itskov wants to create technologies enabling "the transfer of an individual's personality to a more advanced nonbiological carrier". Photograph: Mary Altaffer/AP

In the weeks and months after I returned from San Francisco, I thought obsessively about the idea of whole brain emulation. One morning, I was at home in Dublin, suffering from both a head cold and a hangover. I lay there, idly considering hauling myself out of bed to join my wife and my son, who were in his bedroom next door enjoying a raucous game of Buckaroo. I realised that these conditions (head cold, hangover) had imposed upon me a regime of mild bodily estrangement. As often happens when I'm feeling under the weather, I had a sense of myself as an irreducibly biological thing, an assemblage of flesh and blood and gristle. I felt myself to be an organism with blocked nasal passages, a bacteria-ravaged throat, a sorrowful ache deep within its skull, its cephalon. I was aware of my substrate, in short, because my substrate felt like shit.

And I was gripped by a sudden curiosity as to what, precisely, that substrate consisted of, as to what I myself happened, technically speaking, to be. I reached across for the phone on my nightstand and entered into Google the words "What is the human…" The first three autocomplete suggestions offered "What is The Human Centipede about", and then: "What is the human body made of", and then: "What is the human condition".

It was the second question I wanted answered at this particular time, as perhaps a back door into the third. It turned out that I was 65% oxygen, which is to say that I was mostly air, mostly nothing. After that, I was composed of diminishing quantities of carbon and hydrogen, of calcium and sulphur and chlorine, and so on down the elemental table. I was also mildly surprised to learn that, like the iPhone I was extracting this information from, I also contained trace elements of copper and iron and silicon.

What a piece of work is a man, I thought, what a quintessence of dust.

Some minutes later, my wife entered the bedroom on her hands and knees, our son on her back, gripping the collar of her shirt tight in his little fists. She was making clip-clop noises as she crawled forward, he was laughing giddily and shouting: "Don't buck! Don't buck!"

With a loud neighing sound, she arched her back and sent him tumbling gently into a row of shoes by the wall and he screamed in delighted outrage, before climbing up again. None of this, I felt, could be rendered in code. None of this, I felt, could be run on any other substrate. Their beauty was bodily, in the most profound sense, in the saddest and most wonderful sense.

I never loved my wife and our little boy more, I realised, than when I thought of them as mammals. I dragged myself, my animal body, out of bed to join them.

To Be a Machine by Mark O'Connell is published by Granta (£12.99). To order a copy for £11.04 go to bookshop.theguardian.com or call 0330 333 6846. Free UK p&p over £10, online orders only. Phone orders min p&p of £1.99

Read more: https://www.theguardian.com/science/2017/mar/25/animal-life-is-over-machine-life-has-begun-road-to-immortality

Biologists Are Figuring Out How Cells Tell Left From Right

In 2009, after she was diagnosed with stage 3 breast cancer, Ann Ramsdell began to search the scientific literature to see if someone with her diagnosis could make a full recovery. Ramsdell, a developmental biologist at the University of South Carolina, soon found something strange: The odds of recovery differed for women who had cancer in the left breast versus the right. Even more surprisingly, she found research suggesting that women with asymmetric breast tissue are more likely to develop cancer.



Asymmetry is not readily apparent. Yet below the skin, asymmetric structures are common. Consider how our gut winds its way through the abdominal cavity, sprouting unpaired organs as it goes. Or how our heart, born from two identical structures fused together, twists itself into an asymmetrical pump that can simultaneously push oxygen-rich blood around the body and draw in a new swig from the lungs, all in a heartbeat. The body's natural asymmetry is crucially important to our well-being. But, as Ramsdell knew, it was all too often ignored.

In her early years as a scientist, Ramsdell never gave asymmetry much thought. But on the day of her dissertation defense, she put a borrowed slide into a projector (this in the days before PowerPoint). The slide was of a chick embryo at the stage where its heart begins to loop to one side. Afterward a colleague asked why she put the slide in backward. "It's an embarrassing story," she said, "but I had never even thought about the directionality of heart looping." The chick's developing heart could distinguish between left and right, same as ours. She went on to do her postdoctoral research on why the heart loops to one side.

Years later, after her recovery, Ramsdell decided to leave the heart behind and to start looking for asymmetry in the mammary glands of mammals. In marsupials like wallabies and kangaroos, she read, the left and the right glands produce a different kind of milk, geared toward offspring of different ages. But her initial studies of mice proved disappointing: their left and right mammary glands didn't seem to differ at all.

The wrybill uses its laterally curved bill to reach insect larvae under rounded riverbed stones. Steve Atwood

Then she zoomed in on the genes and proteins that are active in different cells of the breast. There she found strong differences. The left breast, which appears to be more prone to cancer, also tends to have a higher number of unspecialized cells, according to unpublished work that's undergoing peer review. Those allow the breast to repair damaged tissue, but since they have a higher capacity to divide, they can also be involved in tumor formation. Why the cells are more common on the left, Ramsdell has not yet figured out. "But we think it has to do with the embryonic environment the cells grow up in, which is quite different on both sides."

Ramsdell and a cadre of other developmental biologists are trying to unravel how organisms can tell their right from their left. It's a complex process, but the key orchestrators of the handedness of life are beginning to come into clearer focus.

A Left Turn

In the 1990s, scientists studying the activity of different genes in the developing embryo discovered something surprising. In every vertebrate embryo examined so far, a gene called Nodal appears on the left side of the embryo. It is closely followed by its collaborator Lefty, a gene that suppresses Nodal activity on the embryo's right. "The Nodal-Lefty team appears to be the most important genetic pathway that guides asymmetry," said Cliff Tabin, an evolutionary biologist at Harvard University who played a central role in the initial research into Nodal and Lefty.

But what triggers the emergence of Nodal and Lefty inside the embryo? The developmental biologist Nobutaka Hirokawa came up with an explanation that is "so elegant we all want to believe it," Tabin said. Most vertebrate embryos start out as a tiny disk. On the bottom side of this disk, there's a little pit, the floor of which is covered in cilia, flickering cell extensions that, Hirokawa suggested, create a leftward current in the surrounding fluid. A 2002 study confirmed that a change in flow direction could change the expression of Nodal as well.

The twospot flounder lies on the seafloor on its right side, with both eyes on its left side. SEFSC Pascagoula Laboratory; Collection of Brandi Noble, NOAA/NMFS/SEFSC

Damaged cilia have long been associated with asymmetry-related disease. In Kartagener syndrome, for example, immobile cilia in the windpipe cause breathing difficulties. Intriguingly, the body asymmetry of people with the syndrome is often entirely inverted, becoming an almost perfect mirror image of what it would otherwise be. In the early 2000s, researchers discovered that the syndrome was caused by defects in a number of proteins driving movement in cells, including those of the cilia. In addition, a 2015 Nature study identified two dozen mouse genes related to cilia that give rise to unusual asymmetries when defective.

Yet cilia cannot be the whole story. "Many animals, even some mammals, don't have a ciliated pit," said Michael Levin, a biologist at Tufts University who was the first author on some of the Nodal papers from Tabin's lab in the 1990s.

In addition, the motor proteins critical for normal asymmetry development don't only occur in the cilia, Levin said. They also work with the cellular skeleton, a network of sticks and strands that provides structure to the cell, to guide its movements and transport cellular components.

An increasing number of studies suggest that this may give rise to asymmetry within individual cells as well. "Cells have a kind of handedness," said Leo Wan, a biomedical engineer at the Rensselaer Polytechnic Institute. When they hit an obstacle, some types of cells will turn left while others will turn right. Wan has created a test that consists of a plate with two concentric, circular ridges. "We place cells between those ridges, then watch them move around," he said. When they hit one of the ridges, they turn, and their preferred direction is clearly visible.

The red crossbill uses its unique beak to access the seeds in conifer cones. Jason Crotty

Wan believes the cell's preference depends on the interplay between two elements of the cellular skeleton: actin and myosin. Actin is a protein that forms trails throughout the cell. Myosin, another protein, moves across these trails, often while dragging other cellular components along. Both proteins are well-known for their activity in muscle cells, where they are crucial for contraction. Kenji Matsuno, a cellular biologist at Osaka University, has discovered a series of what he calls unconventional myosins that appear crucial to asymmetrical development. Matsuno agrees that myosins are likely causing cell handedness.

Consider the fruit fly. It lacks both the ciliated pit and Nodal, yet it develops an asymmetric hindgut. Matsuno has demonstrated that the handedness of cells in the hindgut depends on myosin and that the handedness reflected by the cells' initial tilt is what guides the gut's development. The cells' handedness does not just define how they move, but also how they hold on to each other, he explains. Together those wrestling cells create a hindgut that curves and turns exactly the way it's supposed to. A similar process was described in the roundworm C. elegans.

Nodal isn't necessary for the development of all asymmetry in vertebrates, either. In a study published in Nature Communications in 2013, Jeroen Bakkers, a biologist at the Hubrecht Institute in the Netherlands, described how the zebrafish heart may curve to the right in the absence of Nodal. In fact, he went on to show that it even does so when removed from the body and deposited into a simple lab dish. That being said, he adds, in animals without Nodal, the heart did not shift left as it should, nor did it turn correctly. Though some asymmetry originates within, the cells do need Nodal's help.

The European red slug has a large, dark respiratory pore on its right side. Hans Hillewaert

For Tabin, experiments like this show that while Nodal may not be the entire story, it is the most crucial factor in the development of asymmetry. "From the standpoint of evolution, it turns out, breaking symmetry wasn't that difficult," he said. "There are multiple ways of doing it, and different organisms have done it in different ways." The key problem that evolution had to solve was making asymmetry reliable and robust, he said. "Lefty and Nodal together are a way of making sure that asymmetry is robust."

Yet others believe that important links are waiting to be discovered. Research from Levins lab suggests that communication among cells may be an under-explored factor in the development of asymmetry.

The cellular skeleton also directs the transport of specialized proteins to the cell surface, Levin said. Some of these allow cells to communicate by exchanging electrical charges. This electrical communication, his research suggests, may direct the movements of cells as well as how the cells express their genes. "If we block the [communication] channels, asymmetrical development always goes awry," he said. "And by manipulating this system, we've been able to guide development in surprising but predictable directions, creating six-legged frogs, four-headed worms or froglets with an eye for a gut, without changing their genomes at all."

The apparent ability of developing organisms to detect and correct their own shape fuels Levin's belief that self-repair might one day be an option for humans as well. "Under every rock, there is a creature that can repair its complex body all by itself," he points out. If we can figure out how this works, Levin said, it might revolutionize medicine. "Many people think I'm too optimistic, but I have the engineering view on this: Anything that's not forbidden by the laws of physics is possible."

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/02/body-can-tell-left-right/

How Life (and Death) Spring From Disorder

What's the difference between physics and biology? Take a golf ball and a cannonball and drop them off the Tower of Pisa. The laws of physics allow you to predict their trajectories pretty much as accurately as you could wish for.



Now do the same experiment again, but replace the cannonball with a pigeon.

Biological systems don't defy physical laws, of course, but neither do they seem to be predicted by them. In contrast, they are goal-directed: survive and reproduce. We can say that they have a purpose, or what philosophers have traditionally called a teleology, that guides their behavior.

By the same token, physics now lets us predict, starting from the state of the universe a billionth of a second after the Big Bang, what it looks like today. But no one imagines that the appearance of the first primitive cells on Earth led predictably to the human race. Laws do not, it seems, dictate the course of evolution.

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology's only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.

Mayr claimed that these features make biology exceptional, a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.

Once we regard living things as agents performing a computation, collecting and storing information about an unpredictable environment, capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention, thought to be the defining characteristics of living systems, may then emerge naturally through the laws of thermodynamics and statistical mechanics.

This past November, physicists, mathematicians and computer scientists came together with evolutionary and molecular biologists to talk, and sometimes argue, about these ideas at a workshop at the Santa Fe Institute in New Mexico, the mecca for the science of complex systems. They asked: Just how special (or not) is biology?

It's hardly surprising that there was no consensus. But one message that emerged very clearly was that, if there's a kind of physics behind biological teleology and agency, it has something to do with the same concept that seems to have become installed at the heart of fundamental physics itself: information.

Disorder and Demons

The first attempt to bring information and intention into the laws of thermodynamics came in the middle of the 19th century, when statistical mechanics was being invented by the Scottish scientist James Clerk Maxwell. Maxwell showed how introducing these two ingredients seemed to make it possible to do things that thermodynamics proclaimed impossible.

Maxwell had already shown how the predictable and reliable mathematical relationships between the properties of a gas (pressure, volume and temperature) could be derived from the random and unknowable motions of countless molecules jiggling frantically with thermal energy. In other words, thermodynamics, the new science of heat flow, which united large-scale properties of matter like pressure and temperature, was the outcome of statistical mechanics on the microscopic scale of molecules and atoms.

According to thermodynamics, the capacity to extract useful work from the energy resources of the universe is always diminishing. Pockets of energy are declining, concentrations of heat are being smoothed away. In every physical process, some energy is inevitably dissipated as useless heat, lost among the random motions of molecules. This randomness is equated with the thermodynamic quantity called entropy, a measurement of disorder, which is always increasing. That is the second law of thermodynamics. Eventually all the universe will be reduced to a uniform, boring jumble: a state of equilibrium, wherein entropy is maximized and nothing meaningful will ever happen again.
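In textbook form (standard statistical mechanics, not a formula quoted from the article), Boltzmann's entropy and the second law read

    S = k_B \ln \Omega, \qquad \Delta S \ge 0 \quad \text{(isolated system)},

where \Omega counts the microscopic arrangements of molecules consistent with the macroscopic state we can measure, and k_B is Boltzmann's constant.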

Are we really doomed to that dreary fate? Maxwell was reluctant to believe it, and in 1867 he set out to, as he put it, "pick a hole" in the second law. His aim was to start with a disordered box of randomly jiggling molecules, then separate the fast molecules from the slow ones, reducing entropy in the process.

Imagine some little creature (the physicist William Thomson later called it, rather to Maxwell's dismay, a demon) that can see each individual molecule in the box. The demon separates the box into two compartments, with a sliding door in the wall between them. Every time he sees a particularly energetic molecule approaching the door from the right-hand compartment, he opens it to let it through. And every time a slow, cold molecule approaches from the left, he lets that through, too. Eventually, he has a compartment of cold gas on the right and hot gas on the left: a heat reservoir that can be tapped to do work.

This is only possible for two reasons. First, the demon has more information than we do: It can see all of the molecules individually, rather than just statistical averages. And second, it has intention: a plan to separate the hot from the cold. By exploiting its knowledge with intent, it can defy the laws of thermodynamics.

At least, so it seemed. It took a hundred years to understand why Maxwell's demon can't in fact defeat the second law and avert the inexorable slide toward deathly, universal equilibrium. And the reason shows that there is a deep connection between thermodynamics and the processing of information, or in other words, computation. The German-American physicist Rolf Landauer showed that even if the demon can gather information and move the (frictionless) door at no energy cost, a penalty must eventually be paid. Because it can't have unlimited memory of every molecular motion, it must occasionally wipe its memory clean, forget what it has seen and start again, before it can continue harvesting energy. This act of information erasure has an unavoidable price: It dissipates energy, and therefore increases entropy. All the gains against the second law made by the demon's nifty handiwork are canceled by Landauer's limit: the finite cost of information erasure (or more generally, of converting information from one form to another).
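Landauer's limit is easy to put a number on. The short calculation below uses the standard formula of k_B T ln 2 per erased bit at room temperature; the comment about real hardware is an order-of-magnitude observation of mine, not a figure from the article.

# Landauer's limit: erasing one bit must dissipate at least k_B * T * ln 2 of energy.
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # roughly room temperature, K
limit = k_B * T * math.log(2)  # about 2.9e-21 joules per erased bit
print(f"Landauer limit at {T:.0f} K: {limit:.2e} J per bit")
# Real hardware dissipates many orders of magnitude more than this per operation,
# which is the "more than a million times" gap described in the paragraphs below.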

Living organisms seem rather like Maxwell's demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with intention. Even simple bacteria move with purpose toward sources of heat and nutrition. In his 1944 book What is Life?, the physicist Erwin Schrödinger expressed this by saying that living organisms "feed on negative entropy."

They achieve it, Schrödinger said, by capturing and storing information. Some of that information is encoded in their genes and passed on from one generation to the next: a set of instructions for reaping negative entropy. Schrödinger didn't know where the information is kept or how it is encoded, but his intuition that it is written into what he called an "aperiodic crystal" inspired Francis Crick, himself trained as a physicist, and James Watson when in 1953 they figured out how genetic information can be encoded in the molecular structure of the DNA molecule.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism's ancestors, right back to the distant past, to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with that environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it's this information that helps the organism stay out of equilibrium, because, like Maxwell's demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
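The claim that correlation amounts to shared information can be made precise with mutual information. The toy calculation below is my own illustration, not Wolpert and Kolchinsky's: a hypothetical bacterium whose swim direction usually matches the side the food is on carries about half a bit of information about its environment, while a randomly swimming one carries none.

# Toy illustration: mutual information between the food's location and the
# bacterium's swim direction, for a well-adapted bug and a random swimmer.
import math

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

adapted = {("left", "left"): 0.45, ("left", "right"): 0.05,
           ("right", "left"): 0.05, ("right", "right"): 0.45}
random_bug = {(food, swim): 0.25 for food in ("left", "right") for swim in ("left", "right")}

print(f"well-adapted bug: {mutual_information(adapted):.2f} bits shared with environment")
print(f"random swimmer:   {mutual_information(random_bug):.2f} bits shared with environment")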

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer's resolution of the conundrum of Maxwell's demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.

"The implication," he said, "is that natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform." In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. "This issue of the costs and benefits of computing one's way through life," he said, "has been largely overlooked in biology so far."

Inanimate Darwinism

So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium. Sure, it's a bit of a mouthful. But notice that it said nothing about genes and evolution, on which Mayr, like many biologists, assumed that biological intention and purpose depend.

How far can this picture then take us? Genes honed by natural selection are undoubtedly central to biology. But could it be that evolution by natural selection is itself just a particular case of a more general imperative toward function and apparent purpose that exists in the purely physical universe? It is starting to look that way.

Adaptation has long been seen as the hallmark of Darwinian evolution. But Jeremy England at the Massachusetts Institute of Technology has argued that adaptation to the environment can happen even in complex nonliving systems.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there's no way of defining a well-adapted organism except in retrospect. The fittest are those that turned out to be better at survival and replication, but you can't predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.

England's definition of adaptation is closer to Schrödinger's, and indeed to Maxwell's: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she's better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.

Complex systems tend to settle into these well-adapted states with surprising ease, said England: "Thermally fluctuating matter often gets spontaneously beaten into shapes that are good at absorbing work from the time-varying environment."

There is nothing in this process that involves the gradual accommodation to the surroundings through the Darwinian mechanisms of replication, mutation and inheritance of traits. There's no replication at all. "What is exciting about this is that it means that when we give a physical account of the origins of some of the adapted-looking structures we see, they don't necessarily have to have had parents in the usual biological sense," said England. You can explain evolutionary adaptation using thermodynamics, even in intriguing cases where there are no self-replicators and Darwinian logic breaks down, so long as the system in question is complex, versatile and sensitive enough to respond to fluctuations in its environment.

But neither is there any conflict between physical and Darwinian adaptation. In fact, the latter can be seen as a particular case of the former. If replication is present, then natural selection becomes the route by which systems acquire the ability to absorb work (Schrödinger's negative entropy) from the environment. Self-replication is, in fact, an especially good mechanism for stabilizing complex systems, and so it's no surprise that this is what biology uses. But in the nonliving world where replication doesn't usually happen, the well-adapted dissipative structures tend to be ones that are highly organized, like sand ripples and dunes crystallizing from the random dance of windblown sand. Looked at this way, Darwinian evolution can be regarded as a specific instance of a more general physical principle governing nonequilibrium systems.

Prediction Machines

This picture of complex structures adapting to a fluctuating environment allows us also to deduce something about how these structures store information. In short, so long as such structures, whether living or not, are compelled to use the available energy efficiently, they are likely to become prediction machines.

It's almost a defining characteristic of life that biological systems change their state in response to some driving signal from the environment. Something happens; you respond. Plants grow toward the light; they produce toxins in response to pathogens. These environmental signals are typically unpredictable, but living systems learn from experience, storing up information about their environment and using it to guide future behavior. (Genes, in this picture, just give you the basic, general-purpose essentials.)

Prediction isn't optional, though. According to the work of Susanne Still at the University of Hawaii, Gavin Crooks, formerly at the Lawrence Berkeley National Laboratory in California, and their colleagues, predicting the future seems to be essential for any energy-efficient system in a random, fluctuating environment.

There's a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn't bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. "A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia, the useless information about the past," said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information, that which is likely to be useful for future survival.
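One compact way to write the quantity being minimized, paraphrasing that framing rather than quoting the papers, is

    I_{\text{nostalgia}}(t) = I(s_t ; x_t) - I(s_t ; x_{t+1}),

the information the machine's internal state s_t retains about the signal x_t it has just seen, minus the portion of that information that still helps predict the next signal x_{t+1}.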

You'd expect natural selection to favor organisms that use energy efficiently. But even individual biomolecular devices like the pumps and motors in our cells should, in some important way, learn from the past to anticipate the future. To acquire their remarkable efficiency, Still said, these devices must "implicitly construct concise representations of the world they have encountered so far, enabling them to anticipate what's to come."

The Thermodynamics of Death

Even if some of these basic information-processing features of living systems are already prompted, in the absence of evolution or replication, by nonequilibrium thermodynamics, you might imagine that more complex traits (tool use, say, or social cooperation) must be supplied by evolution.

Well, don't count on it. These behaviors, commonly thought to be the exclusive domain of the highly advanced evolutionary niche that includes primates and birds, can be mimicked in a simple model consisting of a system of interacting particles. The trick is that the system is guided by a constraint: It acts in a way that maximizes the amount of entropy (in this case, defined in terms of the different possible paths the particles could take) it generates within a given timespan.

Entropy maximization has long been thought to be a trait of nonequilibrium systems. But the system in this model obeys a rule that lets it maximize entropy over a fixed time window that stretches into the future. In other words, it has foresight. In effect, the model looks at all the paths the particles could take and compels them to adopt the path that produces the greatest entropy. Crudely speaking, this tends to be the path that keeps open the largest number of options for how the particles might move subsequently.

You might say that the system of particles experiences a kind of urge to preserve freedom of future action, and that this urge guides its behavior at any moment. The researchers who developed the model, Alexander Wissner-Gross at Harvard University and Cameron Freer, a mathematician at the Massachusetts Institute of Technology, call this a "causal entropic force." In computer simulations of configurations of disk-shaped particles moving around in particular settings, this force creates outcomes that are eerily suggestive of intelligence.
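As best I can render the published formulation here (take the notation as a sketch rather than a quotation), the causal entropic force on a system at configuration X_0 is

    F(X_0, \tau) = T_c \, \nabla_{X} S_c(X, \tau) \big|_{X_0},

where S_c is the entropy of the distribution of paths the system could follow over the time horizon \tau, and T_c is a constant that sets how strongly the system is pushed toward configurations that keep the most future paths open.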

In one case, a large disk was able to use a small disk to extract a second small disk from a narrow tube, a process that looked like tool use. Freeing the disk increased the entropy of the system. In another example, two disks in separate compartments synchronized their behavior to pull a larger disk down so that they could interact with it, giving the appearance of social cooperation.

Of course, these simple interacting agents get the benefit of a glimpse into the future. Life, as a general rule, does not. So how relevant is this for biology? That's not clear, although Wissner-Gross said that he is now working to establish "a practical, biologically plausible, mechanism for causal entropic forces". In the meantime, he thinks that the approach could have practical spinoffs, offering a shortcut to artificial intelligence. "I predict that a faster way to achieve it will be to discover such behavior first and then work backward from the physical principles and constraints, rather than working forward from particular calculation or prediction techniques," he said. In other words, first find a system that does what you want it to do and then figure out how it does it.

Aging, too, has conventionally been seen as a trait dictated by evolution. Organisms have a lifespan that creates opportunities to reproduce, the story goes, without inhibiting the survival prospects of offspring by the parents sticking around too long and competing for resources. That seems surely to be part of the story, but Hildegard Meyer-Ortmanns, a physicist at Jacobs University in Bremen, Germany, thinks that ultimately aging is a physical process, not a biological one, governed by the thermodynamics of information.

It's certainly not simply a matter of things wearing out. "Most of the soft material we are made of is renewed before it has the chance to age," Meyer-Ortmanns said. But this renewal process isn't perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.
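A cartoon version of that argument, my own toy model rather than Meyer-Ortmanns' analysis, is easy to run: give the organism a fixed repair budget per renewal cycle and make each unit of accumulated damage costlier to fix, and the errors eventually outrun the repairs.

# Cartoon model (illustrative only): a fixed repair budget per renewal cycle,
# with repair buying less and less as accumulated damage piles up.
errors = 0.0
damage_per_cycle = 3.0    # new copying errors introduced each renewal cycle
repair_budget = 5.0       # fixed energy available for repair each cycle
for cycle in range(1, 200):
    errors += damage_per_cycle
    errors -= min(errors, repair_budget / (1.0 + errors))  # repair capacity dilutes with damage
    if errors > 50:       # arbitrary threshold for "too flawed to function"
        print(f"renewal can no longer keep up after {cycle} cycles")
        break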

Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (called the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can't survive much beyond age 100.

There's a corollary to this apparent urge for energy-efficient, organized, predictive systems to appear in a fluctuating nonequilibrium environment. We ourselves are such a system, as are all our ancestors back to the first primitive cell. And nonequilibrium thermodynamics seems to be telling us that this is just what matter does under such circumstances. In other words, the appearance of life on a planet like the early Earth, imbued with energy sources such as sunlight and volcanic activity that keep things churning out of equilibrium, starts to seem not an extremely unlikely event, as many scientists have assumed, but virtually inevitable. In 2006, Eric Smith and the late Harold Morowitz at the Santa Fe Institute argued that the thermodynamics of nonequilibrium systems makes the emergence of organized, complex systems much more likely on a prebiotic Earth far from equilibrium than it would be if the raw chemical ingredients were just sitting in a warm little pond (as Charles Darwin put it) stewing gently.

In the decade since that argument was first made, researchers have added detail and insight to the analysis. Those qualities that Ernst Mayr thought essential to biology, meaning and intention, may emerge as a natural consequence of statistics and thermodynamics. And those general properties may in turn lead naturally to something like life.

At the same time, astronomers have shown us just how many worlds there are, by some estimates stretching into the billions, orbiting other stars in our galaxy. Many are far from equilibrium, and at least a few are Earth-like. And the same rules are surely playing out there, too.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/02/life-death-spring-disorder/

How Did Life Begin? Dividing Droplets Could Hold the Answer

A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth's primordial soup.

Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it "a big achievement" that suggests "the general phenomenology of life formation is a lot easier than one might think."

The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed protocells, and how did they come alive? Proponents of the membrane-first hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life's humble beginnings, proposed that the mystery protocells might have been liquid droplets: naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin's long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of chemically active droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This active droplet behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

"If chemically active droplets can grow to a set size and divide of their own accord, then it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup," said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

The findings, reported in Nature Physics last month, paint a possible picture of life's start by explaining "how cells made daughters," said Zwicker, who is now a postdoctoral researcher at Harvard University. "This is, of course, key if you want to think about evolution."

Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is "significantly simpler than other mechanisms of protocell division that have been considered," calling it "a very promising direction."

However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is a far cry, he noted, from the complicated, multistep process by which modern cells divide.

Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it's plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago (tiny liquid aggregates of protein and RNA in cells of the worm C. elegans), explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, "the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place," he said.

"This Nature Physics paper takes that to the next level, by revealing the features that droplets would have needed to play a role as protocells," Brangwynne added.

Droplets in Dresden

The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as P granules in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn't take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin's 1924 protocell theory. In a 2012 essay about Oparin's life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about "may still be alive and well, safe within our cells, like flies in life's evolving amber."

Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life, a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin's ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn't know either.

In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as out-of-equilibrium systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there's an energy source, molecules flow in and out of an active droplet. "In the context of early Earth, sunlight would be the driving force," Jülicher said.

Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker's simulations grew to tens or hundreds of microns across, depending on their properties, which is the scale of cells.
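
A minimal rate-balance sketch shows why such a droplet stops growing at a characteristic size. It is an illustration only, not the model in the Nature Physics paper: it assumes that material enters in proportion to the droplet's surface area while the B-to-A back-reaction inside expels material in proportion to its volume, with made-up rate constants K_IN and K_OUT.

```python
# Rate-balance sketch of an active droplet reaching a fixed size
# (illustrative assumptions; not the paper's continuum model).
import math

K_IN = 1.0    # influx per unit surface area (arbitrary units, assumed)
K_OUT = 0.03  # efflux per unit volume from the B -> A back-reaction (assumed)

def growth_rate(radius):
    """Net volume change dV/dt for a spherical droplet of this radius."""
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return K_IN * surface - K_OUT * volume

# Integrate the volume forward in time from a small seed droplet.
radius, dt = 1.0, 0.1
for _ in range(20_000):
    volume = (4 / 3) * math.pi * radius ** 3 + growth_rate(radius) * dt
    radius = (3 * volume / (4 * math.pi)) ** (1 / 3)

print(f"simulated steady radius: {radius:.1f}")
print(f"predicted steady radius: {3 * K_IN / K_OUT:.1f}")
```

In this sketch the two fluxes cancel at a radius of 3 * K_IN / K_OUT; a droplet smaller than that grows toward it, while a larger one shrinks back, which is the sense in which the steady size depends on the droplet's properties.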


The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet's growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker's equations, "he immediately jumped on it and said, 'That looks very much like division,'" Zwicker said. "And then this whole protocell idea emerged quickly."
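
The surface-to-volume argument behind that instability can be checked with a few lines of geometry: at fixed volume, an elongated droplet exposes more surface than a sphere, so a surface-fed droplet that begins to bulge takes in material faster. This is only an illustrative calculation with an idealized stretched (prolate) shape, not the paper's stability analysis.

```python
# Compare the surface area of elongated droplets with that of a sphere
# of equal volume (illustrative geometry only).
import math

def prolate_surface(a, c):
    """Surface area of a prolate spheroid with equatorial radius a and
    long semi-axis c (c >= a)."""
    if math.isclose(a, c):
        return 4 * math.pi * a ** 2
    e = math.sqrt(1 - (a / c) ** 2)
    return 2 * math.pi * a ** 2 * (1 + (c / (a * e)) * math.asin(e))

R = 1.0
sphere_area = 4 * math.pi * R ** 2
for stretch in (1.0, 1.2, 1.5, 2.0):   # aspect ratio c/a
    a = R / stretch ** (1 / 3)         # keeps the volume (4/3)*pi*a^2*c fixed
    c = stretch * a
    print(f"aspect {stretch:.1f}: surface / sphere = "
          f"{prolate_surface(a, c) / sphere_area:.3f}")   # ratios > 1, rising
```

Because efflux in the earlier sketch scales with the unchanged volume while influx scales with the now-larger surface, the elongated droplet gains material on net, and the bulge keeps feeding itself until the neck pinches off.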

Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin's vision. "If you just think about droplets like Oparin did, then it's not clear how evolution could act on these droplets," Zwicker said. "For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex."

Globule Ancestor

Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

Tang's lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, to be taken in collaboration with Hyman's lab, is to try to observe centrosomes or other biological droplets dividing, and to determine whether they use the mechanism identified in the paper by Zwicker and colleagues. "That would be a big deal," said Giomi, the Leiden biophysicist.

When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn't convinced of the effect's significance. "There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide," he said.

Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, "I can go along with that," noting that he would define protocells as the first droplets that had membranes.

The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

The luckiest part of the whole process, in Jülicher's opinion, was not that droplets turned into cells, but that the first droplet, our globule ancestor, formed to begin with. Droplets require a lot of chemical material to spontaneously arise or nucleate, and it's unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

"It's a very rare event. You have to wait a long time for it to happen," he said. "And once it happens, then the next things happen more easily, and more systematically."

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/01/life-begin-dividing-droplets-hold-answer/