New Brain Maps With Unmatched Detail May Change Neuroscience

Sitting at the desk in his lower-campus office at Cold Spring Harbor Laboratory, the neuroscientist Tony Zador turned his computer monitor toward me to show off a complicated matrix-style graph. Imagine something that looks like a spreadsheet but instead of numbers it’s filled with colors of varying hues and gradations. Casually, he said: “When I tell people I figured out the connectivity of tens of thousands of neurons and show them this, they just go ‘huh?’ But when I show this to people …” He clicked a button onscreen and a transparent 3-D model of the brain popped up, spinning on its axis, filled with nodes and lines too numerous to count. “They go ‘What the _____!’”

What Zador showed me was a map of 50,000 neurons in the cerebral cortex of a mouse. It indicated where the cell bodies of every neuron sat and where they sent their long axon branches. A neural map of this size and detail has never been made before. Forgoing the traditional method of brain mapping that involves marking neurons with fluorescence, Zador had taken an unusual approach that drew on the long tradition of molecular biology research at Cold Spring Harbor, on Long Island. He used bits of genomic information to tag each individual neuron with a unique RNA sequence, or “bar code.” He then dissected the brain into cubes like a sheet cake and fed the pieces into a DNA sequencer. The result: a 3-D rendering of 50,000 neurons in the mouse cortex (with as many more to be added soon) mapped with single-cell resolution.

This work, Zador’s magnum opus, is still being refined for publication. But in a paper recently published by Nature, he and his colleagues showed that the technique, called MAPseq (Multiplexed Analysis of Projections by Sequencing), can be used to find new cell types and projection patterns never before observed. The paper also demonstrated that this new high-throughput mapping method is strongly competitive in accuracy with the fluorescent technique, which is the current gold standard but works best with small numbers of neurons.

Tony Zador, a neurophysiologist at Cold Spring Harbor Laboratory, realized that genome sequencing techniques could scale up to tame the astronomical numbers of neurons and interconnections in the brain.
jeansweep/Quanta Magazine

The project was born from Zador’s frustration during his “day job” as a neurophysiologist, as he wryly referred to it. He studies auditory decision-making in rodents: how their brains hear sounds, process the audio information and determine a behavioral output or action. Electrophysiological recordings and the other traditional tools for addressing such questions left the mathematically inclined scientist unsatisfied. The problem, according to Zador, is that we don’t understand enough about the circuitry of the neurons, which is the reason he pursues his “second job” creating tools for imaging the brain.

The current state of the art for brain mapping is embodied by the Allen Brain Atlas, which was compiled from work in many laboratories over several years at a cost upward of $25 million. The Allen Atlas is what’s known as a bulk connectivity atlas because it traces known subpopulations of neurons and their projections as groups. It has been highly useful for researchers, but it cannot distinguish subtle differences within the groups or neuron subpopulations.

If we ever want to know how a mouse hears a high-pitched trill, registers that the sound means a refreshing drink reward is available and lays down new memories to recall the treat later, we will need to start with a map or wiring diagram for the brain. In Zador’s view, lack of knowledge about that kind of neural circuitry is partly to blame for why more progress has not been made in the treatment of psychiatric disorders, and why artificial intelligence is still not all that intelligent.

Justus Kebschull, a Stanford University neuroscientist, an author of the new Nature paper and a former graduate student in Zador’s lab, remarked that doing neuroscience without knowing about the circuitry is like “trying to understand how a computer works by looking at it from the outside, sticking an electrode in and probing what we can find. … Without ever knowing the hard drive is connected to the processor and the USB port provides input to the whole system, it’s difficult to understand what’s happening.”

Inspiration for MAPseq struck Zador when he learned of another brain mapping technique called Brainbow. Hailing from the lab of Jeff Lichtman at Harvard University, this method was remarkable in that it genetically labeled up to 200 individual neurons simultaneously using different combinations of fluorescent dyes. The result was a tantalizing tableau of neon-colored neurons that displayed, in detail, the complex intermingling of axons and neuron cell bodies. The groundbreaking work gave hope that mapping the connectome—the complete plan of neural connections in the brain—was soon to be a reality. Unfortunately, a limitation of the technique in practice is that through a microscope, experimenters could resolve only about five to 10 distinct colors, which was not enough to penetrate the tangle of neurons in the cortex and map many neurons at once.

That’s when the lightbulb went on in Zador’s head. He realized that the challenge of the connectome’s huge complexity might be tamed if researchers could harness the increasing speed and dwindling costs of high-throughput genomic sequencing techniques. “It’s what mathematicians call reducing it to a previously solved problem,” he explained.

In MAPseq, researchers inject an animal with genetically modified viruses that carry a variety of known RNA sequences, or “bar codes.” For a week or more, the viruses multiply inside the animal, filling each neuron with some distinctive combination of those bar codes. When the researchers then cut the brain into sections, the RNA bar codes can help them track individual neurons from slide to slide.
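
To make the bookkeeping concrete, here is a minimal Python sketch of the readout idea only, not the published MAPseq pipeline: the region names, bar codes and read counts are all invented for illustration. The premise is that a bar code’s highest read count marks the piece of tissue holding the cell body, while lower counts elsewhere mark the axon’s targets.

```python
from collections import defaultdict

# Hypothetical sequencing output: (brain piece, bar code) -> read count.
# A real experiment involves thousands of bar codes and many tissue pieces.
reads = {
    ("visual cortex", "ACGTTC"): 900,   # most copies: likely the soma
    ("thalamus", "ACGTTC"): 40,         # fewer copies: an axon branch
    ("midbrain", "ACGTTC"): 25,
    ("auditory cortex", "GGATAC"): 700,
    ("striatum", "GGATAC"): 55,
}

# Group read counts by bar code, i.e., by putative neuron.
by_barcode = defaultdict(dict)
for (piece, barcode), count in reads.items():
    by_barcode[barcode][piece] = count

# For each neuron, call the soma and list its projection targets.
for barcode, counts in sorted(by_barcode.items()):
    soma = max(counts, key=counts.get)
    targets = sorted(piece for piece in counts if piece != soma)
    print(f"neuron {barcode}: soma in {soma}, projects to {targets}")
```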

Zador’s insight led to the new Nature paper, in which his lab and a team at University College London led by the neuroscientist Thomas Mrsic-Flogel used MAPseq to trace the projections of almost 600 neurons in the mouse visual system. (Editor’s note: Zador and Mrsic-Flogel both receive funding from the Simons Foundation, which publishes Quanta.)

Six hundred neurons is a modest start compared with the tens of millions in the brain of a mouse. But it was ample for the specific purpose the researchers had in mind: They were looking to discern whether there is a structure to the brain’s wiring pattern that might be informative about its function. A currently popular theory is that in the visual cortex, an individual neuron gathers a specific bit of information from the eye—about the edge of an object in the field of view, or a type of movement or spatial orientation, for example. The neuron then sends a signal to a single corresponding area in the brain that specializes in processing that type of information.

To test this theory, the team first mapped a handful of neurons in mice in the traditional way by inserting a genetically encoded fluorescent dye into the individual cells. Then, with a microscope, they traced how the cells stretched from the primary visual cortex (the brain area that receives input from the eyes) to their endpoints elsewhere in the brain. They found that the neurons’ axons branched out and sent information to many areas simultaneously, overturning the one-to-one mapping theory.

Next, they asked if there were any patterns to these projections. They used MAPseq to trace the projections of 591 neurons as they branched out and innervated multiple targets. What the team observed was that the distribution of axons was structured: Some neurons always sent axons to areas A, B and C but never to D and E, for example.

These results suggest the visual system contains a dizzying level of cross-connectivity and that the pattern of those connections is more complicated than a one-to-one mapping. “Higher visual areas don’t just get information that is specifically tailored to them,” Kebschull said. Instead, they share many of the same inputs, “so their computations might be tied to each other.”

Nevertheless, the fact that certain cells do project to specific areas also means that within the visual cortex there are specialized cells that have not yet been identified. Kebschull said this map is like a blueprint that will enable later researchers to understand what these cells are doing. “MAPseq allows you to map out the hardware. … Once we know the hardware we can start to look at the software, or how the computations happen,” he said.

MAPseq’s competitive edge in speed and cost for such investigations is considerable: According to Zador, the technique should be able to scale up to handle 100,000 neurons within a week or two for only $10,000 — far faster than traditional mapping would be, at a fraction of the cost.

Such advantages will make it more feasible to map and compare the neural pathways of large numbers of brains. Studies of conditions such as schizophrenia and autism that are thought to arise from differences in brain wiring have often frustrated researchers because the available tools don’t capture enough details of the neural interconnections. It’s conceivable that researchers will be able to map mouse models of these conditions and compare them with more typical brains, sparking new rounds of research. “A lot of psychiatric disorders are caused by problems at the circuit level,” said Hongkui Zeng, executive director of the structured science division at the Allen Institute for Brain Science. “Connectivity information will tell you where to look.”

High-throughput mapping also allows scientists to gather lots of neurological data and look for patterns that reflect general principles of how the brain works. “What Tony is doing is looking at the brain in an unbiased way,” said Sreekanth Chalasani, a molecular neurobiologist at the Salk Institute. “Just as the human genome map has provided a scaffolding to test hypotheses and look for patterns in [gene] sequence and function, Tony’s method could do the same” for brain architecture.

The detailed map of the human genome didn’t immediately explain all the mysteries of how biology works, but it did provide a biomolecular parts list and open the way for a flood of transformative research. Similarly, in its present state of development, MAPseq cannot provide any information about the function or location of the cells it is tagging or show which cells are talking to one another. Yet Zador plans to add this functionality soon. He is also collaborating with scientists studying various parts of the brain, such as the neural circuits that underlie fear conditioning.

“I think there are insights to be derived from connectivity. But just like genomes themselves aren’t interesting, it’s what they enable that is transformative. And that’s why I’m excited,” Zador said. “I’m hopeful it’s going to provide the scaffolding for the next generation of work in the field.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/new-brain-maps-with-unmatched-detail-may-change-neuroscience/

Why Winning in Rock-Paper-Scissors Isn’t Everything

Rock-Paper-Scissors works great for deciding who has to take out the garbage. But have you ever noticed what happens when, instead of playing best of three, you just let the game continue round after round? At first, you play a pattern that gives you the upper hand, but then your opponent quickly catches on and turns things in her favor. As strategies evolve, a point is reached where neither side seems to be able to improve any further. Why does that happen?

In 1950, the mathematician John Nash proved that in any kind of game with a finite number of players and a finite number of options—like Rock-Paper-Scissors—a mix of strategies always exists where no single player can do any better by changing their own strategy alone. The theory behind such stable strategy profiles, which came to be known as “Nash equilibria,” revolutionized the field of game theory, altering the course of economics and changing the way everything from political treaties to network traffic is studied and analyzed. And it earned Nash the Nobel Prize in 1994.

So, what does a Nash equilibrium look like in Rock-Paper-Scissors? Let’s model the situation with you (Player A) and your opponent (Player B) playing the game over and over. Each round, the winner earns a point, the loser loses a point, and ties count as zero.

Now, suppose Player B adopts the (silly) strategy of choosing Paper every turn. After a few rounds of winning, losing, and tying, you are likely to notice the pattern and adopt a winning counterstrategy by choosing Scissors every turn. Let’s call this strategy profile (Scissors, Paper). If every round unfolds as Scissors vs. Paper, you’ll slice your way to a perfect record.

But Player B soon sees the folly in this strategy profile. Observing your reliance on Scissors, she switches to the strategy of always choosing Rock. This strategy profile (Scissors, Rock) starts winning for Player B. But of course, you now switch to Paper. During these stretches of the game, Players A and B are employing what are known as “pure” strategies—a single strategy that is chosen and repeatedly executed.

Clearly, no equilibrium will be achieved here: For any pure strategy, like “always choose Rock,” a counterstrategy can be adopted, like “always choose Paper,” which will force another change in strategy. You and your opponent will forever be chasing each other around the circle of strategies.

But you can also try a “mixed” strategy. Let’s assume that, instead of choosing only one strategy to play, you can randomly choose one of the pure strategies each round. Instead of “always play Rock,” a mixed strategy could be to “play Rock half the time and Scissors the other half.” Nash proved that, when such mixed strategies are allowed, every game like this must have at least one equilibrium point. Let’s find it.

So, what’s a sensible mixed strategy for Rock-Paper-Scissors? A reasonable intuition would be “choose Rock, Paper or Scissors with equal probability,” denoted as (1/3,1/3,1/3). This means Rock, Paper and Scissors are each chosen with probability 1/3. Is this a good strategy?

Well, suppose your opponent’s strategy is “always choose Rock,” a pure strategy that can be represented as (1,0,0). How will the game play out under the strategy profile (1/3,1/3,1/3) for A and (1,0,0) for B?

To get a better picture of our game, we’ll construct a table that shows the probability of each of the nine possible outcomes every round: Rock for A, Rock for B; Rock for A, Paper for B; and so on. In the chart below, the top row indicates Player B’s choice, and the leftmost column indicates Player A’s choice.

             B: Rock    B: Paper    B: Scissors
A: Rock        1/3          0            0
A: Paper       1/3          0            0
A: Scissors    1/3          0            0

Each entry in the table shows the probability that the given pair of choices is made in any given round. This is simply the product of the probabilities that each player makes their respective choice. For example, the probability of Player A choosing Paper is 1/3, and the probability of player B choosing Rock is 1, so the probability of (Paper for A, Rock for B) is 1/3×1=1/3. But the probability of (Paper for A, Scissors for B) is 1/3×0=0, since there is a zero probability of Player B choosing Scissors.

So how does Player A fare in this strategy profile? Player A will win one-third of the time (Paper, Rock), lose one-third of the time (Scissors, Rock) and tie one-third of the time (Rock, Rock). We can compute the number of points that Player A will earn, on average, each round by computing the sum of the product of each outcome with its respective probability:

1/3(1)+1/3(−1)+1/3(0)=0

This says that, on average, Player A will earn 0 points per round. You will win, lose and tie with equal likelihood. On average, the number of wins and losses will even out, and both players are essentially headed for a draw.
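
That averaging step is mechanical enough to check by machine. Below is a minimal Python sketch of the expected-payoff calculation just described, using exact fractions; the function and variable names are ours, not anything standard.

```python
from fractions import Fraction as F

MOVES = ("Rock", "Paper", "Scissors")

# Player A's points for each (A choice, B choice) outcome.
PAYOFF = {
    ("Rock", "Rock"): 0,      ("Rock", "Paper"): -1,     ("Rock", "Scissors"): 1,
    ("Paper", "Rock"): 1,     ("Paper", "Paper"): 0,     ("Paper", "Scissors"): -1,
    ("Scissors", "Rock"): -1, ("Scissors", "Paper"): 1,  ("Scissors", "Scissors"): 0,
}

def expected_payoff(a, b):
    """Player A's average points per round, given mixed strategies a and b
    (each a tuple of probabilities for Rock, Paper, Scissors)."""
    return sum(p * q * PAYOFF[(m, n)]
               for p, m in zip(a, MOVES)
               for q, n in zip(b, MOVES))

uniform = (F(1, 3), F(1, 3), F(1, 3))
always_rock = (F(1), F(0), F(0))
print(expected_payoff(uniform, always_rock))  # 0 -- a draw on average
```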

But as we’ve already discussed, you can do better by changing your strategy, assuming your opponent doesn’t change theirs. If you switch to the strategy (0,1,0) (“choose Paper every time”), the probability chart will look like this:

             B: Rock    B: Paper    B: Scissors
A: Rock         0           0            0
A: Paper        1           0            0
A: Scissors     0           0            0

Each time you play, your Paper will wrap your opponent’s Rock, and you’ll earn one point every round.

So, this pair of strategies—(1/3,1/3,1/3) for A and (1,0,0) for B—is not a Nash equilibrium: You, as Player A, can improve your results by changing your strategy.

As we’ve seen, pure strategies don’t seem to lead to equilibrium. But what if your opponent tries a mixed strategy, like (1/2,1/4,1/4)? This is the strategy “Rock half the time; Paper and Scissors each one quarter of the time.” Here’s the associated probability chart:

             B: Rock    B: Paper    B: Scissors
A: Rock        1/6         1/12         1/12
A: Paper       1/6         1/12         1/12
A: Scissors    1/6         1/12         1/12

Now, here’s the “payoff” chart, from Player A’s perspective; this is the number of points Player A receives for each outcome.

             B: Rock    B: Paper    B: Scissors
A: Rock         0           −1           +1
A: Paper       +1            0           −1
A: Scissors    −1           +1            0

We put the two charts together, using multiplication, to compute how many points, on average, Player A will earn each round.

1/6(0)+1/12(−1)+1/12(1)+1/6(1)+1/12(0)+1/12(−1)+1/6(−1)+1/12(1)+1/12(0)=0

On average, Player A is again earning 0 points per round. Like before, this strategy profile, (1/3,1/3,1/3) for A and (1/2,1/4,1/4) for B, ends up in a draw.

But also like before, you as Player A can improve your results by switching strategies: Against Player B’s (1/2,1/4,1/4), Player A should play (1/4,1/2,1/4). This has a probability chart of

             B: Rock    B: Paper    B: Scissors
A: Rock        1/8         1/16         1/16
A: Paper       1/4         1/8          1/8
A: Scissors    1/8         1/16         1/16

and this net result for A:

1/8(0)+1/16(−1)+1/16(1)+1/4(1)+1/8(0)+1/8(−1)+1/8(−1)+1/16(1)+1/16(0)=1/16

That is, under this strategy profile—(1/4,1/2,1/4) for A and (1/2,1/4,1/4) for B—Player A nets 1/16 of a point per round on average. After 100 games, Player A will be up 6.25 points. There’s a big incentive for Player A to switch strategies. So, the strategy profile of (1/3,1/3,1/3) for A and (1/2,1/4,1/4) for B is not a Nash equilibrium, either.

But now let’s consider the pair of strategies (1/3,1/3,1/3) for A and (1/3,1/3,1/3) for B. Here’s the corresponding probability chart:

             B: Rock    B: Paper    B: Scissors
A: Rock        1/9         1/9          1/9
A: Paper       1/9         1/9          1/9
A: Scissors    1/9         1/9          1/9

Symmetry makes quick work of the net result calculation:

1/9(0)+1/9(−1)+1/9(1)+1/9(1)+1/9(0)+1/9(−1)+1/9(−1)+1/9(1)+1/9(0)=0

Again, you and your opponent are playing to draw. But the difference here is that no player has an incentive to change strategies! If Player B were to switch to any imbalanced strategy where one option—say, Rock—were played more than the others, Player A would simply alter their strategy to play Paper more frequently. This would ultimately yield a positive net result for Player A each round. This is precisely what happened when Player A adopted the strategy (1/4,1/2,1/4) against Player B’s (1/2,1/4,1/4) strategy above.

Of course, if Player A switched from (1/3,1/3,1/3) to an imbalanced strategy, Player B could take advantage in a similar manner. So, neither player can improve their results solely by changing their own individual strategy. The game has reached a Nash equilibrium.
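
The equilibrium claim can also be checked mechanically. Here is a small, self-contained Python sketch (again with names of our own choosing): a strategy pair is a Nash equilibrium if neither player can gain by deviating, and it suffices to test pure deviations, since a mixed strategy can never beat the best pure reply it averages over.

```python
from fractions import Fraction as F

MOVES = ("Rock", "Paper", "Scissors")
PAYOFF = {("Rock", "Rock"): 0,      ("Rock", "Paper"): -1,    ("Rock", "Scissors"): 1,
          ("Paper", "Rock"): 1,     ("Paper", "Paper"): 0,    ("Paper", "Scissors"): -1,
          ("Scissors", "Rock"): -1, ("Scissors", "Paper"): 1, ("Scissors", "Scissors"): 0}

def expected(a, b):
    # Player A's average points per round; B's are the negative (zero-sum game).
    return sum(p * q * PAYOFF[(m, n)]
               for p, m in zip(a, MOVES) for q, n in zip(b, MOVES))

def is_nash(a, b):
    """True if neither player can improve by a unilateral pure deviation."""
    pures = [tuple(F(1) if i == k else F(0) for i in range(3)) for k in range(3)]
    a_cant_improve = all(expected(p, b) <= expected(a, b) for p in pures)
    b_cant_improve = all(-expected(a, p) <= -expected(a, b) for p in pures)
    return a_cant_improve and b_cant_improve

third = (F(1, 3), F(1, 3), F(1, 3))
print(is_nash(third, third))                         # True: the equilibrium
print(is_nash(third, (F(1, 2), F(1, 4), F(1, 4))))   # False: A should deviate
```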

The fact that all such games have such equilibria, as Nash proved, is important for several reasons. One of those reasons is that many real-life situations can be modeled as games. Whenever a group of individuals is caught in the tension between personal gain and collective satisfaction—like in a negotiation, or a competition for shared resources—you’ll find strategies being employed and payoffs being evaluated. The ubiquitous nature of this mathematical model is part of the reason Nash’s work has been so impactful.

Another reason is that a Nash equilibrium is, in some sense, a positive outcome for all players. When reached, no individual can do better by changing their own strategy. There might exist better collective outcomes that could be reached if all players acted in perfect cooperation, but if all you can control is yourself, ending up at a Nash equilibrium is the best you can reasonably hope to do.

And so, we might hope that “games” like economic incentive packages, tax codes, treaty parameters and network designs will end in Nash equilibria, where individuals, acting in their own interest, all end up with something to be happy about, and systems are stable. But when playing these games, is it reasonable to assume that players will naturally arrive at a Nash equilibrium?

It’s tempting to think so. In our Rock-Paper-Scissors game, we might have guessed right away that neither player could do better than playing completely randomly. But that’s in part because all player preferences are known to all other players: Everyone knows how much everyone else wins and loses for each outcome. But what if preferences were secret and more complex?

Imagine a new game in which Player B scores three points when she defeats Scissors, and one point for any other victory. This would alter the mixed strategy: Player B would play Rock more often, hoping for the triple payoff when Player A chooses Scissors. And while the difference in points wouldn’t directly affect Player A’s payoffs, the resulting change in Player B’s strategy would trigger a new counterstrategy for A.

And if every one of Player B’s payoffs was different, and secret, it would take some time for Player A to figure out what Player B’s strategy was. Many rounds would pass before Player A could get a sense of, say, how often Player B was choosing Rock, in order to figure out how often to choose Paper.

Now imagine there are 100 people playing Rock-Paper-Scissors, each with a different set of secret payoffs, each depending on how many of their 99 opponents they defeat using Rock, Paper or Scissors. How long would it take to calculate just the right frequency of Rock, Paper or Scissors you should play in order to reach an equilibrium point? Probably a long time. Maybe longer than the game will go on. Maybe even longer than the lifetime of the universe!

At the very least, it’s not obvious that even perfectly rational and reflective players, playing good strategies and acting in their own best interests, will end up at equilibrium in this game. This idea lies at the heart of a paper posted online in 2016 that proves there is no uniform approach that, in all games, would lead players to even an approximate Nash equilibrium. This is not to say that perfect players never tend toward equilibrium in games—they often do. It just means that there’s no reason to believe that just because a game is being played by perfect players, equilibrium will be achieved.

When we design a transportation network, we might hope that the players in the game, travelers each seeking the fastest way home, will collectively achieve an equilibrium where nothing is gained by taking a different route. We might hope that the invisible hand of John Nash will guide them so that their competing and cooperating interests—to take the shortest possible route yet avoid creating traffic jams—produce an equilibrium.

But our increasingly complex game of Rock-Paper-Scissors shows why such hopes may be misplaced. The invisible hand may guide some games, but others may resist its hold, trapping players in a never-ending competition for gains forever just out of reach.

Exercises

  1. Suppose Player B plays the mixed strategy (1/2,1/2,0). What mixed strategy should A play to maximize wins in the long run?
  2. Suppose Player B plays the mixed strategy (1/6,2/6,3/6). What mixed strategy should A play to maximize wins in the long run?
  3. How might the dynamics of the game change if both players are awarded a point for a tie?
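
For readers who want to experiment with these exercises, here is a small Python helper in the same spirit as the sketches above (the names are ours): it scores each of Player A’s pure replies against a given mix for B, which is enough to find a payoff-maximizing strategy in the long run.

```python
from fractions import Fraction as F

MOVES = ("Rock", "Paper", "Scissors")
PAYOFF = {("Rock", "Rock"): 0,      ("Rock", "Paper"): -1,    ("Rock", "Scissors"): 1,
          ("Paper", "Rock"): 1,     ("Paper", "Paper"): 0,    ("Paper", "Scissors"): -1,
          ("Scissors", "Rock"): -1, ("Scissors", "Paper"): 1, ("Scissors", "Scissors"): 0}

def pure_payoffs(b):
    """Player A's average points per round for each pure reply to B's mix b."""
    return {m: sum(q * PAYOFF[(m, n)] for q, n in zip(b, MOVES)) for m in MOVES}

# Exercise 1: B plays (1/2, 1/2, 0).
print(pure_payoffs((F(1, 2), F(1, 2), F(0))))
# Rock: -1/2, Paper: 1/2, Scissors: 0 -- so always playing Paper does best.
```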

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/why-winning-in-rock-paper-scissors-isnt-everything/

Whisper From the First Stars Sets Off Loud Dark Matter Debate

The news about the first stars in the universe always seemed a little off. Last July, Rennan Barkana, a cosmologist at Tel Aviv University, received an email from one of his longtime collaborators, Judd Bowman. Bowman leads a small group of five astronomers who built and deployed a radio telescope in remote western Australia. Its goal: to find the whisper of the first stars. Bowman and his team had picked up a signal that didn’t quite make sense. He asked Barkana to help him think through what could possibly be going on.

For years, as radio telescopes scanned the sky, astronomers have hoped to glimpse signs of the first stars in the universe. Those objects are too faint and, at over 13 billion light-years away, too distant to be picked up by ordinary telescopes. Instead, astronomers search for the stars’ effects on the surrounding gas. Bowman’s instrument, like the others involved in the search, attempts to pick out a particular dip in radio waves coming from the distant universe.

The measurement is exceedingly difficult to make, since the potential signal can get swamped not only by the myriad radio sources of modern society—one reason the experiment is deep in the Australian outback—but by nearby cosmic sources such as our own Milky Way galaxy. Still, after years of methodical work, Bowman and his colleagues with the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) concluded not only that they had found the first stars, but that they had found evidence that the young cosmos was significantly colder than anyone had thought.

Barkana was skeptical, however. “On the one hand, it looks like a very solid measurement,” he said. “On the other hand, it is something very surprising.”

What could make the early universe appear cold? Barkana thought through the possibilities and realized that it could be a consequence of the presence of dark matter—the mysterious substance that pervades the universe yet escapes every attempt to understand what it is or how it works. He found that the EDGES result could be interpreted as a completely new way that ordinary material might be interacting with dark matter.

The EDGES group announced the details of this signal and the detection of the first stars in the March 1 issue of Nature. Accompanying their article was Barkana’s paper describing his novel dark matter idea. News outlets worldwide carried news of the discovery. “Astronomers Glimpse Cosmic Dawn, When the Stars Switched On,” the Associated Press reported, adding that “they may have detected mysterious dark matter at work, too.”

Yet in the weeks since the announcement, cosmologists around the world have expressed a mix of excitement and skepticism. Researchers who saw the EDGES result for the first time when it appeared in Nature have done their own analysis, showing that even if some kind of dark matter is responsible, as Barkana suggested, no more than a small fraction of it could be involved in producing the effect. (Barkana himself has been involved in some of these studies.) And experimental astronomers have said that while they respect the EDGES team and the careful work that they’ve done, such a measurement is too difficult to trust entirely. “If this weren’t a groundbreaking discovery, it would be a lot easier for people to just believe the results,” said Daniel Price, an astronomer at Swinburne University of Technology in Australia who works on similar experiments. “Great claims require great evidence.”

This message has echoed through the cosmology community since those Nature papers appeared.

The Source of a Whisper

The day after Bowman contacted Barkana to tell him about the surprising EDGES signal, Barkana drove with his family to his in-laws’ house. During the drive, he said, he contemplated this signal, telling his wife about the interesting puzzle Bowman had handed him.

Bowman and the EDGES team had been probing the neutral hydrogen gas that filled the universe during the first few hundred million years after the Big Bang. This gas tended to absorb ambient light, leading to what cosmologists poetically call the universe’s “dark ages.” Although the cosmos was filled with a diffuse ambient light from the cosmic microwave background (CMB)—the so-called afterglow of the Big Bang—this neutral gas absorbed it at specific wavelengths. EDGES searched for this absorption pattern.

As stars began to turn on in the universe, their energy would have heated the gas. Eventually the gas reached a high enough temperature that it no longer absorbed CMB radiation. The absorption signal disappeared, and the dark ages ended.

The absorption signal as measured by EDGES contains an immense amount of information. As the absorption pattern traveled across the expanding universe, the signal stretched. Astronomers can use that stretch to infer how long the signal has been traveling, and thus, when the first stars flicked on. In addition, the width of the detected signal corresponds to the amount of time that the gas was absorbing the CMB light. And the intensity of the signal—how much light was absorbed—relates to the temperature of the gas and the amount of light that was floating around at the time.
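
For a sense of the arithmetic: the absorbed light is the 21-centimeter line of neutral hydrogen, with a rest frequency near 1420 MHz, and the “stretch” is the cosmological redshift. The Python sketch below uses 78 MHz, roughly where the EDGES dip was reported, as an illustrative observed frequency; the age quoted in the final comment assumes standard cosmological parameters.

```python
# Infer the redshift of the 21-cm absorption signal from its observed frequency.
F_REST_MHZ = 1420.4   # rest frequency of the neutral-hydrogen line, in MHz
f_obs_mhz = 78.0      # illustrative center of the absorption dip, in MHz

z = F_REST_MHZ / f_obs_mhz - 1   # redshift: how much the signal has stretched
print(f"redshift z = {z:.1f}")   # ~17: wavelengths stretched about 18-fold

# Mapping z to an age requires a cosmological model; for standard parameters,
# z ~ 17 corresponds to roughly 180 million years after the Big Bang.
```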

Many researchers find this final characteristic the most intriguing. “It’s a much stronger absorption than we had thought possible,” said Steven Furlanetto, a cosmologist at the University of California, Los Angeles, who has examined what the EDGES data would mean for the formation of the earliest galaxies.

The most obvious explanation for such a strong signal is that the neutral gas was colder than predicted, which would have allowed it to absorb even more background radiation. But how could the universe have unexpectedly cooled? “We’re talking about a period of time when stars are beginning to form,” Barkana said—the darkness before the dawn. “So everything is as cold as it can be. The question is: What could be even colder?”

As he parked at his in-laws’ house that July day, an idea came to him: Could it be dark matter? After all, dark matter doesn’t seem to interact with normal matter via the electromagnetic force — it doesn’t emit or absorb heat. So dark matter could have started out colder or been cooling much longer than normal matter at the beginning of the universe, and then continued to cool.

Over the next week, he worked on a theory of how a hypothetical form of dark matter called “millicharged” dark matter could have been responsible. Millicharged dark matter could interact with ordinary matter, but only very weakly. Intergalactic gas might then have cooled by “basically dumping heat into the dark matter sector where you can’t see it anymore,” Furlanetto explained. Barkana wrote the idea up and sent it off to Nature.

Then he began to work through the idea in more detail with several colleagues. Others did as well. As soon as the Nature papers appeared, several groups of theoretical cosmologists started to compare the behavior of this unexpected type of dark matter to what we know about the universe—the decades’ worth of CMB observations, data from supernova explosions, the results of collisions at particle accelerators like the Large Hadron Collider, and astronomers’ understanding of how the Big Bang produced hydrogen, helium and lithium during the universe’s first few minutes. If millicharged dark matter was out there, did all these other observations make sense?

Rennan Barkana, a cosmologist at Tel Aviv University, contributed the idea that a form of dark matter might explain why the early universe looked so cool in the EDGES observations. But he has also stayed skeptical about the findings.
Rennan Barkana

They did not. More precisely, these researchers found that millicharged dark matter can only make up a small fraction of the total dark matter in the universe—too small a fraction to create the observed dip in the EDGES data. “You cannot have 100 percent of dark matter interacting,” said Anastasia Fialkov, an astrophysicist at Harvard University and the first author of a paper submitted to Physical Review Letters. Another paper that Barkana and colleagues posted on the preprint site arxiv.org concludes that this dark matter has an even smaller presence: millicharged particles could make up no more than 1 to 2 percent of the total dark matter content. Independent groups have reached similar conclusions.

If it’s not millicharged dark matter, then what might explain EDGES’ stronger-than-expected absorption signal? Another possibility is that extra background light existed during the cosmic dawn. If there were more radio waves than expected in the early universe, then “the absorption would appear stronger even though the gas itself is unchanged,” Furlanetto said. Perhaps the CMB wasn’t the only ambient light during the toddler years of our universe.

This idea doesn’t come entirely out of left field. In 2011, a balloon-lofted experiment called ARCADE 2 reported a background radio signal that was stronger than would have been expected from the CMB alone. Scientists haven’t yet been able to explain this result.

After the EDGES detection, a few groups of astronomers revisited these data. One group looked at black holes as a possible explanation, since black holes are the brightest extragalactic radio sources in the sky. Yet black holes also produce other forms of radiation, like X-rays, that haven’t been seen in the early universe. Because of this, astronomers remain skeptical that black holes are the answer.

Is It Real?

Perhaps the simplest explanation is that the data are just wrong. The measurement is incredibly difficult, after all. Yet by all accounts the EDGES team took exceptional care to cross-check all their data—Price called the experiment “exquisite”—which means that if there is a flaw in the data, it will be exceptionally hard to find.

This antenna for EDGES was deployed in 2015 at a remote location in western Australia where it would experience little radio interference.

The EDGES team deployed their radio antenna in September 2015. By December, they were seeing a signal, said Raul Monsalve, an experimental cosmologist at the University of Colorado, Boulder, and a member of the EDGES team. “We became suspicious immediately, because it was stronger than expected.”

And so they began what became a marathon of due diligence. They built a similar antenna and installed it about 150 meters away from the first one. They rotated the antennas to rule out environmental and instrumental effects. They used separate calibration and analysis techniques. “We made many, many kinds of cuts and comparisons and cross-checks to try to rule out the signal as coming from the environment or from some other source,” Monsalve said. “We didn’t believe ourselves at the beginning. We thought it was very suspicious for the signal to be this strong, and that’s why we took so long to publish.” They are convinced that they’re seeing a signal, and that the signal is unexpectedly strong.

“I do believe the result,” Price said, but he emphasized that testing for systematic errors in the data is still needed. He mentioned one area where the experiment could have overlooked a potential error: Any antenna’s sensitivity varies depending on the frequency it’s observing and the direction from which a signal is coming. Astronomers can account for these imperfections by either measuring them or modeling them. Bowman and colleagues chose to model them. Price suggests that the EDGES team members instead find a way to measure them and then reanalyze their signal with that measured effect taken into account.

The next step is for a second radio detector to see this signal, which would imply it’s from the sky and not from the EDGES antenna or model. Scientists with the Large-Aperture Experiment to Detect the Dark Ages (LEDA) project, located in California’s Owens Valley, are currently analyzing that instrument’s data. Then researchers will need to confirm that the signal is actually cosmological and not produced by our own Milky Way. This is not a simple problem. Our galaxy’s radio emission can be thousands of times stronger than cosmological signals.

On the whole, researchers regard both the EDGES measurement itself and its interpretation with a healthy skepticism, as Barkana and many others have put it. Scientists should be skeptical of a first-of-its-kind measurement—that’s how they ensure that the observation is sound, the analysis was completed accurately, and the experiment wasn’t in error. This is, ultimately, how science is supposed to work. “We ask the questions, we investigate, we exclude every wrong possibility,” said Tomer Volansky, a particle physicist at Tel Aviv University who collaborated with Barkana on one of his follow-up analyses. “We’re after the truth. If the truth is that it’s not dark matter, then it’s not dark matter.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/whisper-from-the-first-stars-sets-off-loud-dark-matter-debate/

In Search of God’s Mathematical Perfect Proofs

Paul Erdős, the famously eccentric, peripatetic and prolific 20th-century mathematician, was fond of the idea that God has a celestial volume containing the perfect proof of every mathematical theorem. “This one is from The Book,” he would declare when he wanted to bestow his highest praise on a beautiful proof.

Never mind that Erdős doubted God’s very existence. “You don’t have to believe in God, but you should believe in The Book,” Erdős explained to other mathematicians.

In 1994, during conversations with Erdős at the Oberwolfach Research Institute for Mathematics in Germany, the mathematician Martin Aigner came up with an idea: Why not actually try to make God’s Book—or at least an earthly shadow of it? Aigner enlisted fellow mathematician Günter Ziegler, and the two started collecting examples of exceptionally beautiful proofs, with enthusiastic contributions from Erdős himself. The resulting volume, Proofs From THE BOOK, was published in 1998, sadly too late for Erdős to see it—he had died about two years after the project commenced, at age 83.

“Many of the proofs trace directly back to him, or were initiated by his supreme insight in asking the right question or in making the right conjecture,” Aigner and Ziegler, who are now both professors at the Free University of Berlin, write in the preface.

The book, which has been called “a glimpse of mathematical heaven,” presents proofs of dozens of theorems from number theory, geometry, analysis, combinatorics and graph theory. Over the two decades since it first appeared, it has gone through five editions, each with new proofs added, and has been translated into 13 languages.

In January, Ziegler traveled to San Diego for the Joint Mathematics Meetings, where he received (on his and Aigner’s behalf) the 2018 Steele Prize for Mathematical Exposition. “The density of elegant ideas per page [in the book] is extraordinarily high,” the prize citation reads.

Quanta Magazine sat down with Ziegler at the meeting to discuss beautiful (and ugly) mathematics. The interview has been edited and condensed for clarity.

You’ve said that you and Martin Aigner have a similar sense of which proofs are worthy of inclusion in THE BOOK. What goes into your aesthetic?

We’ve always shied away from trying to define what is a perfect proof. And I think that’s not only shyness, but actually, there is no definition and no uniform criterion. Of course, there are all these components of a beautiful proof. It can’t be too long; it has to be clear; there has to be a special idea; it might connect things that usually one wouldn’t think of as having any connection.

For some theorems, there are different perfect proofs for different types of readers. I mean, what is a proof? A proof, in the end, is something that convinces the reader of things being true. And whether the proof is understandable and beautiful depends not only on the proof but also on the reader: What do you know? What do you like? What do you find obvious?

You noted in the fifth edition that mathematicians have come up with at least 196 different proofs of the “quadratic reciprocity” theorem (concerning which numbers in “clock” arithmetic are perfect squares) and nearly 100 proofs of the fundamental theorem of algebra (concerning solutions to polynomial equations). Why do you think mathematicians keep devising new proofs for certain theorems, when they already know the theorems are true?

These are things that are central in mathematics, so it’s important to understand them from many different angles. There are theorems that have several genuinely different proofs, and each proof tells you something different about the theorem and the structures. So, it’s really valuable to explore these proofs to understand how you can go beyond the original statement of the theorem.

An example comes to mind—which is not in our book but is very fundamental—Steinitz’s theorem for polyhedra. This says that if you have a planar graph (a network of vertices and edges in the plane) that stays connected if you remove one or two vertices, then there is a convex polyhedron that has exactly the same connectivity pattern. This is a theorem that has three entirely different types of proof—the “Steinitz-type” proof, the “rubber band” proof and the “circle packing” proof. And each of these three has variations.

Any of the Steinitz-type proofs will tell you not only that there is a polyhedron but also that there’s a polyhedron with integers for the coordinates of the vertices. And the circle packing proof tells you that there’s a polyhedron that has all its edges tangent to a sphere. You don’t get that from the Steinitz-type proof, or the other way around—the circle packing proof will not prove that you can do it with integer coordinates. So, having several proofs leads you to several ways to understand the situation beyond the original basic theorem.

You’ve mentioned the element of surprise as one feature you look for in a BOOK proof. And some great proofs do leave one wondering, “How did anyone ever come up with this?” But there are other proofs that have a feeling of inevitability.

I think it always depends on what you know and where you come from.

An example is László Lovász’s proof for the Kneser conjecture, which I think we put in the fourth edition. The Kneser conjecture was about a certain type of graph you can construct from the k-element subsets of an n-element set—you construct this graph where the k-element subsets are the vertices, and two k-element sets are connected by an edge if they don’t have any elements in common. And Kneser had asked, in 1955 or ’56, how many colors are required to color all the vertices if vertices that are connected must be different colors.

It’s rather easy to show that you can color this graph with n − 2k + 2 colors, but the problem was to show that fewer colors won’t do it. And so, it’s a graph coloring problem, but Lovász, in 1978, gave a proof that was a technical tour de force, that used a topological theorem, the Borsuk-Ulam theorem. And it was an amazing surprise—why should this topological tool prove a graph theoretic thing?
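
The “easy” direction is concrete enough to verify by computer. Here is a short Python sketch following the standard construction (our own code, not Lovász’s proof): color a k-set by its minimum element when that minimum is at most n − 2k + 1, and give every remaining set, all of which pairwise intersect, one final shared color, for n − 2k + 2 colors in total.

```python
from itertools import combinations

def kneser_coloring(n, k):
    """Properly color the Kneser graph K(n, k) with n - 2k + 2 colors.
    Vertices: k-element subsets of {1..n}; edges join disjoint subsets."""
    colors = {}
    for s in combinations(range(1, n + 1), k):
        # Sets with a small minimum get their minimum as a color; the rest
        # fit inside a (2k-1)-element tail, so any two of them intersect
        # and can safely share the last color.
        colors[s] = min(s) if min(s) <= n - 2 * k + 1 else n - 2 * k + 2
    return colors

def is_proper(n, k, colors):
    verts = list(combinations(range(1, n + 1), k))
    return all(colors[s] != colors[t]
               for s, t in combinations(verts, 2)
               if set(s).isdisjoint(t))   # disjoint = adjacent in K(n, k)

n, k = 5, 2   # K(5, 2) is the Petersen graph
coloring = kneser_coloring(n, k)
print(len(set(coloring.values())), is_proper(n, k, coloring))  # 3 True
```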

This turned into a whole industry of using topological tools to prove discrete mathematics theorems. And now it seems inevitable that you use these, and very natural and straightforward. It’s become routine, in a certain sense. But then, I think, it’s still valuable not to forget the original surprise.

Brevity is one of your other criteria for a BOOK proof. Could there be a hundred-page proof in God’s Book?

I think there could be, but no human will ever find it.

We have these results from logic that say that there are theorems that are true and that have a proof, but they don’t have a short proof. It’s a logic statement. And so, why shouldn’t there be a proof in God’s Book that goes over a hundred pages, and on each of these hundred pages, makes a brilliant new observation—and so, in that sense, it’s really a proof from The Book?

On the other hand, we are always happy if we manage to prove something with one surprising idea, and proofs with two surprising ideas are even more magical but still harder to find. So a proof that is a hundred pages long and has a hundred surprising ideas—how should a human ever find it?

But I don’t know how the experts judge Andrew Wiles’ proof of Fermat’s Last Theorem. This is a hundred pages, or many hundred pages, depending on how much number theory you assume when you start. And my understanding is that there are lots of beautiful observations and ideas in there. Perhaps Wiles’ proof, with a few simplifications, is God’s proof for Fermat’s Last Theorem.

But it’s not a proof for the readers of our book, because it’s just beyond the scope, both in technical difficulty and layers of theory. By definition, a proof that eats more than 10 pages cannot be a proof for our book. God—if he exists—has more patience.

Paul Erdős has been called a “priest of mathematics.” He traveled across the globe—often with no settled address—to spread the gospel of mathematics, so to speak. And he used these religious metaphors to talk about mathematical beauty.

Paul Erdős referred to his own lectures as “preaching.” But he was an atheist. He called God the “Supreme Fascist.” I think it was more important to him to be funny and to tell stories—he didn’t preach anything religious. So, this story of God and his book was part of his storytelling routine.

When you experience a beautiful proof, does it feel somehow spiritual?

It’s a powerful feeling. I remember these moments of beauty and excitement. And there’s a very powerful type of happiness that comes from it.

If I were a religious person, I would thank God for all this inspiration that I’m blessed to experience. As I’m not religious, for me, this God’s Book thing is a powerful story.

There’s a famous quote from the mathematician G. H. Hardy that says, “There is no permanent place in the world for ugly mathematics.” But ugly mathematics still has a role, right?

You know, the first step is to establish the theorem, so that you can say, “I worked hard. I got the proof. It’s 20 pages. It’s ugly. It’s lots of calculations, but it’s correct and it’s complete and I’m proud of it.”

If the result is interesting, then come the people who simplify it and put in extra ideas and make it more and more elegant and beautiful. And in the end you have, in some sense, the Book proof.

If you look at Lovász’s proof for the Kneser conjecture, people don’t read his paper anymore. It’s rather ugly, because Lovász didn’t know the topological tools at the time, so he had to reinvent a lot of things and put them together. And immediately after that, Imre Bárány had a second proof, which also used the Borsuk-Ulam theorem, and that was, I think, more elegant and more straightforward.

To do these short and surprising proofs, you need a lot of confidence. And one way to get the confidence is if you know the thing is true. If you know that something is true because so-and-so proved it, then you might also dare to say, “What would be the really nice and short and elegant way to establish this?” So, I think, in that sense, the ugly proofs have their role.

You’re currently preparing a sixth edition of Proofs From THE BOOK. Will there be more after that?

The third edition was perhaps the first time that we claimed that that’s it, that’s the final one. And, of course, we also claimed this in the preface of the fifth edition, but we’re currently working hard to finish the sixth edition.

When Martin Aigner talked to me about this plan to do the book, the idea was that this might be a nice project, and we’d get done with it, and that’s it. And with, I don’t know how you translate it into English, jugendlicher Leichtsinn—that’s sort of the foolery of someone being young—you think you can just do this book and then it’s done.

But it’s kept us busy from 1994 until now, with new editions and translations. Now Martin has retired, and I’ve just applied to be university president, and I think there will not be time and energy and opportunity to do more. The sixth edition will be the final one.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/in-search-of-gods-mathematical-perfect-proofs/

Brainless Embryos Suggest Bioelectricity Guides Growth

The tiny tadpole embryo looked like a bean. One day old, it didn’t even have a heart yet. The researcher in a white coat and gloves who hovered over it made a precise surgical incision where its head would form. Moments later, the brain was gone, but the embryo was still alive.

The brief procedure took Celia Herrera-Rincon, a neuroscience postdoc at the Allen Discovery Center at Tufts University, back to the country house in Spain where she had grown up, in the mountains near Madrid. When she was 11 years old, while walking her dogs in the woods, she found a snake, Vipera latastei. It was beautiful but dead. “I realized I wanted to see what was inside the head,” she recalled. She performed her first “lab test” using kitchen knives and tweezers, and she has been fascinated by the many shapes and evolutionary morphologies of the brain ever since. Her collection now holds about 1,000 brains from all kinds of creatures.

This time, however, she was not interested in the brain itself, but in how an African clawed frog would develop without one. She and her supervisor, Michael Levin, a software engineer turned developmental biologist, are investigating whether the brain and nervous system play a crucial role in laying out the patterns that dictate the shapes and identities of emerging organs, limbs and other structures.

For the past 65 years, the focus of developmental biology has been on DNA as the carrier of biological information. Researchers have typically assumed that genetic expression patterns alone are enough to determine embryonic development.

To Levin, however, that explanation is unsatisfying. “Where does shape come from? What makes an elephant different from a snake?” he asked. DNA can make proteins inside cells, he said, but “there is nothing in the genome that directly specifies anatomy.” To develop properly, he maintains, tissues need spatial cues that must come from other sources in the embryo. At least some of that guidance, he and his team believe, is electrical.

In recent years, by working on tadpoles and other simple creatures, Levin’s laboratory has amassed evidence that the embryo is molded by bioelectrical signals, particularly ones that emanate from the young brain long before it is even a functional organ. Those results, if replicated in other organisms, may change our understanding of the roles of electrical phenomena and the nervous system in development, and perhaps more widely in biology.

“Levin’s findings will shake some rigid orthodoxy in the field,” said Sui Huang, a molecular biologist at the Institute for Systems Biology. If Levin’s work holds up, Huang continued, “I think many developmental biologists will be stunned to see that the construction of the body plan is not due to local regulation of cells … but is centrally orchestrated by the brain.”

Bioelectrical Influences in Development

The Spanish neuroscientist and Nobel laureate Santiago Ramón y Cajal once called the brain and neurons, the electrically active cells that process and transmit nerve signals, the “butterflies of the soul.” The brain is a center for information processing, memory, decision making and behavior, and electricity figures into its performance of all of those activities.

But it’s not just the brain that uses bioelectric signaling—the whole body does. All cell membranes have embedded ion channels, protein pores that act as pathways for charged molecules, or ions. Differences between the number of ions inside and outside a cell result in an electric gradient—the cell’s resting potential. Vary this potential by opening or blocking the ion channels, and you change the signals transmitted to, from and among the cells all around. Neurons do this as well, but even faster: To communicate among themselves, they use molecules called neurotransmitters that are released at synapses in response to voltage spikes, and they send ultra-rapid electrical pulses over long distances along their axons, encoding information in the pulses’ pattern, to control muscle activity.
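
The step from a concentration gradient to a voltage is standard textbook electrophysiology. As a worked example, here is a minimal Python sketch of the Nernst equation, which gives the equilibrium potential for a single ion species; the potassium concentrations below are typical textbook values for a mammalian cell, used purely for illustration (a real resting potential reflects several ion species at once).

```python
from math import log

def nernst_mv(c_out_mm, c_in_mm, z=1, temp_c=37.0):
    """Equilibrium potential in millivolts: E = (RT / zF) * ln([out]/[in])."""
    R = 8.314        # gas constant, J / (mol K)
    F = 96485.0      # Faraday constant, C / mol
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (z * F) * log(c_out_mm / c_in_mm)

# Potassium: roughly 5 mM outside a typical mammalian cell, 140 mM inside.
print(f"E_K = {nernst_mv(5, 140):+.0f} mV")   # about -89 mV
```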

Levin has thought about hacking networks of neurons since the mid-1980s, when he was a high school student in the suburbs near Boston, writing software for pocket money. One day, while browsing a small bookstore in Vancouver at Expo 86 with his father, he spotted a volume called The Body Electric, by Robert O. Becker and Gary Selden. He learned that scientists had been investigating bioelectricity for centuries, ever since Luigi Galvani discovered in the 1780s that nerves are animated by what he called “animal electricity.”

However, as Levin continued to read up on the subject, he realized that, even though the brain uses electricity for information processing, no one seemed to be seriously investigating the role of bioelectricity in carrying information about a body’s development. Wouldn’t it be cool, he thought, if we could comprehend “how the tissues process information and what tissues were ‘thinking about’ before they evolved nervous systems and brains?”

He started digging deeper and ended up getting a biology doctorate at Harvard University in morphogenesis—the study of the development of shapes in living things. He worked in the tradition of scientists like Emil du Bois-Reymond, a 19th-century German physician who discovered the action potential of nerves. In the 1930s and ’40s, the American biologists Harold Burr and Elmer Lund measured electric properties of various organisms during their embryonic development and studied connections between bioelectricity and the shapes animals take. They were not able to prove a link, but they were moving in the right direction, Levin said.

Before Genes Reigned Supreme

The work of Burr and Lund occurred during a time of widespread interest in embryology. Even the English mathematician Alan Turing, famed for cracking the Enigma code, was fascinated by embryology. In 1952 he published a paper suggesting that body patterns like pigmented spots and zebra stripes arise from the chemical reactions of diffusing substances, which he called morphogens.

"This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration."

Masayuki Yamashita

But organic explanations like morphogens and bioelectricity didn’t stay in the limelight for long. In 1953, James Watson and Francis Crick published the double helical structure of DNA, and in the decades since “the focus of developmental biology has been on DNA as the carrier of biological information, with cells thought to follow their own internal genetic programs, prompted by cues from their local environment and neighboring cells,” Huang said.

The rationale, according to Richard Nuccitelli, chief science officer at Pulse Biosciences and a former professor of molecular biology at the University of California, Davis, was that “since DNA is what is inherited, information stored in the genes must specify all that is needed to develop.” Tissues are told how to develop at the local level by neighboring tissues, it was thought, and each region patterns itself from information in the genomes of its cells.

The extreme form of this view is “to explain everything by saying ‘it is in the genes,’ or DNA, and this trend has been reinforced by the increasingly powerful and affordable DNA sequencing technologies,” Huang said. “But we need to zoom out: Before molecular biology imposed our myopic tunnel vision, biologists were much more open to organism-level principles.”

The tide now seems to be turning, according to Herrera-Rincon and others. “It’s too simplistic to consider the genome as the only source of biological information,” she said. Researchers continue to study morphogens as a source of developmental information in the nervous system, for example. Last November, Levin and Chris Fields, an independent scientist who works in the area where biology, physics and computing overlap, published a paper arguing that cells’ cytoplasm, cytoskeleton and both internal and external membranes also encode important patterning data—and serve as systems of inheritance alongside DNA.

And, crucially, bioelectricity has made a comeback as well. In the 1980s and ’90s, Nuccitelli, along with the late Lionel Jaffe at the Marine Biological Laboratory, Colin McCaig at the University of Aberdeen, and others, used applied electric fields to show that many cells are sensitive to bioelectric signals and that electricity can induce limb regeneration in nonregenerative species.

According to Masayuki Yamashita of the International University of Health and Welfare in Japan, many researchers forget that every living cell, not just neurons, generates electric potentials across the cell membrane. “This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration,” he said.

However, no one was really sure why or how this bioelectric signaling worked, said Levin, and most still believe that the flow of information is very local. “Applied electricity in earlier experiments directly interacts with something in cells, triggering their responses,” he said. But what it was interacting with and how the responses were triggered were mysteries.

That’s what led Levin and his colleagues to start tinkering with the resting potential of cells. By changing the voltage of cells in flatworms, over the last few years they produced worms with two heads, or with tails in unexpected places. In tadpoles, they reprogrammed the identity of large groups of cells at the level of entire organs, making frogs with extra legs and changing gut tissue into eyes—simply by hacking the local bioelectric activity that provides patterning information.

And because the brain and nervous system are so conspicuously active electrically, the researchers also began to probe their involvement in long-distance patterns of bioelectric information affecting development. In 2015, Levin, his postdoc Vaibhav Pai, and other collaborators showed experimentally that bioelectric signals from the body shape the development and patterning of the brain in its earliest stages. By changing the resting potential in the cells of tadpoles as far from the head as the gut, they appeared to disrupt the body’s “blueprint” for brain development. The resulting tadpoles’ brains were smaller or even nonexistent, and brain tissue grew where it shouldn’t.

Unlike previous experiments with applied electricity that simply provided directional cues to cells, “in our work, we know what we have modified—resting potential—and we know how it triggers responses: by changing how small signaling molecules enter and leave cells,” Levin said. The right electrical potential lets neurotransmitters go in and out of voltage-powered gates (transporters) in the membrane. Once in, they can trigger specific receptors and initiate further cellular activity, allowing researchers to reprogram identity at the level of entire organs.


This work also showed that bioelectricity works over long distances, mediated by the neurotransmitter serotonin, Levin said. (Later experiments implicated butyrate, another small signaling molecule, as well.) The researchers started by altering the voltage of cells near the brain, but then they went farther and farther out, “because our data from the prior papers showed that tumors could be controlled by electric properties of cells very far away,” he said. “We showed that cells at a distance mattered for brain development too.”

Then Levin and his colleagues decided to flip the experiment. Might the brain hold, if not an entire blueprint, then at least some patterning information for the rest of the body, Levin asked—and if so, might the nervous system disseminate this information bioelectrically during the earliest stages of a body’s development? He invited Herrera-Rincon to get her scalpel ready.

Making Up for a Missing Brain

Herrera-Rincon’s brainless Xenopus laevis tadpoles grew, but within just a few days they all developed highly characteristic defects—and not just near the brain, but as far away as the very end of their tails. Their muscle fibers were also shorter and their nervous systems, especially the peripheral nerves, were growing chaotically. It’s not surprising that nervous system abnormalities that impair movement can affect a developing body. But according to Levin, the changes seen in their experiment showed that the brain helps to shape the body’s development well before the nervous system is even fully developed, and long before any movement starts.

The body of a tadpole normally develops with a predictable structure (A). Removing a tadpole’s brain early in development, however, leads to abnormalities in tissues far from the head (B).

That such defects could be seen so early in the development of the tadpoles was intriguing, said Gil Carvalho, a neuroscientist at the University of Southern California. “An intense dialogue between the nervous system and the body is something we see very prominently post-development, of course,” he said. Yet the new data “show that this cross-talk starts from the very beginning. It’s a window into the inception of the brain-body dialogue, which is so central to most vertebrate life as we know it, and it’s quite beautiful.” The results also raise the possibility that these neurotransmitters may be acting at a distance, he added—by diffusing through the extracellular space, or going from cell to cell in relay fashion, after they have been triggered by a cell’s voltage changes.

Herrera-Rincon and the rest of the team didn’t stop there. They wanted to see whether they could “rescue” the developing body from these defects by using bioelectricity to mimic the effect of a brain. They decided to express a specific ion channel called HCN2, which acts differently in various cells but is sensitive to their resting potential. Levin likens the ion channel’s effect to a sharpening filter in photo-editing software, in that “it can strengthen voltage differences between adjacent tissues that help you maintain correct boundaries. It really strengthens the abilities of the embryos to set up the correct boundaries for where tissues are supposed to go.”

To make embryos express it, the researchers injected messenger RNA for HCN2 into some frog egg cells just a couple of hours after they were fertilized. A day later they removed the embryos’ brains, and over the next few days, the cells of the embryo acquired novel electrical activity from the HCN2 in their membranes.

The scientists found that this procedure rescued the brainless tadpoles from most of the usual defects. Because of the HCN2 it was as if the brain was still present, telling the body how to develop normally. It was amazing, Levin said, “to see how much rescue you can get just from very simple expression of this channel.” It was also, he added, the first clear evidence that the brain controls the development of the embryo via bioelectric cues.

As with Levin’s previous experiments with bioelectricity and regeneration, many biologists and neuroscientists hailed the findings, calling them “refreshing” and “novel.” “One cannot say that this is really a step forward because this work veers off the common path,” Huang said. But a single experiment with tadpoles’ brains is not enough, he added — it’s crucial to repeat the experiment in other organisms, including mammals, for the findings “to be considered an advance in a field and establish generality.” Still, the results open “an entire new domain of investigation and new way of thinking,” he said.

Experiments on tadpoles reveal the influence of the immature brain on other developing tissues, which appears to be electrical, according to Levin and his colleagues. Photo A shows the appearance of normal muscle in young tadpoles. In tadpoles that lack brains, the muscles fail to develop the correct form (B). But if the cells of brainless tadpoles are made to express ion channels that can restore the right voltage to the cells, the muscles develop more normally (C).
Celia Herrera-Rincon and Michael Levin

Levin’s research demonstrates that the nervous system plays a much more important role in how organisms build themselves than previously thought, said Min Zhao, a biologist at the University of California, Davis, and an expert on the biomedical application and molecular biophysics of electric-field effects in living tissues. Despite earlier experimental and clinical evidence, “this paper is the first one to demonstrate convincingly that this also happens in [the] developing embryo.”

“The results of Mike’s lab abolish the frontier, by demonstrating that electrical signaling from the central nervous system shapes early development,” said Olivier Soriani of the Institut de Biologie de Valrose CNRS. “The bioelectrical activity can now be considered as a new type of input encoding organ patterning, allowing large range control from the central nervous system.”

Carvalho observed that the work has obvious implications for the treatment and prevention of developmental malformations and birth defects—especially since the findings suggest that interfering with the function of a single neurotransmitter may sometimes be enough to prevent developmental issues. “This indicates that a therapeutic approach to these defects may be, at least in some cases, simpler than anticipated,” he said.

Levin speculates that in the future, we may not need to micromanage multitudes of cell-signaling events; instead, we may be able to manipulate how cells communicate with each other electrically and let them fix various problems.

Another recent experiment hinted at just how significant the developing brain’s bioelectric signal might be. Herrera-Rincon soaked frog embryos in common drugs that are normally harmless and then removed their brains. The drugged, brainless embryos developed severe birth defects, such as crooked tails and spinal cords. According to Levin, these results show that the brain protects the developing body against drugs that otherwise might be dangerous teratogens (compounds that cause birth defects). “The paradigm of thinking about teratogens was that each chemical is either a teratogen or is not,” Levin said. “Now we know that this depends on how the brain is working.”


These findings are impressive, but many questions remain, said Adam Cohen, a biophysicist at Harvard who studies bioelectrical signaling in bacteria. “It is still unclear precisely how the brain is affecting developmental patterning under normal conditions, meaning when the brain is intact.” To get those answers, researchers need to design more targeted experiments; for instance, they could silence specific neurons in the brain or block the release of specific neurotransmitters during development.

Although Levin’s work is gaining recognition, the emphasis he puts on electricity in development is far from universally accepted. Epigenetics and bioelectricity are important, but so are other layers of biology, Zhao said. “They work together to produce the biology we see.” More evidence is needed to shift the paradigm, he added. “We saw some amazing and mind-blowing results in this bioelectricity field, but the fundamental mechanisms are yet to be fully understood. I do not think we are there yet.”

But Nuccitelli says that for many biologists, Levin is on to something. For example, he said, Levin’s success in inducing the growth of misplaced eyes in tadpoles simply by altering the ion flux through the local tissues “is an amazing demonstration of the power of biophysics to control pattern formation.” That Levin’s more than 300 papers have been cited upward of 10,000 times in almost 8,000 articles is also “a great indicator that his work is making a difference.”

The passage of time and the efforts of others carrying on Levin’s work will help his cause, suggested David Stocum, a developmental biologist and dean emeritus at Indiana University-Purdue University Indianapolis. “In my view, his ideas will eventually be shown to be correct and generally accepted as an important part of the framework of developmental biology.”

“We have demonstrated a proof of principle,” Herrera-Rincon said as she finished preparing another petri dish full of beanlike embryos. “Now we are working on understanding the underlying mechanisms, especially the meaning: What is the information content of the brain-specific information, and how much morphogenetic guidance does it provide?” She washed off the scalpel and took off her gloves and lab coat. “I have a million experiments in my mind.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/brainless-embryos-suggest-bioelectricity-guides-growth/

The world saw Stephen Hawking as an oracle. In fact, he was wonderfully human | Philip Ball

Like no other scientist, Hawking was romanticised by the public. His death allows us to see past the fairytale, says science writer Philip Ball

Poignantly, Stephen Hawking’s death at the age of 76 humanises him again. It’s not just that, as a public icon as recognisable as any A-list actor or rock star, he came to seem a permanent fixture of the cultural landscape. It was also that his physical manifestation (the immobile body in the customised wheelchair, the distinctive voice that pronounced with the oracular calm of HAL from 2001: A Space Odyssey) gave him the aura of a different kind of being, notoriously described by the anthropologist Hélène Mialet as “more machine than man.”

He was, of course, not only mortal but precariously so. His survival for more than half a century after his diagnosis with motor neurone disease (which, shortly after his 21st birthday, seemed to give him only a few years to live) is one of the most remarkable feats of determination and sheer medical marvels of our time. Equally astonishing was the life that Hawking wrought from that excruciatingly difficult circumstance. It was not so much a story of survival as a modern fairytale in which he, as the progress of his disease left him increasingly incapacitated, seemed only to grow in stature. He made seminal contributions to physics, wrote bestselling books, appeared in television shows, and commanded attention and awe at his every pronouncement.


This all meant that his science was, to use a zeitgeisty word, performative. To the world at large it was not so much what he said that mattered, but the manner and miracle of its delivery. As his Reith Lectures in 2015 demonstrated, he was not in fact a natural communicator (all those feeling guilty at never having finished A Brief History of Time need not feel so bad, as he was no different from many scientists in struggling to translate complex ideas into simple language). But as I sat in the audience for those lectures (delayed because of Hawking’s faltering health), it felt clearer than ever that there was a ritualistic element to the whole affair. We were there not so much to learn about black holes and cosmology as to pay respects to an important cultural presence.

Without that performance, Hawking the scientist would be destined to become like any other after their death: a name in a citation, Hawking S, Nature volume 248, pages 30-31 (1974). What, then, will endure?

Quite a lot. Hawking’s published work, disconnected from the legend of the man, reveals him to be a physicist of the highest calibre, who will be remembered in particular for some startlingly inventive and imaginative contributions to the field of general relativity: the study of the theory of gravity first proposed by Albert Einstein in 1916. At the same time, it shows that he has no real claim to being Einstein’s successor. The romanticising of Hawking brings, for a scientist, the temptation to want to cut him down to size. The Nobel committee never found that his work quite met the mark, partly, perhaps, because it dealt in ideas that are difficult to verify, applying as they do to objects like black holes that are not easy to investigate. The lack of a Nobel seemed to trouble him; but he was, without question, in with a shout for one.

That 1974 paper in Nature will be one of the most enduring, offering a memorable contribution to our understanding of black holes. These are created when massive objects such as stars undergo runaway collapse under their own gravity to become what general relativity insists is a singularity: a point of infinite density, surrounded by a gravitational field so strong that, within a certain distance called the event horizon, not even light can escape.

The very idea of black holes seemed to many astrophysicists to be an affront to reason until a renaissance of interest in general relativity in the 1960s (which the young Hawking helped to boost) got them taken seriously. Hawking’s paper argued that black holes will radiate energy from close to the event horizon (the origin of the somewhat gauche title of one of the Reith Lectures, “Black holes ain’t as black as they are painted”) and that the process should cause primordial miniature black holes to explode catastrophically. Most physicists now accept the idea of Hawking radiation, although it has yet to be observed.

This work became a central pillar in research that has now linked several key, and hitherto disparate, areas of physical theory: general relativity, quantum mechanics, thermodynamics and information theory. Here Hawking, like any scientist, drew on the ideas of others (and not always graciously, as when he initially disparaged the suggestion of the young physicist Jacob Bekenstein that the surface area of a black hole’s event horizon is related to its thermodynamic entropy). Hawking’s recent efforts in this field have scarcely been decisive, but his colleagues were always eager to see what he had to say about it.

Less enduring will be his passion for a “theory of everything,” a notion described in his 1980 lecture in Cambridge when he became Lucasian Professor of Mathematics, the chair once occupied by Isaac Newton. It supplied a neat title for the 2014 biopic, but most physicists have fallen out of love with this ambitious project. That isn’t just because it has proved so difficult, becoming mired in a theoretical quagmire involving speculative ideas such as string theory that are beyond any obvious means of testing. It’s also because many see the idea as meaningless: physical theory is a hierarchy in which laws emerge at each level that can’t be discerned at a more reductive one. Hawking’s enthusiasm for a theory of everything highlights how he didn’t share Einstein’s breadth of vision in science, but focused almost exclusively on one subdiscipline of physics.

His death brings such limits into focus. His pronouncements on the death of philosophy now look naive, ill-informed and hubristic; but plenty of other scientists say such things without having to cope with seeing them carved in stone and pored over. His readiness to speak out on other issues beyond his expertise had mixed results: his sparring with Jeremy Hunt over the NHS was cheering, but his vague musings about space travel, aliens and AI just got in the way of more sober debate.

As The Theory of Everything wasn’t afraid to show, Hawking was human, all too human. It feels something of a relief to be able to grant him that again: to see beyond the tropes, cartoons and cliches and to find the man who lived with great fortitude and good humour inside the oracle that we made of him.

Philip Ball is a science writer. His latest book is The Water Kingdom: A Secret History of China

Read more: https://www.theguardian.com/commentisfree/2018/mar/15/stephen-hawking-oracel-wonderfully-human-scientist

Stephen Hawking, a Physicist Transcending Space and Time, Passes Away at 76

For arguably the most famous physicist on Earth, Stephen Hawking—who died Wednesday in Cambridge at 76 years old—was wrong a lot. He thought, for a while, that black holes destroyed information, which physics says is a no-no. He thought Cygnus X-1, an emitter of X-rays over 6,000 light years away, wouldn’t turn out to be a black hole. (It did.) He thought no one would ever find the Higgs boson, the particle indirectly responsible for the existence of mass in the universe. (Researchers at CERN found it in 2012.)

But Hawking was right a lot, too. He and the physicist Roger Penrose described singularities, mind-bending physical concepts where relativity and quantum mechanics collapse inward on each other—as at the heart of a black hole. It’s the sort of place that no human will ever see first-hand; the event horizon of a black hole smears matter across time and space like cosmic paste. But Hawking’s mind was singular enough to see it, or at least imagine it.

His calculations helped show that as the young universe expanded and grew through inflation, fluctuations at the quantum scale—the smallest possible gradation of matter—became the galaxies we see around us. No human will ever visit another galaxy, and the quantum realm barely waves at us in our technology, but Hawking envisioned them both. And he calculated that black holes could sometimes explode, an image that would vex even the best visual effects wizard.

More than that, he could explain it to the rest of us. Hawking was the Lucasian Chair of Mathematics at Cambridge until his retirement in 2009, the same position held by Isaac Newton, Charles Babbage, and Paul Dirac. But he was also a pre-eminent popularizer of some of the most brain-twisting concepts science has to offer. His 1988 book A Brief History of Time has sold more than 10 million copies. His image—in an electric wheelchair and speaking via a synthesizer because of complications of the degenerative disease amyotrophic lateral sclerosis, delivering nerdy zingers on TV shows like The Big Bang Theory and Star Trek: The Next Generation—defined “scientist” for the latter half of the 20th century perhaps as much as Albert Einstein’s mad hair and German accent did in the first half.

Possibly that’s because in addition to being brilliant, Hawking was funny. Or at least sly. He was a difficult student by his own account. Diagnosed with ALS in 1963 at the age of 21, he thought he’d have only two more years to live. When the disease didn’t progress that fast, Hawking is reported to have said, “I found, to my surprise, that I was enjoying life in the present more than before. I began to make progress with my research.” With his mobility limited by the use of a wheelchair, he sped in it, dangerously. He proved time travel didn't exist by throwing a party for time travelers, but not sending out invitations until the party was over. No one came. People learned about the things he got wrong because he’d bet other scientists—his skepticism that Cygnus X-1 was a black hole meant he owed Kip Thorne of Caltech a subscription to Penthouse. (In fact, as the terms of that bet hint, rumors of mistreatment of women dogged him.)

Hawking became as much a cultural icon as a scientific one. For a time police suspected his second wife and one-time nurse of abusing him; the events became the basis of an episode of Law and Order: Criminal Intent. He played himself on The Simpsons and was depicted on Family Guy and South Park. Eddie Redmayne played Hawking in a biopic.

In recent years he looked away from the depths of the universe and into humanity’s future, joining the technologist Elon Musk in warning against the dangers of intelligent computers. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” Hawking reportedly said at a talk last year. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.” In an interview with WIRED UK, he said: “Someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”

In 2016 he said that he thought humanity only had about 1,000 years left, thanks to AI, climate change, and other (avoidable) disasters. Last year he shrank that horizon by an order of magnitude: 100 years left, he warned, unless we changed our ways.

Hawking was taking an unusual step away from cosmology, and it was easy, perhaps, to dismiss that fear—why would someone who’d help define what a singularity actually was warn people against the pseudo-singularity of Silicon Valley? Maybe Hawking will be as wrong on this one as he was about conservation of information in black holes. But Hawking always did see into realms no one else could—until he described them to the rest of us.


Read more: https://www.wired.com/story/stephen-hawking-a-physicist-transcending-space-and-time-passes-away-at-76/

‘I haven’t achieved much recently’: Albert Einstein’s private fears revealed in sister’s archive

The celebrated scientist frets about fame and his brain going off with age in candid, soon-to-be-auctioned correspondence with his sister, Maja

A glimpse at the private, hidden face of Albert Einstein, including the celebrated scientist’s thoughts on everything from his fears that his best work was behind him to his equivocal feelings about his fame, has been revealed in a cache of letters he wrote to his beloved younger sister, Maja.

The collection, which includes a previously unknown photograph of Einstein as a five-year-old and the only surviving letter written by Einstein to his father, comes from the archive of Maja Winteler-Einstein and her husband Paul Winteler. A mix of letters, postcards and photographs, many of which have not previously been published, the documents range in date from 1897 to 1951.

The only surviving letter from Albert Einstein to his father, estimated to sell for £2,500-£3,500. Photograph: Yves Gerard/Christie’s Images Ltd 2018

“What’s remarkable about them stems from the fact that he had this incredibly close relationship with his sister. It’s quite clear when he’s writing to her, there’s no role-playing at all,” said Thomas Venning at Christie’s, which will auction the letters at the start of May. “He was very conscious of what was expected of him after he became famous, and you don’t get any of that in letters to his sister. He says some things that I’ve never seen him say anywhere else, and I’ve catalogued many hundreds of his letters.”

In 1924, nine years after he completed the general theory of relativity in 1915, Einstein would write to Maja that “scientifically I haven’t achieved much recently; the brain gradually goes off with age, although that’s not so unpleasant. It also means that you’re not so answerable for your later years.” Ten years later, he would write to her: “I am happy in my work, even if in this and in other matters I am starting to feel that the brilliance of younger years is past.”

Venning said he had not seen Einstein admit this anywhere else. “It’s not him playing a role, you can see that thought going through his head. Which is true: if Einstein had died in 1916, his fundamental legacy would have been intact. He carried on working for another 40 years without making any other great breakthroughs. So it’s just an extraordinary moment which we get because of how close their relationship was. He didn’t have to reassure her,” he said.

Tackling topics from his hobbies of sailing and playing the violin, to his difficult relationship with his first wife, the letters are unpublished snapshots of Einstein, his private face, according to Venning. In one from 1935, Einstein makes a rare acknowledgement of his achievements, writing to Maja: “In our main avenues of research in physics we are in a situation of groping in the dark, where each is completely sceptical about what another is pursuing with the highest hopes. One is in a constant state of tension until the end. At least I have the comfort that my main achievements have become part of the foundations of our science.”

“It sounds unusually big-headed for Einstein; he was an incredibly low-key, humble person, always careful not to say anything that sounded too proud. But I think he felt he could say something to Maja,” said Venning.

In 1923, in a letter that Christie’s has valued between £6,000 and £9,000, Einstein writes to Maja of his international fame, telling his sister and her husband that “I am becoming very much loved and even more envied; there’s nothing to be done about it.”

“He’s not rejoicing in it, he’s just sort of accepting it. Einstein was the first scientist to be a world celebrity. Before that it just didn’t really happen to scientists, so he was in this unique position,” said Venning.

“I am becoming very much loved and even more envied; there’s nothing to be done about it” … Einstein to Maja and Paul Winteler, 15 April 1923. Photograph: Christie’s Images Ltd 2018

The shadow cast by the rise of the Nazis in the 1930s, and the strength Einstein drew from his work, is starkly depicted in a letter written to his sister in September 1933. Earlier that year, Einstein had renounced his German citizenship in Antwerp, fearing for his life after the Nazis branded relativity “Jewish science” and publicly denounced him. He took up a role at Princeton University in New Jersey in October; his sister would follow him in 1939.

“What will happen if we come back from Princeton next year? Will we even be able to? What will life be like there? The only unshakeable things are the stars and mathematics,” he wrote.

“This is him facing up to the fact his whole life has changed. He’s going to a country he doesn’t really know. And so his whole world is falling to pieces, and he says this wonderful line,” said Venning.

Christie’s will put the letters on view to the public from 18 to 20 April, and auction the collection online from 2 to 9 May.

Read more: https://www.theguardian.com/books/2018/mar/14/albert-einstein-letters-sister-maja

Elusive Higgs-Like State Created in Exotic Materials

If you want to understand the personality of a material, study its electrons. Table salt forms cubic crystals because its atoms share electrons in that configuration; silver shines because its electrons absorb visible light and reradiate it back. Electron behavior causes nearly all material properties: hardness, conductivity, melting temperature.


Of late, physicists are intrigued by the way huge numbers of electrons can display collective quantum-mechanical behavior. In some materials, a trillion trillion electrons within a crystal can act as a unit, like fire ants clumping into a single mass to survive a flood. Physicists want to understand this collective behavior because of the potential link to exotic properties such as superconductivity, in which electricity can flow without any resistance.

Last year, two independent research groups designed crystals, known as two-dimensional antiferromagnets, whose electrons can collectively imitate the Higgs boson. By precisely studying this behavior, the researchers think they can better understand the physical laws that govern materials—and potentially discover new states of matter. It was the first time that researchers had been able to induce such “Higgs modes” in these materials. “You’re creating a little mini universe,” said David Alan Tennant, a physicist at Oak Ridge National Laboratory who led one of the groups along with Tao Hong, his colleague there.

Both groups induced electrons into Higgs-like activity by pelting their material with neutrons. During these tiny collisions, the electrons’ magnetic moments begin to fluctuate in a patterned way that mathematically resembles the Higgs boson.


The Higgs mode is not simply a mathematical curiosity. When a crystal’s structure permits its electrons to behave this way, the material most likely has other interesting properties, said Bernhard Keimer, a physicist at the Max Planck Institute for Solid State Research who coleads the other group.

That’s because when you get the Higgs mode to appear, the material should be on the brink of a so-called quantum phase transition: its properties are about to change drastically, like a snowball on a sunny spring day. “The Higgs can help you understand the character of the quantum phase transition,” said Subir Sachdev, a physicist at Harvard University. These quantum effects often portend bizarre new material properties.

For example, physicists think that quantum phase transitions play a role in certain materials, known as topological insulators, that conduct electricity only on their surface and not in their interior. Researchers have also observed quantum phase transitions in high-temperature superconductors, although the significance of the phase transitions is still unclear. Whereas conventional superconductors need to be cooled to near absolute zero to observe such effects, high-temperature superconductors work at the relatively balmy temperature of liquid nitrogen, which boils at about 77 kelvin, dozens of degrees above absolute zero.

Over the past few years, physicists have created the Higgs mode in other superconductors, but they can’t always understand exactly what’s going on. The typical materials used to study the Higgs mode have a complicated crystal structure that increases the difficulty of understanding the physics at work.

So both Keimer’s and Tennant’s groups set out to induce the Higgs mode in simpler systems. Their antiferromagnets were so-called two-dimensional materials: While each crystal exists as a 3-D chunk, those chunks are built out of stacked two-dimensional layers of atoms that act more or less independently. Somewhat paradoxically, it’s a harder experimental challenge to induce the Higgs mode in these two-dimensional materials. Physicists were unsure if it could be done.

Yet the successful experiments showed that it was possible to use existing theoretical tools to explain the evolution of the Higgs mode. Keimer’s group found that the Higgs mode parallels the behavior of the Higgs boson. Inside a particle accelerator like the Large Hadron Collider, a Higgs boson will quickly decay into other particles, such as photons. In Keimer’s antiferromagnet, the Higgs mode morphs into different collective-electron motion that resembles particles called Goldstone bosons. The group experimentally confirmed that the Higgs mode evolves according to their theoretical predictions.

Tennant’s group discovered how to make their material produce a Higgs mode that doesn’t die out. That knowledge could help them determine how to turn on other quantum properties, like superconductivity, in other materials. “What we want to understand is how to keep quantum behavior in systems,” said Tennant.

Both groups hope to go beyond the Higgs mode. Keimer aims to actually observe a quantum phase transition in his antiferromagnet, which may be accompanied by additional weird phenomena. “That happens quite a lot,” he said. “You want to study a particular quantum phase transition, and then something else pops up.”

They also just want to explore. They expect that more weird properties of matter are associated with the Higgs mode—potentially ones not yet envisioned. “Our brains don’t have a natural intuition for quantum systems,” said Tennant. “Exploring nature is full of surprises because it’s full of things we never imagined.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/elusive-higgs-like-state-created-in-exotic-materials/

How Cells Pack Tangled DNA Into Neat Chromosomes

A human cell carries in its nucleus two meters of spiraling DNA, split up among the 46 slender, double-helical molecules that are its chromosomes. Most of the time, that DNA looks like a tangled ball of yarn—diffuse, disordered, chaotic. But that messiness poses a problem during mitosis, when the cell has to make a copy of its genetic material and divide in two. In preparation, it tidies up by packing the DNA into dense, sausagelike rods, the chromosomes’ most familiar form. Scientists have watched that process through a microscope for decades: The DNA condenses and organizes into discrete units that gradually shorten and widen. But how the genome gets folded inside that structure—it’s clear that it doesn’t simply contract—has remained a mystery. “It’s really at the heart of genetics,” said Job Dekker, a biochemist at the University of Massachusetts Medical School, “a fundamental aspect of heredity that’s always been such a great puzzle.”


To solve that puzzle, Dekker teamed up with Leonid Mirny, a biophysicist at the Massachusetts Institute of Technology, and William Earnshaw, a biologist at the University of Edinburgh in Scotland. They and their colleagues used a combination of imaging, modeling and genomic techniques to understand how the condensed chromosome forms during cell division. Their results, published recently in Science and confirmed in part by experimental evidence reported by a European team in this week’s issue of the journal, paint a picture in which two protein complexes sequentially organize the DNA into tight arrays of loops along a helical spine.

The researchers collected minute-by-minute data on chromosomes—using a microscope to see how they changed, as well as a technology called Hi-C, which provides a map of how frequently pairs of sequences in the genome interact with one another. They then generated sophisticated computer simulations to match that data, allowing them to calculate the three-dimensional path the chromosomes traced as they condensed.
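Hi-C output is, in essence, a large square matrix of contact counts between pairs of genomic bins, and a standard first summary is the average contact probability as a function of genomic separation—the quantity in which the unusual long-range signal described below shows up. Here is a minimal sketch of that reduction in Python; the toy matrix and the helper name contact_probability are illustrative assumptions, not the team’s actual analysis code:

```python
import numpy as np

def contact_probability(hic):
    """Average a square Hi-C contact matrix over its diagonals, giving
    the mean contact frequency P(s) at each genomic separation s (in bins)."""
    n = hic.shape[0]
    return np.array([np.diagonal(hic, offset=s).mean() for s in range(1, n)])

# Toy matrix: contact counts that decay with separation, plus a faint
# bump near s = 50 bins, mimicking the long-range signal discussed below.
n = 200
sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
toy = 1.0 / (1.0 + sep) + 0.02 * np.exp(-((sep - 50.0) ** 2) / 50.0)

p_of_s = contact_probability(toy)
print(p_of_s[:3])   # steep decay at short range
print(p_of_s[49])   # elevated again near the bump
```

Collapsing the two-dimensional map into this one-dimensional curve, snapshot after snapshot, is the kind of summary that minute-by-minute Hi-C makes possible.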

Their models determined that in the lead-up to mitosis, a ring-shaped protein molecule called condensin II, composed of two connected motors, lands on the DNA. Its two motors move in opposite directions along the strand while remaining attached to one another, causing a loop to form; as the motors continue to move, that loop gets larger and larger. (Mirny demonstrated the process for me by clasping a piece of his computer’s power cord with both hands, held knuckles to knuckles, through which he then proceeded to push a loop of cord.) As tens of thousands of these protein molecules do their work, a series of loops emerges. The ringlike proteins, positioned at the base of each loop, create a central scaffolding from which the loops emanate, and the entire chromosome becomes shorter and stiffer.
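The two-motor picture lends itself to a very simple simulation. Below is a deliberately minimal one-dimensional sketch in Python (a toy under my own assumptions, not the polymer models the team actually ran): the chromosome is a line of monomers, and each condensin is a pair of linked heads that land together and walk apart until they hit a chain end or another head.

```python
import random

def extrude(n_monomers=1000, n_condensins=20, steps=400, seed=1):
    """Toy 1-D loop extrusion: each condensin is a pair of linked motor
    heads that land at one spot and walk in opposite directions, one
    monomer per step, so the loop between them grows until a head reaches
    a chain end or bumps into another condensin's head."""
    rng = random.Random(seed)
    starts = rng.sample(range(1, n_monomers - 1), n_condensins)
    condensins = [[p, p] for p in sorted(starts)]  # [left head, right head]
    for _ in range(steps):
        # Heads move against an occupancy snapshot taken at the start of
        # each step, a simplification that keeps the toy short.
        occupied = {h for pair in condensins for h in pair}
        for pair in condensins:
            if pair[0] - 1 > 0 and pair[0] - 1 not in occupied:
                pair[0] -= 1   # left head walks left
            if pair[1] + 1 < n_monomers - 1 and pair[1] + 1 not in occupied:
                pair[1] += 1   # right head walks right
    return condensins

loops = extrude()
print("mean loop size:", sum(r - l for l, r in loops) / len(loops))
```

Run long enough, the heads tile the chain into abutting loops whose bases sit side by side: a one-dimensional shadow of the scaffold-plus-loops architecture the models describe.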

Those results lent support to the idea of loop extrusion, a prior proposal about how DNA is packaged. (Loop extrusion is also responsible for preventing duplicated chromosomes from becoming knotted and entangled, according to Mirny. The mechanics of the looped structure cause sister chromatids to repel each other.) But what the scientists observed next came as more of a surprise and allowed them to build further detail into the loop extrusion hypothesis.

After about 10 minutes, the nuclear envelope keeping the chromosomes together broke down, giving a second ring-shaped motor protein, condensin I, access to the DNA. Those molecules performed loop extrusion on the loops that had already formed, splitting each into around five smaller loops on average. Nesting loops in this way enabled the chromosome to become narrower and prevented the initial loops from growing large enough to mix or interact.

According to the researchers’ models, one major aspect of the chromosome’s folding process is the formation of nested loops. First, a ring-shaped motor protein (red) lands on DNA and extrudes a loop. Later, a second protein (blue) extrudes loops on top of that one. When many such molecules across the entire length of the DNA do this, the chromosome compacts.
Dr. Anton Goloborodko

After approximately 15 minutes, as these loops were forming, the Hi-C data showed something that the researchers found even more unexpected. Typically, sequences located close together along the string of DNA were most likely to interact, while those farther apart were less likely to do so. But the team’s measurements showed that “things [then] kind of came back again in a circle,” Mirny said. That is, once the distance between sequences had grown even further, they again had a higher probability of interacting. “It was obvious from the first glance at this data that we’d never seen something like this before,” he said. His model suggested that condensin II molecules assembled into a helical scaffold, as in the famous Leonardo staircase at the Château de Chambord in France. The nested loops of DNA radiated out like steps from that spiraling scaffold, packing snugly into the cylindrical configuration that characterizes the chromosome.

“So this single process immediately solves three problems,” Mirny said. “It creates a scaffold. It linearly orders the chromosome. And it compacts it in such a way that it becomes an elongated object.”

“That was really surprising to us,” Dekker said—not only because they’d never observed the rotation of loops along a helical axis, but because the finding taps into a more fundamental debate. Namely, are chromosomes just a series of loops, or do they spiral? And if they do spiral, is it that the entire chromosome twists into a coil, or that only the internal scaffolding does? (The new study points to the latter; the researchers attribute the former helix-related hypothesis to experimental artifacts, the result of isolating chromosomes in a way that promoted excessive spiraling.) “Our work unifies many, many observations that people have collected over the years,” Dekker said.

“This [analysis] provides a revolutionary degree of clarity,” said Nancy Kleckner, a molecular biologist at Harvard University. “It takes us into another era of understanding how chromosomes are organized at these late stages.”

This series of images illustrates how a compacted chromosome takes shape. Ring-shaped motor proteins (red) form a helical scaffold. Folded loops of DNA emanate from that spiraling axis so that they can be packed tightly into a cylindrical rod.
Dr. Anton Goloborodko

Other experts in the field found those results less surprising, instead deeming the study more noteworthy for the details it provided. Hints of the general chromosomal assembly the researchers described were already “in the air,” according to Julien Mozziconacci, a biophysicist at Sorbonne University in France. The more novel aspects of the work, he said, lay in the researchers’ collection of Hi-C data as a function of time, which allowed them to pinpoint specific constraints, such as the sizes of the loops and helical turns. “I think this is a technical tour de force that allows us to see for the first time what people have been thinking,” he said.

Still, Dekker cautioned that, although it’s been known for some time that condensins are involved in this process—and despite the fact that his group has now identified more specific roles for those “molecular hands that cells use to fold chromosomes”—scientists still don’t understand exactly how they do it.

“If condensin is organizing mitotic chromosomes in this manner, how does it do so?” said Kim Nasmyth, a biochemist at the University of Oxford and a pioneer of the loop extrusion hypothesis. “Until we know the molecular mechanism, we can’t say for sure whether condensin is indeed the one driving all this.”

That’s where Christian Häring, a biochemist at the European Molecular Biology Laboratory in Germany, and Cees Dekker, a biophysicist (unrelated to Job Dekker) at Delft University of Technology in the Netherlands, enter the picture. Last year, they and their colleagues directly demonstrated for the first time that condensin does move along DNA in a test tube—a prerequisite for loop extrusion to be true. And in this week’s issue of Science, they reported witnessing an isolated yeast condensin molecule extruding a loop of DNA in real time. “We finally have visual proof of this happening,” Häring said.

And it happened almost exactly as Mirny and his team predicted it would for the formation of their larger loops—except that in the in vitro experiment, the loops formed asymmetrically: The condensin landed on the DNA and reeled it in from only one side, rather than in both directions as Mirny initially assumed. (Since the experiments involved condensin from yeast, and only examined a single molecule at a time, they could neither confirm nor refute the other aspects of Mirny’s models, namely the nested loops and helical scaffold.)

Once researchers have completely unpacked that biochemistry—and conducted similar studies on how chromosomes unwind themselves—Job Dekker and Mirny think their work can lend itself to a range of practical and theoretical applications. For one, the research could inform potential cancer treatments. Cancer cells divide quickly and frequently, “so anything we know about that process can help specifically target those kinds of cells,” Dekker said.

It could also provide a window into what goes on in the chromosomes of cells that aren’t dividing. “It has wider implications for, I believe, any other thing the cell does with chromosomes,” Job Dekker said. The condensins he and his colleagues are studying have close relatives, called cohesins, that help with organizing the genome and creating loops even when the DNA isn’t getting compacted. That folding process could affect gene expression. Loop extrusion basically brings pairs of loci together, however briefly, at the base of the growing or shrinking loop—something that could very well be happening during gene regulation, when a gene has to be in physical contact with a regulatory element that may be located quite a distance away along the chromosome. “We now have such a powerful system to study this process,” Dekker said.

“I think there’s an incredible amount of synergy between the things we can learn at different parts of the cell cycle,” added Geoff Fudenberg, a postdoctoral researcher at the University of California, San Francisco, who previously worked in Mirny’s lab. Understanding how chromosomes undergo such a “dramatic transition” during mitosis, he said, could also reveal a lot about what they are doing “below the surface” when cells are not dividing and certain activities and behaviors are less clear.

Mirny points out that this type of folding could also provide insights into other processes in cells that involve active changes in shape or structure. Proteins get folded largely by interactions, while motor processes create the cytoskeleton in the cytoplasm. “Now we came to realize that chromosomes may be something in between,” Mirny said. “We need to gain a better understanding of how these types of active systems self-organize to create complex patterns and vital structures.”

Before that’s possible, the researchers will have to confirm and flesh out the solution they’ve proposed to what Job Dekker called a “great puzzle.” Kleckner has high hopes as well. “This work sets the foundation for a whole new way of thinking about what might be going on,” she said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/how-cells-pack-tangled-dna-into-neat-chromosomes/