New Brain Maps With Unmatched Detail May Change Neuroscience

Sitting at the desk in his lower-campus office at Cold Spring Harbor Laboratory, the neuroscientist Tony Zador turned his computer monitor toward me to show off a complicated matrix-style graph. Imagine something that looks like a spreadsheet but instead of numbers it’s filled with colors of varying hues and gradations. Casually, he said: “When I tell people I figured out the connectivity of tens of thousands of neurons and show them this, they just go ‘huh?’ But when I show this to people …” He clicked a button onscreen and a transparent 3-D model of the brain popped up, spinning on its axis, filled with nodes and lines too numerous to count. “They go ‘What the _____!’”

What Zador showed me was a map of 50,000 neurons in the cerebral cortex of a mouse. It indicated where the cell bodies of every neuron sat and where they sent their long axon branches. A neural map of this size and detail has never been made before. Forgoing the traditional method of brain mapping that involves marking neurons with fluorescence, Zador had taken an unusual approach that drew on the long tradition of molecular biology research at Cold Spring Harbor, on Long Island. He used bits of genomic information to tag each individual neuron with a unique RNA sequence, or “bar code.” He then dissected the brain into cubes like a sheet cake and fed the pieces into a DNA sequencer. The result: a 3-D rendering of 50,000 neurons in the mouse cortex (with as many more to be added soon) mapped at single-cell resolution.

This work, Zador’s magnum opus, is still being refined for publication. But in a paper recently published by Nature, he and his colleagues showed that the technique, called MAPseq (Multiplexed Analysis of Projections by Sequencing), can be used to find new cell types and projection patterns never before observed. The paper also demonstrated that the accuracy of this new high-throughput mapping method rivals that of the fluorescent technique, which is the current gold standard but works best with small numbers of neurons.

Tony Zador, a neurophysiologist at Cold Spring Harbor Laboratory, realized that genome sequencing techniques could scale up to tame the astronomical numbers of neurons and interconnections in the brain.

The project was born from Zador’s frustration during his “day job” as a neurophysiologist, as he wryly referred to it. He studies auditory decision-making in rodents: how their brain hears sounds, processes the audio information and determines a behavioral output or action. Electrophysiological recordings and the other traditional tools for addressing such questions left the mathematically inclined scientist unsatisfied. The problem, according to Zador, is that we don’t understand enough about the circuitry of the neurons, which is the reason he pursues his “second job” creating tools for imaging the brain.

The current state of the art for brain mapping is embodied by the Allen Brain Atlas, which was compiled from work in many laboratories over several years at a cost upward of $25 million. The Allen Atlas is what’s known as a bulk connectivity atlas because it traces known subpopulations of neurons and their projections as groups. It has been highly useful for researchers, but it cannot distinguish subtle differences within the groups or neuron subpopulations.

If we ever want to know how a mouse hears a high-pitched trill, figures out that the sound means a refreshing drink reward is available and lays down new memories to recall the treat later, we will need to start with a map, or wiring diagram, of the brain. In Zador’s view, lack of knowledge about that kind of neural circuitry is partly to blame for why more progress has not been made in the treatment of psychiatric disorders, and why artificial intelligence is still not all that intelligent.

Justus Kebschull, a Stanford University neuroscientist, an author of the new Nature paper and a former graduate student in Zador’s lab, remarked that doing neuroscience without knowing about the circuitry is like “trying to understand how a computer works by looking at it from the outside, sticking an electrode in and probing what we can find. … Without ever knowing the hard drive is connected to the processor and the USB port provides input to the whole system, it’s difficult to understand what’s happening.”

Inspiration for MAPseq struck Zador when he learned of another brain mapping technique called Brainbow. Hailing from the lab of Jeff Lichtman at Harvard University, this method was remarkable in that it genetically labeled up to 200 individual neurons simultaneously using different combinations of fluorescent dyes. The result was a tantalizing, multicolored tableau of neon-colored neurons that displayed, in detail, the complex intermingling of axons and neuron cell bodies. The groundbreaking work gave hope that mapping the connectome—the complete plan of neural connections in the brain—was soon to be a reality. Unfortunately, a limitation of the technique in practice is that through a microscope, experimenters could resolve only about five to 10 distinct colors, which was not enough to penetrate the tangle of neurons in the cortex and map many neurons at once.

That’s when the lightbulb went on in Zador’s head. He realized that the challenge of the connectome’s huge complexity might be tamed if researchers could harness the increasing speed and dwindling costs of high-throughput genomic sequencing techniques. “It’s what mathematicians call reducing it to a previously solved problem,” he explained.

In MAPseq, researchers inject an animal with genetically modified viruses that carry a variety of known RNA sequences, or “bar codes.” For a week or more, the viruses multiply inside the animal, filling each neuron with some distinctive combination of those bar codes. When the researchers then cut the brain into sections, the RNA bar codes can help them track individual neurons from slide to slide.
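
The bookkeeping behind this approach can be sketched in a few lines of code. The toy model below is only a conceptual illustration (plain Python, with made-up neuron and region names; it is not the actual MAPseq analysis pipeline, which deals with real sequencing reads and error correction): if a neuron’s unique barcode turns up in a dissected piece of tissue, that neuron must have sent an axon there.

    import random

    random.seed(0)

    BASES = "ACGT"
    NEURONS = [f"neuron_{i}" for i in range(5)]        # hypothetical cell IDs
    REGIONS = ["visual_ctx", "motor_ctx", "thalamus"]  # hypothetical brain pieces

    # 1. Tag each neuron with a unique random RNA barcode (a 30-mer here).
    barcode_of = {n: "".join(random.choice(BASES) for _ in range(30)) for n in NEURONS}

    # 2. Each neuron's axons reach some subset of regions (unknown in a real experiment).
    true_projections = {n: random.sample(REGIONS, random.randint(1, 2)) for n in NEURONS}

    # 3. "Dissect and sequence": every region collects the barcodes of neurons
    #    whose axons terminate inside it.
    reads_per_region = {r: set() for r in REGIONS}
    for neuron, targets in true_projections.items():
        for region in targets:
            reads_per_region[region].add(barcode_of[neuron])

    # 4. Recover the projection map by matching barcodes back to their source neurons.
    recovered = {n: sorted(r for r in REGIONS if barcode_of[n] in reads_per_region[r])
                 for n in NEURONS}
    print(recovered)  # agrees with true_projections, up to ordering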

Zador’s insight led to the new Nature paper, in which his lab and a team at University College London led by the neuroscientist Thomas Mrsic-Flogel used MAPseq to trace the projections of almost 600 neurons in the mouse visual system. (Editor’s note: Zador and Mrsic-Flogel both receive funding from the Simons Foundation, which publishes Quanta.)

Six hundred neurons is a modest start compared with the tens of millions in the brain of a mouse. But it was ample for the specific purpose the researchers had in mind: They were looking to discern whether there is a structure to the brain’s wiring pattern that might be informative about its function. A currently popular theory is that in the visual cortex, an individual neuron gathers a specific bit of information from the eye—about the edge of an object in the field of view, or a type of movement or spatial orientation, for example. The neuron then sends a signal to a single corresponding area in the brain that specializes in processing that type of information.

To test this theory, the team first mapped a handful of neurons in mice in the traditional way by inserting a genetically encoded fluorescent dye into the individual cells. Then, with a microscope, they traced how the cells stretched from the primary visual cortex (the brain area that receives input from the eyes) to their endpoints elsewhere in the brain. They found that the neurons’ axons branched out and sent information to many areas simultaneously, overturning the one-to-one mapping theory.

Next, they asked if there were any patterns to these projections. They used MAPseq to trace the projections of 591 neurons as they branched out and innervated multiple targets. What the team observed was that the distribution of axons was structured: Some neurons always sent axons to areas A, B and C but never to D and E, for example.

These results suggest the visual system contains a dizzying level of cross-connectivity and that the pattern of those connections is more complicated than a one-to-one mapping. “Higher visual areas don’t just get information that is specifically tailored to them,” Kebschull said. Instead, they share many of the same inputs, “so their computations might be tied to each other.”

Nevertheless, the fact that certain cells do project to specific areas also means that within the visual cortex there are specialized cells that have not yet been identified. Kebschull said this map is like a blueprint that will enable later researchers to understand what these cells are doing. “MAPseq allows you to map out the hardware. … Once we know the hardware we can start to look at the software, or how the computations happen,” he said.

MAPseq’s competitive edge in speed and cost for such investigations is considerable: According to Zador, the technique should be able to scale up to handle 100,000 neurons within a week or two for only $10,000 — far faster than traditional mapping would be, at a fraction of the cost.

Such advantages will make it more feasible to map and compare the neural pathways of large numbers of brains. Studies of conditions such as schizophrenia and autism that are thought to arise from differences in brain wiring have often frustrated researchers because the available tools don’t capture enough details of the neural interconnections. It’s conceivable that researchers will be able to map mouse models of these conditions and compare them with more typical brains, sparking new rounds of research. “A lot of psychiatric disorders are caused by problems at the circuit level,” said Hongkui Zeng, executive director of the structured science division at the Allen Institute for Brain Science. “Connectivity information will tell you where to look.”

High-throughput mapping also allows scientists to gather lots of neurological data and look for patterns that reflect general principles of how the brain works. “What Tony is doing is looking at the brain in an unbiased way,” said Sreekanth Chalasani, a molecular neurobiologist at the Salk Institute. “Just as the human genome map has provided a scaffolding to test hypotheses and look for patterns in [gene] sequence and function, Tony’s method could do the same” for brain architecture.

The detailed map of the human genome didn’t immediately explain all the mysteries of how biology works, but it did provide a biomolecular parts list and open the way for a flood of transformative research. Similarly, in its present state of development, MAPseq cannot provide any information about the function or location of the cells it is tagging or show which cells are talking to one another. Yet Zador plans to add this functionality soon. He is also collaborating with scientists studying various parts of the brain, such as the neural circuits that underlie fear conditioning.

“I think there are insights to be derived from connectivity. But just like genomes themselves aren’t interesting, it’s what they enable that is transformative. And that’s why I’m excited,” Zador said. “I’m hopeful it’s going to provide the scaffolding for the next generation of work in the field.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/new-brain-maps-with-unmatched-detail-may-change-neuroscience/

Why Winning in Rock-Paper-Scissors Isn’t Everything

Rock-Paper-Scissors works great for deciding who has to take out the garbage. But have you ever noticed what happens when, instead of playing best of three, you just let the game continue round after round? At first, you play a pattern that gives you the upper hand, but then your opponent quickly catches on and turns things in her favor. As strategies evolve, a point is reached where neither side seems to be able to improve any further. Why does that happen?

In 1950, the mathematician John Nash proved that in any kind of game with a finite number of players and a finite number of options—like Rock-Paper-Scissors—a mix of strategies always exists where no single player can do any better by changing their own strategy alone. The theory behind such stable strategy profiles, which came to be known as “Nash equilibria,” revolutionized the field of game theory, altering the course of economics and changing the way everything from political treaties to network traffic is studied and analyzed. And it earned Nash the Nobel Prize in 1994.

So, what does a Nash equilibrium look like in Rock-Paper-Scissors? Let’s model the situation with you (Player A) and your opponent (Player B) playing the game over and over. Each round, the winner earns a point, the loser loses a point, and ties count as zero.

Now, suppose Player B adopts the (silly) strategy of choosing Paper every turn. After a few rounds of winning, losing, and tying, you are likely to notice the pattern and adopt a winning counterstrategy by choosing Scissors every turn. Let’s call this strategy profile (Scissors, Paper). If every round unfolds as Scissors vs. Paper, you’ll slice your way to a perfect record.

But Player B soon sees the folly in this strategy profile. Observing your reliance on Scissors, she switches to the strategy of always choosing Rock. This strategy profile (Scissors, Rock) starts winning for Player B. But of course, you now switch to Paper. During these stretches of the game, Players A and B are employing what are known as “pure” strategies—a single strategy that is chosen and repeatedly executed.

Clearly, no equilibrium will be achieved here: For any pure strategy, like “always choose Rock,” a counterstrategy can be adopted, like “always choose Paper,” which will force another change in strategy. You and your opponent will forever be chasing each other around the circle of strategies.

But you can also try a “mixed” strategy. Let’s assume that, instead of choosing only one strategy to play, you can randomly choose one of the pure strategies each round. Instead of “always play Rock,” a mixed strategy could be to “play Rock half the time and Scissors the other half.” Nash proved that, when such mixed strategies are allowed, every game like this must have at least one equilibrium point. Let’s find it.

So, what’s a sensible mixed strategy for Rock-Paper-Scissors? A reasonable intuition would be “choose Rock, Paper or Scissors with equal probability,” denoted as (1/3,1/3,1/3). This means Rock, Paper and Scissors are each chosen with probability 1/3. Is this a good strategy?

Well, suppose your opponent’s strategy is “always choose Rock,” a pure strategy that can be represented as (1,0,0). How will the game play out under the strategy profile (1/3,1/3,1/3) for A and (1,0,0) for B?

In order to get a better picture of our game, we’ll construct a table that shows the probability of each of the nine possible outcomes every round: Rock for A, Rock for B; Rock for A, Paper for B; and so on. In the chart below, the top row indicates Player B’s choice, and the leftmost column indicates Player A’s choice.

Each entry in the table shows the probability that the given pair of choices is made in any given round. This is simply the product of the probabilities that each player makes their respective choice. For example, the probability of Player A choosing Paper is 1/3, and the probability of player B choosing Rock is 1, so the probability of (Paper for A, Rock for B) is 1/3×1=1/3. But the probability of (Paper for A, Scissors for B) is 1/3×0=0, since there is a zero probability of Player B choosing Scissors.
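
A minimal sketch of how such a chart is computed, in plain Python (the variable names are mine): the nine outcome probabilities are simply the products of the corresponding entries in the two players’ strategy vectors.

    choices = ["Rock", "Paper", "Scissors"]
    p_a = [1/3, 1/3, 1/3]  # Player A: equal mix
    p_b = [1, 0, 0]        # Player B: always Rock

    # Probability of each (A choice, B choice) pair = P(A picks row) * P(B picks column).
    table = [[pa * pb for pb in p_b] for pa in p_a]

    for choice, row in zip(choices, table):
        print(choice, [round(p, 3) for p in row])
    # Rock [0.333, 0.0, 0.0]
    # Paper [0.333, 0.0, 0.0]
    # Scissors [0.333, 0.0, 0.0]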

So how does Player A fare in this strategy profile? Player A will win one-third of the time (Paper, Rock), lose one-third of the time (Scissors, Rock) and tie one-third of the time (Rock, Rock). We can compute the number of points that Player A will earn, on average, each round by computing the sum of the product of each outcome with its respective probability:
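
Written out, with a win, a loss and a tie each occurring with probability 1/3, that sum is:

1/3(1) + 1/3(−1) + 1/3(0) = 0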

This says that, on average, Player A will earn 0 points per round. You will win, lose and tie with equal likelihood. On average, the number of wins and losses will even out, and both players are essentially headed for a draw.

But as we’ve already discussed, you can do better by changing your strategy, assuming your opponent doesn’t change theirs. If you switch to the strategy (0,1,0) (“choose Paper every time”), the probability chart will look like this

Each time you play, your Paper will wrap your opponent’s Rock, and you’ll earn one point every round.

So, this pair of strategies—(1/3,1/3,1/3) for A and (1,0,0) for B—is not a Nash equilibrium: You, as Player A, can improve your results by changing your strategy.

As we’ve seen, pure strategies don’t seem to lead to equilibrium. But what if your opponent tries a mixed strategy, like (1/2,1/4,1/4)? This is the strategy “Rock half the time; Paper and Scissors each one quarter of the time.” Here’s the associated probability chart:

Now, here’s the “payoff” chart, from Player A’s perspective; this is the number of points Player A receives for each outcome.
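
(Rows give Player A’s choice, columns give Player B’s; each entry is the number of points Player A earns.)

                Rock    Paper   Scissors
    Rock          0      −1       +1
    Paper        +1       0       −1
    Scissors     −1      +1        0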

We put the two charts together, using multiplication, to compute how many points, on average, Player A will earn each round.

1/6(0) + 1/12(−1) + 1/12(1) + 1/6(1) + 1/12(0) + 1/12(−1) + 1/6(−1) + 1/12(1) + 1/12(0) = 0

On average, Player A is again earning 0 points per round. Like before, this strategy profile, (1/3,1/3,1/3) for A and (1/2,1/4,1/4) for B, ends up in a draw.

But also like before, you as Player A can improve your results by switching strategies: Against Player B’s (1/2,1/4,1/4), Player A should play (1/4,1/2,1/4). This has a probability chart of

and this net result for A:

1/8(0) + 1/16(−1) + 1/16(1) + 1/4(1) + 1/8(0) + 1/8(−1) + 1/8(−1) + 1/16(1) + 1/16(0) = 1/16

That is, under this strategy profile—(1/4,1/2,1/4) for A and (1/2,1/4,1/4) for B—Player A nets 1/16 of a point per round on average. After 100 games, Player A will be up 6.25 points. There’s a big incentive for Player A to switch strategies. So, the strategy profile of (1/3,1/3,1/3) for A and (1/2,1/4,1/4) for B is not a Nash equilibrium, either.
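
Rather than grinding through the charts by hand, these averages can be checked with a few lines of code. Here is a minimal sketch in plain Python (the payoff table and the function name expected_payoff are my own labels, not anything from the article): multiply each outcome’s probability by Player A’s payoff and add everything up.

    # Points for Player A for each (A's choice, B's choice); order: Rock, Paper, Scissors.
    PAYOFF_A = [
        [ 0, -1,  1],   # A plays Rock
        [ 1,  0, -1],   # A plays Paper
        [-1,  1,  0],   # A plays Scissors
    ]

    def expected_payoff(p_a, p_b):
        """Average points per round for Player A under mixed strategies p_a and p_b."""
        return sum(p_a[i] * p_b[j] * PAYOFF_A[i][j]
                   for i in range(3) for j in range(3))

    print(expected_payoff((1/3, 1/3, 1/3), (1/2, 1/4, 1/4)))  # 0.0
    print(expected_payoff((1/4, 1/2, 1/4), (1/2, 1/4, 1/4)))  # 0.0625, i.e., 1/16 of a point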

But now let’s consider the pair of strategies (1/3,1/3,1/3) for A and (1/3,1/3,1/3) for B. Here’s the corresponding probability chart:

Symmetry makes quick work of the net result calculation:
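
With each of the nine outcomes occurring with probability 1/9, and wins, losses and ties each accounting for three of them:

1/9(0) + 1/9(−1) + 1/9(1) + 1/9(1) + 1/9(0) + 1/9(−1) + 1/9(−1) + 1/9(1) + 1/9(0) = 0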

Again, you and your opponent are playing to draw. But the difference here is that no player has an incentive to change strategies! If Player B were to switch to any imbalanced strategy where one option—say, Rock—were played more than the others, Player A would simply alter their strategy to play Paper more frequently. This would ultimately yield a positive net result for Player A each round. This is precisely what happened when Player A adopted the strategy (1/4,1/2,1/4) against Player B’s (1/2,1/4,1/4) strategy above.

Of course, if Player A switched from (1/3,1/3,1/3) to an imbalanced strategy, Player B could take advantage in a similar manner. So, neither player can improve their results solely by changing their own individual strategy. The game has reached a Nash equilibrium.
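
One quick way to convince yourself of this is to check every pure strategy against the uniform mix; since any mixed deviation is just a weighted average of the pure ones, no deviation can do better either. A minimal sketch in plain Python (using the same home-made payoff table as in the earlier snippet):

    PAYOFF_A = [
        [ 0, -1,  1],   # A plays Rock
        [ 1,  0, -1],   # A plays Paper
        [-1,  1,  0],   # A plays Scissors
    ]
    uniform = (1/3, 1/3, 1/3)

    for name, pure in [("Rock", (1, 0, 0)), ("Paper", (0, 1, 0)), ("Scissors", (0, 0, 1))]:
        avg = sum(pure[i] * uniform[j] * PAYOFF_A[i][j]
                  for i in range(3) for j in range(3))
        print(name, avg)  # every deviation averages 0.0 points: no improvement is possible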

The fact that all such games have such equilibria, as Nash proved, is important for several reasons. One of those reasons is that many real-life situations can be modeled as games. Whenever a group of individuals is caught in the tension between personal gain and collective satisfaction—like in a negotiation, or a competition for shared resources—you’ll find strategies being employed and payoffs being evaluated. The ubiquitous nature of this mathematical model is part of the reason Nash’s work has been so impactful.

Another reason is that a Nash equilibrium is, in some sense, a positive outcome for all players. When reached, no individual can do better by changing their own strategy. There might exist better collective outcomes that could be reached if all players acted in perfect cooperation, but if all you can control is yourself, ending up at a Nash equilibrium is the best you can reasonably hope to do.

And so, we might hope that “games” like economic incentive packages, tax codes, treaty parameters and network designs will end in Nash equilibria, where individuals, acting in their own interest, all end up with something to be happy about, and systems are stable. But when playing these games, is it reasonable to assume that players will naturally arrive at a Nash equilibrium?

It’s tempting to think so. In our Rock-Paper-Scissors game, we might have guessed right away that neither player could do better than playing completely randomly. But that’s in part because all player preferences are known to all other players: Everyone knows how much everyone else wins and loses for each outcome. But what if preferences were secret and more complex?

Imagine a new game in which Player B scores three points when she defeats Scissors, and one point for any other victory. This would alter the mixed strategy: Player B would play Rock more often, hoping for the triple payoff when Player A chooses Scissors. And while the difference in points wouldn’t directly affect Player A’s payoffs, the resulting change in Player B’s strategy would trigger a new counterstrategy for A.

And if every one of Player B’s payoffs was different, and secret, it would take some time for Player A to figure out what Player B’s strategy was. Many rounds would pass before Player A could get a sense of, say, how often Player B was choosing Rock, in order to figure out how often to choose Paper.

Now imagine there are 100 people playing Rock-Paper-Scissors, each with a different set of secret payoffs, each depending on how many of their 99 opponents they defeat using Rock, Paper or Scissors. How long would it take to calculate just the right frequency of Rock, Paper or Scissors you should play in order to reach an equilibrium point? Probably a long time. Maybe longer than the game will go on. Maybe even longer than the lifetime of the universe!

At the very least, it’s not obvious that even perfectly rational and reflective players, playing good strategies and acting in their own best interests, will end up at equilibrium in this game. This idea lies at the heart of a paper posted online in 2016 that proves there is no uniform approach that, in all games, would lead players to even an approximate Nash equilibrium. This is not to say that perfect players never tend toward equilibrium in games—they often do. It just means that there’s no reason to believe that just because a game is being played by perfect players, equilibrium will be achieved.

When we design a transportation network, we might hope that the players in the game, travelers each seeking the fastest way home, will collectively achieve an equilibrium where nothing is gained by taking a different route. We might hope that the invisible hand of John Nash will guide them so that their competing and cooperating interests—to take the shortest possible route yet avoid creating traffic jams—produce an equilibrium.

But our increasingly complex game of Rock-Paper-Scissors shows why such hopes may be misplaced. The invisible hand may guide some games, but others may resist its hold, trapping players in a never-ending competition for gains forever just out of reach.

Exercises

  1. Suppose Player B plays the mixed strategy (1/2,1/2,0). What mixed strategy should A play to maximize wins in the long run?
  2. Suppose Player B plays the mixed strategy (1/6,2/6,3/6). What mixed strategy should A play to maximize wins in the long run?
  3. How might the dynamics of the game change if both players are awarded a point for a tie?

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/why-winning-in-rock-paper-scissors-isnt-everything/

Whisper From the First Stars Sets Off Loud Dark Matter Debate

The news about the first stars in the universe always seemed a little off. Last July, Rennan Barkana, a cosmologist at Tel Aviv University, received an email from one of his longtime collaborators, Judd Bowman. Bowman leads a small group of five astronomers who built and deployed a radio telescope in remote western Australia. Its goal: to find the whisper of the first stars. Bowman and his team had picked up a signal that didn’t quite make sense. He asked Barkana to help him think through what could possibly be going on.

For years, as radio telescopes scanned the sky, astronomers have hoped to glimpse signs of the first stars in the universe. Those objects are too faint and, at over 13 billion light-years away, too distant to be picked up by ordinary telescopes. Instead, astronomers search for the stars’ effects on the surrounding gas. Bowman’s instrument, like the others involved in the search, attempts to pick out a particular dip in radio waves coming from the distant universe.

The measurement is exceedingly difficult to make, since the potential signal can get swamped not only by the myriad radio sources of modern society—one reason the experiment is deep in the Australian outback—but by nearby cosmic sources such as our own Milky Way galaxy. Still, after years of methodical work, Bowman and his colleagues with the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) concluded not only that they had found the first stars, but that they had found evidence that the young cosmos was significantly colder than anyone had thought.

Barkana was skeptical, however. “On the one hand, it looks like a very solid measurement,” he said. “On the other hand, it is something very surprising.”

What could make the early universe appear cold? Barkana thought through the possibilities and realized that it could be a consequence of the presence of dark matter—the mysterious substance that pervades the universe yet escapes every attempt to understand what it is or how it works. He found that the EDGES result could be interpreted as a completely new way that ordinary material might be interacting with dark matter.

The EDGES group announced the details of this signal and the detection of the first stars in the March 1 issue of Nature. Accompanying their article was Barkana’s paper describing his novel dark matter idea. News outlets worldwide carried news of the discovery. “Astronomers Glimpse Cosmic Dawn, When the Stars Switched On,” the Associated Press reported, adding that “they may have detected mysterious dark matter at work, too.”

Yet in the weeks since the announcement, cosmologists around the world have expressed a mix of excitement and skepticism. Researchers who saw the EDGES result for the first time when it appeared in Nature have done their own analysis, showing that even if some kind of dark matter is responsible, as Barkana suggested, no more than a small fraction of it could be involved in producing the effect. (Barkana himself has been involved in some of these studies.) And experimental astronomers have said that while they respect the EDGES team and the careful work that they’ve done, such a measurement is too difficult to trust entirely. “If this weren’t a groundbreaking discovery, it would be a lot easier for people to just believe the results,” said Daniel Price, an astronomer at Swinburne University of Technology in Australia who works on similar experiments. “Great claims require great evidence.”

This message has echoed through the cosmology community since those Nature papers appeared.

The Source of a Whisper

The day after Bowman contacted Barkana to tell him about the surprising EDGES signal, Barkana drove with his family to his in-laws’ house. During the drive, he said, he contemplated this signal, telling his wife about the interesting puzzle Bowman had handed him.

Bowman and the EDGES team had been probing the neutral hydrogen gas that filled the universe during the first few hundred million years after the Big Bang. This gas tended to absorb ambient light, leading to what cosmologists poetically call the universe’s “dark ages.” Although the cosmos was filled with a diffuse ambient light from the cosmic microwave background (CMB)—the so-called afterglow of the Big Bang—this neutral gas absorbed it at specific wavelengths. EDGES searched for this absorption pattern.

As stars began to turn on in the universe, their energy would have heated the gas. Eventually the gas reached a high enough temperature that it no longer absorbed CMB radiation. The absorption signal disappeared, and the dark ages ended.

The absorption signal as measured by EDGES contains an immense amount of information. As the absorption pattern traveled across the expanding universe, the signal stretched. Astronomers can use that stretch to infer how long the signal has been traveling, and thus, when the first stars flicked on. In addition, the width of the detected signal corresponds to the amount of time that the gas was absorbing the CMB light. And the intensity of the signal—how much light was absorbed—relates to the temperature of the gas and the amount of light that was floating around at the time.
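
As a rough illustration of how the stretch encodes timing (the specific numbers here come from the published EDGES report rather than from this article): neutral hydrogen absorbs at a rest frequency of about 1,420 megahertz, the famous 21-centimeter line, and EDGES saw the dip centered near 78 megahertz. Cosmic expansion lowers frequency by a factor of 1 + z, so 1 + z ≈ 1420/78 ≈ 18, a redshift of about 17, corresponding to roughly 180 million years after the Big Bang.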

Many researchers find this final characteristic the most intriguing. “It’s a much stronger absorption than we had thought possible,” said Steven Furlanetto, a cosmologist at the University of California, Los Angeles, who has examined what the EDGES data would mean for the formation of the earliest galaxies.


The most obvious explanation for such a strong signal is that the neutral gas was colder than predicted, which would have allowed it to absorb even more background radiation. But how could the universe have unexpectedly cooled? “We’re talking about a period of time when stars are beginning to form,” Barkana said—the darkness before the dawn. “So everything is as cold as it can be. The question is: What could be even colder?”

As he parked at his in-laws’ house that July day, an idea came to him: Could it be dark matter? After all, dark matter doesn’t seem to interact with normal matter via the electromagnetic force — it doesn’t emit or absorb heat. So dark matter could have started out colder or been cooling much longer than normal matter at the beginning of the universe, and then continued to cool.

Over the next week, he worked on a theory of how a hypothetical form of dark matter called “millicharged” dark matter could have been responsible. Millicharged dark matter could interact with ordinary matter, but only very weakly. Intergalactic gas might then have cooled by “basically dumping heat into the dark matter sector where you can’t see it anymore,” Furlanetto explained. Barkana wrote the idea up and sent it off to Nature.

Then he began to work through the idea in more detail with several colleagues. Others did as well. As soon as the Nature papers appeared, several groups of theoretical cosmologists started to compare the behavior of this unexpected type of dark matter to what we know about the universe—the decades’ worth of CMB observations, data from supernova explosions, the results of collisions at particle accelerators like the Large Hadron Collider, and astronomers’ understanding of how the Big Bang produced hydrogen, helium and lithium during the universe’s first few minutes. If millicharged dark matter was out there, did all these other observations make sense?

Rennan Barkana, a cosmologist at Tel Aviv University, contributed the idea that a form of dark matter might explain why the early universe looked so cold in the EDGES observations. But he has also stayed skeptical about the findings.

They did not. More precisely, these researchers found that millicharged dark matter can only make up a small fraction of the total dark matter in the universe—too small a fraction to create the observed dip in the EDGES data. “You cannot have 100 percent of dark matter interacting,” said Anastasia Fialkov, an astrophysicist at Harvard University and the first author of a paper submitted to Physical Review Letters. Another paper that Barkana and colleagues posted on the preprint site arxiv.org concludes that this dark matter has an even smaller presence: Millicharged particles couldn’t account for more than 1 to 2 percent of the total dark matter content. Independent groups have reached similar conclusions.

If it’s not millicharged dark matter, then what might explain EDGES’ stronger-than-expected absorption signal? Another possibility is that extra background light existed during the cosmic dawn. If there were more radio waves than expected in the early universe, then “the absorption would appear stronger even though the gas itself is unchanged,” Furlanetto said. Perhaps the CMB wasn’t the only ambient light during the toddler years of our universe.

This idea doesn’t come entirely out of left field. In 2011, a balloon-lofted experiment called ARCADE 2 reported a background radio signal that was stronger than would have been expected from the CMB alone. Scientists haven’t yet been able to explain this result.

After the EDGES detection, a few groups of astronomers revisited these data. One group looked at black holes as a possible explanation, since black holes are the brightest extragalactic radio sources in the sky. Yet black holes also produce other forms of radiation, like X-rays, that haven’t been seen in the early universe. Because of this, astronomers remain skeptical that black holes are the answer.

Is It Real?

Perhaps the simplest explanation is that the data are just wrong. The measurement is incredibly difficult, after all. Yet by all accounts the EDGES team took exceptional care to cross-check all their data—Price called the experiment “exquisite”—which means that if there is a flaw in the data, it will be exceptionally hard to find.

This antenna for EDGES was deployed in 2015 at a remote location in western Australia where it would experience little radio interference.

The EDGES team deployed their radio antenna in September 2015. By December, they were seeing a signal, said Raul Monsalve, an experimental cosmologist at the University of Colorado, Boulder, and a member of the EDGES team. “We became suspicious immediately, because it was stronger than expected.”

And so they began what became a marathon of due diligence. They built a similar antenna and installed it about 150 meters away from the first one. They rotated the antennas to rule out environmental and instrumental effects. They used separate calibration and analysis techniques. “We made many, many kinds of cuts and comparisons and cross-checks to try to rule out the signal as coming from the environment or from some other source,” Monsalve said. “We didn’t believe ourselves at the beginning. We thought it was very suspicious for the signal to be this strong, and that’s why we took so long to publish.” They are convinced that they’re seeing a signal, and that the signal is unexpectedly strong.

“I do believe the result,” Price said, but he emphasized that testing for systematic errors in the data is still needed. He mentioned one area where the experiment could have overlooked a potential error: Any antenna’s sensitivity varies depending on the frequency it’s observing and the direction from which a signal is coming. Astronomers can account for these imperfections by either measuring them or modeling them. Bowman and colleagues chose to model them. Price suggests that the EDGES team members instead find a way to measure them and then reanalyze their signal with that measured effect taken into account.

The next step is for a second radio detector to see this signal, which would imply it’s from the sky and not from the EDGES antenna or model. Scientists with the Large-Aperture Experiment to Detect the Dark Ages (LEDA) project, located in California’s Owens Valley, are currently analyzing that instrument’s data. Then researchers will need to confirm that the signal is actually cosmological and not produced by our own Milky Way. This is not a simple problem. Our galaxy’s radio emission can be thousands of times stronger than cosmological signals.

On the whole, researchers regard both the EDGES measurement itself and its interpretation with a healthy skepticism, as Barkana and many others have put it. Scientists should be skeptical of a first-of-its-kind measurement—that’s how they ensure that the observation is sound, the analysis was completed accurately, and the experiment wasn’t in error. This is, ultimately, how science is supposed to work. “We ask the questions, we investigate, we exclude every wrong possibility,” said Tomer Volansky, a particle physicist at Tel Aviv University who collaborated with Barkana on one of his follow-up analyses. “We’re after the truth. If the truth is that it’s not dark matter, then it’s not dark matter.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/whisper-from-the-first-stars-sets-off-loud-dark-matter-debate/

In Search of God’s Mathematical Perfect Proofs

Paul Erdős, the famously eccentric, peripatetic and prolific 20th-century mathematician, was fond of the idea that God has a celestial volume containing the perfect proof of every mathematical theorem. “This one is from The Book,” he would declare when he wanted to bestow his highest praise on a beautiful proof.

Never mind that Erdős doubted God’s very existence. “You don’t have to believe in God, but you should believe in The Book,” Erdős explained to other mathematicians.

In 1994, during conversations with Erdős at the Oberwolfach Research Institute for Mathematics in Germany, the mathematician Martin Aigner came up with an idea: Why not actually try to make God’s Book—or at least an earthly shadow of it? Aigner enlisted fellow mathematician Günter Ziegler, and the two started collecting examples of exceptionally beautiful proofs, with enthusiastic contributions from Erdős himself. The resulting volume, Proofs From THE BOOK, was published in 1998, sadly too late for Erdős to see it—he had died about two years after the project commenced, at age 83.

“Many of the proofs trace directly back to him, or were initiated by his supreme insight in asking the right question or in making the right conjecture,” Aigner and Ziegler, who are now both professors at the Free University of Berlin, write in the preface.

Whether the proof is understandable and beautiful depends not only on the proof but also on the reader.

The book, which has been called “a glimpse of mathematical heaven,” presents proofs of dozens of theorems from number theory, geometry, analysis, combinatorics and graph theory. Over the two decades since it first appeared, it has gone through five editions, each with new proofs added, and has been translated into 13 languages.

In January, Ziegler traveled to San Diego for the Joint Mathematics Meetings, where he received (on his and Aigner’s behalf) the 2018 Steele Prize for Mathematical Exposition. “The density of elegant ideas per page [in the book] is extraordinarily high,” the prize citation reads.

Quanta Magazine sat down with Ziegler at the meeting to discuss beautiful (and ugly) mathematics. The interview has been edited and condensed for clarity.

You’ve said that you and Martin Aigner have a similar sense of which proofs are worthy of inclusion in THE BOOK. What goes into your aesthetic?


We’ve always shied away from trying to define what is a perfect proof. And I think that’s not only shyness, but actually, there is no definition and no uniform criterion. Of course, there are all these components of a beautiful proof. It can’t be too long; it has to be clear; there has to be a special idea; it might connect things that usually one wouldn’t think of as having any connection.

For some theorems, there are different perfect proofs for different types of readers. I mean, what is a proof? A proof, in the end, is something that convinces the reader of things being true. And whether the proof is understandable and beautiful depends not only on the proof but also on the reader: What do you know? What do you like? What do you find obvious?

You noted in the fifth edition that mathematicians have come up with at least 196 different proofs of the “quadratic reciprocity” theorem (concerning which numbers in “clock” arithmetics are perfect squares) and nearly 100 proofs of the fundamental theorem of algebra (concerning solutions to polynomial equations). Why do you think mathematicians keep devising new proofs for certain theorems, when they already know the theorems are true?

These are things that are central in mathematics, so it’s important to understand them from many different angles. There are theorems that have several genuinely different proofs, and each proof tells you something different about the theorem and the structures. So, it’s really valuable to explore these proofs to understand how you can go beyond the original statement of the theorem.

An example comes to mind—which is not in our book but is very fundamental—Steinitz’s theorem for polyhedra. This says that if you have a planar graph (a network of vertices and edges in the plane) that stays connected if you remove one or two vertices, then there is a convex polyhedron that has exactly the same connectivity pattern. This is a theorem that has three entirely different types of proof—the “Steinitz-type” proof, the “rubber band” proof and the “circle packing” proof. And each of these three has variations.

Any of the Steinitz-type proofs will tell you not only that there is a polyhedron but also that there’s a polyhedron with integers for the coordinates of the vertices. And the circle packing proof tells you that there’s a polyhedron that has all its edges tangent to a sphere. You don’t get that from the Steinitz-type proof, or the other way around—the circle packing proof will not prove that you can do it with integer coordinates. So, having several proofs leads you to several ways to understand the situation beyond the original basic theorem.

You’ve mentioned the element of surprise as one feature you look for in a BOOK proof. And some great proofs do leave one wondering, “How did anyone ever come up with this?” But there are other proofs that have a feeling of inevitability. I think it always depends on what you know and where you come from.

An example is László Lovász’s proof for the Kneser conjecture, which I think we put in the fourth edition. The Kneser conjecture was about a certain type of graph you can construct from the k-element subsets of an n-element set—you construct this graph where the k-element subsets are the vertices, and two k-element sets are connected by an edge if they don’t have any elements in common. And Kneser had asked, in 1955 or ’56, how many colors are required to color all the vertices if vertices that are connected must be different colors.
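
The construction is concrete enough to write down directly. A minimal sketch in plain Python (the function name kneser_graph is mine; computing the chromatic number is the hard part and is not attempted here):

    from itertools import combinations

    def kneser_graph(n, k):
        """Vertices: k-element subsets of {1..n}; edges join subsets with no common element."""
        vertices = [frozenset(c) for c in combinations(range(1, n + 1), k)]
        edges = [(a, b) for a, b in combinations(vertices, 2) if not a & b]
        return vertices, edges

    # With n = 5 and k = 2 this is the Petersen graph.
    verts, edges = kneser_graph(5, 2)
    print(len(verts), len(edges))  # 10 vertices, 15 edges
    # Lovasz's result: this graph needs n - 2k + 2 = 3 colors, and no fewer.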

A proof that eats more than 10 pages cannot be a proof for our book. God—if he exists—has more patience.

It’s rather easy to show that you can color this graph with n − 2k + 2 colors, but the problem was to show that fewer colors won’t do it. And so, it’s a graph coloring problem, but Lovász, in 1978, gave a proof that was a technical tour de force, that used a topological theorem, the Borsuk-Ulam theorem. And it was an amazing surprise—why should this topological tool prove a graph theoretic thing?

This turned into a whole industry of using topological tools to prove discrete mathematics theorems. And now it seems inevitable that you use these, and very natural and straightforward. It’s become routine, in a certain sense. But then, I think, it’s still valuable not to forget the original surprise.

Brevity is one of your other criteria for a BOOK proof. Could there be a hundred-page proof in God’s Book?

I think there could be, but no human will ever find it.

We have these results from logic that say that there are theorems that are true and that have a proof, but they don’t have a short proof. It’s a logic statement. And so, why shouldn’t there be a proof in God’s Book that goes over a hundred pages, and on each of these hundred pages, makes a brilliant new observation—and so, in that sense, it’s really a proof from The Book?

On the other hand, we are always happy if we manage to prove something with one surprising idea, and proofs with two surprising ideas are even more magical but still harder to find. So a proof that is a hundred pages long and has a hundred surprising ideas—how should a human ever find it?

But I don’t know how the experts judge Andrew Wiles’ proof of Fermat’s Last Theorem. This is a hundred pages, or many hundred pages, depending on how much number theory you assume when you start. And my understanding is that there are lots of beautiful observations and ideas in there. Perhaps Wiles’ proof, with a few simplifications, is God’s proof for Fermat’s Last Theorem.

But it’s not a proof for the readers of our book, because it’s just beyond the scope, both in technical difficulty and layers of theory. By definition, a proof that eats more than 10 pages cannot be a proof for our book. God—if he exists—has more patience.


Paul Erdős has been called a “priest of mathematics.” He traveled across the globe—often with no settled address—to spread the gospel of mathematics, so to speak. And he used these religious metaphors to talk about mathematical beauty.

Paul Erdős referred to his own lectures as “preaching.” But he was an atheist. He called God the “Supreme Fascist.” I think it was more important to him to be funny and to tell stories—he didn’t preach anything religious. So, this story of God and his book was part of his storytelling routine.

When you experience a beautiful proof, does it feel somehow spiritual?

The ugly proofs have their role.

It’s a powerful feeling. I remember these moments of beauty and excitement. And there’s a very powerful type of happiness that comes from it.

If I were a religious person, I would thank God for all this inspiration that I’m blessed to experience. As I’m not religious, for me, this God’s Book thing is a powerful story.

There’s a famous quote from the mathematician G. H. Hardy that says, “There is no permanent place in the world for ugly mathematics.” But ugly mathematics still has a role, right?

You know, the first step is to establish the theorem, so that you can say, “I worked hard. I got the proof. It’s 20 pages. It’s ugly. It’s lots of calculations, but it’s correct and it’s complete and I’m proud of it.”

If the result is interesting, then come the people who simplify it and put in extra ideas and make it more and more elegant and beautiful. And in the end you have, in some sense, the Book proof.

If you look at Lovász’s proof for the Kneser conjecture, people don’t read his paper anymore. It’s rather ugly, because Lovász didn’t know the topological tools at the time, so he had to reinvent a lot of things and put them together. And immediately after that, Imre Bárány had a second proof, which also used the Borsuk-Ulam theorem, and that was, I think, more elegant and more straightforward.

To do these short and surprising proofs, you need a lot of confidence. And one way to get the confidence is if you know the thing is true. If you know that something is true because so-and-so proved it, then you might also dare to say, “What would be the really nice and short and elegant way to establish this?” So, I think, in that sense, the ugly proofs have their role.


You’re currently preparing a sixth edition of Proofs From THE BOOK. Will there be more after that?

The third edition was perhaps the first time that we claimed that that’s it, that’s the final one. And, of course, we also claimed this in the preface of the fifth edition, but we’re currently working hard to finish the sixth edition.

When Martin Aigner talked to me about this plan to do the book, the idea was that this might be a nice project, and we’d get done with it, and that’s it. And with, I don’t know how you translate it into English, jugendlicher Leichtsinn—that’s sort of the foolery of someone being young—you think you can just do this book and then it’s done.

But it’s kept us busy from 1994 until now, with new editions and translations. Now Martin has retired, and I’ve just applied to be university president, and I think there will not be time and energy and opportunity to do more. The sixth edition will be the final one.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/in-search-of-gods-mathematical-perfect-proofs/

Brainless Embryos Suggest Bioelectricity Guides Growth

The tiny tadpole embryo looked like a bean. One day old, it didn’t even have a heart yet. The researcher in a white coat and gloves who hovered over it made a precise surgical incision where its head would form. Moments later, the brain was gone, but the embryo was still alive.

The brief procedure took Celia Herrera-Rincon, a neuroscience postdoc at the Allen Discovery Center at Tufts University, back to the country house in Spain where she had grown up, in the mountains near Madrid. When she was 11 years old, while walking her dogs in the woods, she found a snake, Vipera latastei. It was beautiful but dead. “I realized I wanted to see what was inside the head,” she recalled. She performed her first “lab test” using kitchen knives and tweezers, and she has been fascinated by the many shapes and evolutionary morphologies of the brain ever since. Her collection now holds about 1,000 brains from all kinds of creatures.

This time, however, she was not interested in the brain itself, but in how an African clawed frog would develop without one. She and her supervisor, Michael Levin, a software engineer turned developmental biologist, are investigating whether the brain and nervous system play a crucial role in laying out the patterns that dictate the shapes and identities of emerging organs, limbs and other structures.

For the past 65 years, the focus of developmental biology has been on DNA as the carrier of biological information. Researchers have typically assumed that genetic expression patterns alone are enough to determine embryonic development.

To Levin, however, that explanation is unsatisfying. “Where does shape come from? What makes an elephant different from a snake?” he asked. DNA can make proteins inside cells, he said, but “there is nothing in the genome that directly specifies anatomy.” To develop properly, he maintains, tissues need spatial cues that must come from other sources in the embryo. At least some of that guidance, he and his team believe, is electrical.

In recent years, by working on tadpoles and other simple creatures, Levin’s laboratory has amassed evidence that the embryo is molded by bioelectrical signals, particularly ones that emanate from the young brain long before it is even a functional organ. Those results, if replicated in other organisms, may change our understanding of the roles of electrical phenomena and the nervous system in development, and perhaps more widely in biology.

“Levin’s findings will shake some rigid orthodoxy in the field,” said Sui Huang, a molecular biologist at the Institute for Systems Biology. If Levin’s work holds up, Huang continued, “I think many developmental biologists will be stunned to see that the construction of the body plan is not due to local regulation of cells … but is centrally orchestrated by the brain.”

Bioelectrical Influences in Development

The Spanish neuroscientist and Nobel laureate Santiago Ramón y Cajal once called the brain and neurons, the electrically active cells that process and transmit nerve signals, the “butterflies of the soul.” The brain is a center for information processing, memory, decision making and behavior, and electricity figures into its performance of all of those activities.

But it’s not just the brain that uses bioelectric signaling—the whole body does. All cell membranes have embedded ion channels, protein pores that act as pathways for charged molecules, or ions. Differences between the number of ions inside and outside a cell result in an electric gradient—the cell’s resting potential. Vary this potential by opening or blocking the ion channels, and you change the signals transmitted to, from and among the cells all around. Neurons do this as well, but even faster: To communicate among themselves, they use molecules called neurotransmitters that are released at synapses in response to voltage spikes, and they send ultra-rapid electrical pulses over long distances along their axons, encoding information in the pulses’ pattern, to control muscle activity.
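
For a sense of the voltages involved, the contribution of a single ion species to a cell’s resting potential can be estimated with the standard Nernst equation. The sketch below uses textbook values for potassium in a typical mammalian cell, not figures from this article:

    import math

    R = 8.314    # gas constant, J/(mol*K)
    T = 310.0    # body temperature, K
    F = 96485.0  # Faraday constant, C/mol
    z = 1        # charge of the ion (K+)

    # Typical mammalian potassium concentrations, in millimolar.
    k_inside, k_outside = 140.0, 5.0

    # Nernst equation: the membrane voltage at which this ion is in equilibrium.
    E_K = (R * T) / (z * F) * math.log(k_outside / k_inside)
    print(f"{E_K * 1000:.0f} mV")  # about -89 mV, close to a typical resting potential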

Levin has thought about hacking networks of neurons since the mid-1980s, when he was a high school student in the suburbs near Boston, writing software for pocket money. One day, while browsing a small bookstore in Vancouver at Expo 86 with his father, he spotted a volume called The Body Electric, by Robert O. Becker and Gary Selden. He learned that scientists had been investigating bioelectricity for centuries, ever since Luigi Galvani discovered in the 1780s that nerves are animated by what he called “animal electricity.”

However, as Levin continued to read up on the subject, he realized that, even though the brain uses electricity for information processing, no one seemed to be seriously investigating the role of bioelectricity in carrying information about a body’s development. Wouldn’t it be cool, he thought, if we could comprehend “how the tissues process information and what tissues were ‘thinking about’ before they evolved nervous systems and brains?”

He started digging deeper and ended up getting a biology doctorate at Harvard University in morphogenesis—the study of the development of shapes in living things. He worked in the tradition of scientists like Emil du Bois-Reymond, a 19th-century German physician who discovered the action potential of nerves. In the 1930s and ’40s, the American biologists Harold Burr and Elmer Lund measured electric properties of various organisms during their embryonic development and studied connections between bioelectricity and the shapes animals take. They were not able to prove a link, but they were moving in the right direction, Levin said.

Before Genes Reigned Supreme

The work of Burr and Lund occurred during a time of widespread interest in embryology. Even the English mathematician Alan Turing, famed for cracking the Enigma code, was fascinated by embryology. In 1952 he published a paper suggesting that body patterns like pigmented spots and zebra stripes arise from the chemical reactions of diffusing substances, which he called morphogens.

"This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration."

Masayuki Yamashita

But organic explanations like morphogens and bioelectricity didn’t stay in the limelight for long. In 1953, James Watson and Francis Crick published the double helical structure of DNA, and in the decades since “the focus of developmental biology has been on DNA as the carrier of biological information, with cells thought to follow their own internal genetic programs, prompted by cues from their local environment and neighboring cells,” Huang said.

The rationale, according to Richard Nuccitelli, chief science officer at Pulse Biosciences and a former professor of molecular biology at the University of California, Davis, was that “since DNA is what is inherited, information stored in the genes must specify all that is needed to develop.” Tissues are told how to develop at the local level by neighboring tissues, it was thought, and each region patterns itself from information in the genomes of its cells.

The extreme form of this view is “to explain everything by saying ‘it is in the genes,’ or DNA, and this trend has been reinforced by the increasingly powerful and affordable DNA sequencing technologies,” Huang said. “But we need to zoom out: Before molecular biology imposed our myopic tunnel vision, biologists were much more open to organism-level principles.”

The tide now seems to be turning, according to Herrera-Rincon and others. “It’s too simplistic to consider the genome as the only source of biological information,” she said. Researchers continue to study morphogens as a source of developmental information in the nervous system, for example. Last November, Levin and Chris Fields, an independent scientist who works in the area where biology, physics and computing overlap, published a paper arguing that cells’ cytoplasm, cytoskeleton and both internal and external membranes also encode important patterning data—and serve as systems of inheritance alongside DNA.

And, crucially, bioelectricity has made a comeback as well. In the 1980s and ’90s, Nuccitelli, along with the late Lionel Jaffe at the Marine Biological Laboratory, Colin McCaig at the University of Aberdeen, and others, used applied electric fields to show that many cells are sensitive to bioelectric signals and that electricity can induce limb regeneration in nonregenerative species.

According to Masayuki Yamashita of the International University of Health and Welfare in Japan, many researchers forget that every living cell, not just neurons, generates electric potentials across the cell membrane. “This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration,” he said.

However, no one was really sure why or how this bioelectric signaling worked, said Levin, and most still believe that the flow of information is very local. “Applied electricity in earlier experiments directly interacts with something in cells, triggering their responses,” he said. But what it was interacting with and how the responses were triggered were mysteries.

That’s what led Levin and his colleagues to start tinkering with the resting potential of cells. By changing the voltage of cells in flatworms, over the last few years they produced worms with two heads, or with tails in unexpected places. In tadpoles, they reprogrammed the identity of large groups of cells at the level of entire organs, making frogs with extra legs and changing gut tissue into eyes—simply by hacking the local bioelectric activity that provides patterning information.

And because the brain and nervous system are so conspicuously active electrically, the researchers also began to probe their involvement in long-distance patterns of bioelectric information affecting development. In 2015, Levin, his postdoc Vaibhav Pai, and other collaborators showed experimentally that bioelectric signals from the body shape the development and patterning of the brain in its earliest stages. By changing the resting potential in the cells of tadpoles as far from the head as the gut, they appeared to disrupt the body’s “blueprint” for brain development. The resulting tadpoles’ brains were smaller or even nonexistent, and brain tissue grew where it shouldn’t.

Unlike previous experiments with applied electricity that simply provided directional cues to cells, “in our work, we know what we have modified—resting potential—and we know how it triggers responses: by changing how small signaling molecules enter and leave cells,” Levin said. The right electrical potential lets neurotransmitters go in and out of voltage-powered gates (transporters) in the membrane. Once in, they can trigger specific receptors and initiate further cellular activity, allowing researchers to reprogram identity at the level of entire organs.

This work also showed that bioelectricity works over long distances, mediated by the neurotransmitter serotonin, Levin said. (Later experiments implicated the neurotransmitter butyrate as well.) The researchers started by altering the voltage of cells near the brain, but then they went farther and farther out, “because our data from the prior papers showed that tumors could be controlled by electric properties of cells very far away,” he said. “We showed that cells at a distance mattered for brain development too.”

Then Levin and his colleagues decided to flip the experiment. Might the brain hold, if not an entire blueprint, then at least some patterning information for the rest of the body, Levin asked—and if so, might the nervous system disseminate this information bioelectrically during the earliest stages of a body’s development? He invited Herrera-Rincon to get her scalpel ready.

Making Up for a Missing Brain

Herrera-Rincon’s brainless Xenopus laevis tadpoles grew, but within just a few days they all developed highly characteristic defects—and not just near the brain, but as far away as the very end of their tails. Their muscle fibers were also shorter and their nervous systems, especially the peripheral nerves, were growing chaotically. It’s not surprising that nervous system abnormalities that impair movement can affect a developing body. But according to Levin, the changes seen in their experiment showed that the brain helps to shape the body’s development well before the nervous system is even fully developed, and long before any movement starts.

The body of a tadpole normally develops with a predictable structure (A). Removing a tadpole’s brain early in development, however, leads to abnormalities in tissues far from the head (B).

That such defects could be seen so early in the development of the tadpoles was intriguing, said Gil Carvalho, a neuroscientist at the University of Southern California. “An intense dialogue between the nervous system and the body is something we see very prominently post-development, of course,” he said. Yet the new data “show that this cross-talk starts from the very beginning. It’s a window into the inception of the brain-body dialogue, which is so central to most vertebrate life as we know it, and it’s quite beautiful.” The results also raise the possibility that these neurotransmitters may be acting at a distance, he added—by diffusing through the extracellular space, or going from cell to cell in relay fashion, after they have been triggered by a cell’s voltage changes.

Herrera-Rincon and the rest of the team didn’t stop there. They wanted to see whether they could “rescue” the developing body from these defects by using bioelectricity to mimic the effect of a brain. They decided to express a specific ion channel called HCN2, which acts differently in various cells but is sensitive to their resting potential. Levin likens the ion channel’s effect to a sharpening filter in photo-editing software, in that “it can strengthen voltage differences between adjacent tissues that help you maintain correct boundaries. It really strengthens the abilities of the embryos to set up the correct boundaries for where tissues are supposed to go.”

To make embryos express it, the researchers injected messenger RNA for HCN2 into some frog egg cells just a couple of hours after they were fertilized. A day later they removed the embryos’ brains, and over the next few days, the cells of the embryo acquired novel electrical activity from the HCN2 in their membranes.

The scientists found that this procedure rescued the brainless tadpoles from most of the usual defects. Because of the HCN2 it was as if the brain was still present, telling the body how to develop normally. It was amazing, Levin said, “to see how much rescue you can get just from very simple expression of this channel.” It was also, he added, the first clear evidence that the brain controls the development of the embryo via bioelectric cues.

As with Levin’s previous experiments with bioelectricity and regeneration, many biologists and neuroscientists hailed the findings, calling them “refreshing” and “novel.” “One cannot say that this is really a step forward because this work veers off the common path,” Huang said. But a single experiment with tadpoles’ brains is not enough, he added — it’s crucial to repeat the experiment in other organisms, including mammals, for the findings “to be considered an advance in a field and establish generality.” Still, the results open “an entire new domain of investigation and new way of thinking,” he said.

Experiments on tadpoles reveal the influence of the immature brain on other developing tissues, which appears to be electrical, according to Levin and his colleagues. Photo A shows the appearance of normal muscle in young tadpoles. In tadpoles that lack brains, the muscles fail to develop the correct form (B). But if the cells of brainless tadpoles are made to express ion channels that can restore the right voltage to the cells, the muscles develop more normally (C).
Celia Herrera-Rincon and Michael Levin

Levin’s research demonstrates that the nervous system plays a much more important role in how organisms build themselves than previously thought, said Min Zhao, a biologist at the University of California, Davis, and an expert on the biomedical application and molecular biophysics of electric-field effects in living tissues. Despite earlier experimental and clinical evidence, “this paper is the first one to demonstrate convincingly that this also happens in [the] developing embryo.”

“The results of Mike’s lab abolish the frontier, by demonstrating that electrical signaling from the central nervous system shapes early development,” said Olivier Soriani of the Institut de Biologie de Valrose CNRS. “The bioelectrical activity can now be considered as a new type of input encoding organ patterning, allowing large range control from the central nervous system.”

Carvalho observed that the work has obvious implications for the treatment and prevention of developmental malformations and birth defects—especially since the findings suggest that interfering with the function of a single neurotransmitter may sometimes be enough to prevent developmental issues. “This indicates that a therapeutic approach to these defects may be, at least in some cases, simpler than anticipated,” he said.

Levin speculates that in the future, we may not need to micromanage multitudes of cell-signaling events; instead, we may be able to manipulate how cells communicate with each other electrically and let them fix various problems.

Another recent experiment hinted at just how significant the developing brain’s bioelectric signal might be. Herrera-Rincon soaked frog embryos in common drugs that are normally harmless and then removed their brains. The drugged, brainless embryos developed severe birth defects, such as crooked tails and spinal cords. According to Levin, these results show that the brain protects the developing body against drugs that otherwise might be dangerous teratogens (compounds that cause birth defects). “The paradigm of thinking about teratogens was that each chemical is either a teratogen or is not,” Levin said. “Now we know that this depends on how the brain is working.”

These findings are impressive, but many questions remain, said Adam Cohen, a biophysicist at Harvard who studies bioelectrical signaling in bacteria. “It is still unclear precisely how the brain is affecting developmental patterning under normal conditions, meaning when the brain is intact.” To get those answers, researchers need to design more targeted experiments; for instance, they could silence specific neurons in the brain or block the release of specific neurotransmitters during development.

Although Levin’s work is gaining recognition, the emphasis he puts on electricity in development is far from universally accepted. Epigenetics and bioelectricity are important, but so are other layers of biology, Zhao said. “They work together to produce the biology we see.” More evidence is needed to shift the paradigm, he added. “We saw some amazing and mind-blowing results in this bioelectricity field, but the fundamental mechanisms are yet to be fully understood. I do not think we are there yet.”

But Nuccitelli says that for many biologists, Levin is on to something. For example, he said, Levin’s success in inducing the growth of misplaced eyes in tadpoles simply by altering the ion flux through the local tissues “is an amazing demonstration of the power of biophysics to control pattern formation.” The fact that Levin’s more than 300 papers have been cited more than 10,000 times in almost 8,000 articles is also “a great indicator that his work is making a difference.”

The passage of time and the efforts of others carrying on Levin’s work will help his cause, suggested David Stocum, a developmental biologist and dean emeritus at Indiana University-Purdue University Indianapolis. “In my view, his ideas will eventually be shown to be correct and generally accepted as an important part of the framework of developmental biology.”

“We have demonstrated a proof of principle,” Herrera-Rincon said as she finished preparing another petri dish full of beanlike embryos. “Now we are working on understanding the underlying mechanisms, especially the meaning: What is the information content of the brain-specific information, and how much morphogenetic guidance does it provide?” She washed off the scalpel and took off her gloves and lab coat. “I have a million experiments in my mind.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/brainless-embryos-suggest-bioelectricity-guides-growth/

Elusive Higgs-Like State Created in Exotic Materials

If you want to understand the personality of a material, study its electrons. Table salt forms cubic crystals because its atoms share electrons in that configuration; silver shines because its electrons absorb visible light and reradiate it back. Electron behavior causes nearly all material properties: hardness, conductivity, melting temperature.

Of late, physicists are intrigued by the way huge numbers of electrons can display collective quantum-mechanical behavior. In some materials, a trillion trillion electrons within a crystal can act as a unit, like fire ants clumping into a single mass to survive a flood. Physicists want to understand this collective behavior because of the potential link to exotic properties such as superconductivity, in which electricity can flow without any resistance.

Last year, two independent research groups designed crystals, known as two-dimensional antiferromagnets, whose electrons can collectively imitate the Higgs boson. By precisely studying this behavior, the researchers think they can better understand the physical laws that govern materials—and potentially discover new states of matter. It was the first time that researchers have been able to induce such “Higgs modes” in these materials. “You’re creating a little mini universe,” said David Alan Tennant, a physicist at Oak Ridge National Laboratory who led one of the groups along with Tao Hong, his colleague there.

Both groups induced electrons into Higgs-like activity by pelting their material with neutrons. During these tiny collisions, the electrons’ magnetic fields begin to fluctuate in a patterned way that mathematically resembles the Higgs boson.

The Higgs mode is not simply a mathematical curiosity. When a crystal’s structure permits its electrons to behave this way, the material most likely has other interesting properties, said Bernhard Keimer, a physicist at the Max Planck Institute for Solid State Research who coleads the other group.

That’s because when you get the Higgs mode to appear, the material should be on the brink of a so-called quantum phase transition. Its properties are about to change drastically, like a snowball on a sunny spring day. The Higgs can help you understand the character of the quantum phase transition, says Subir Sachdev, a physicist at Harvard University. These quantum effects often portend bizarre new material properties.

For example, physicists think that quantum phase transitions play a role in certain materials, known as topological insulators, that conduct electricity only on their surface and not in their interior. Researchers have also observed quantum phase transitions in high-temperature superconductors, although the significance of the phase transitions is still unclear. Whereas conventional superconductors need to be cooled to near absolute zero to observe such effects, high-temperature superconductors work at the relatively balmy conditions of liquid nitrogen, which is dozens of degrees higher.

Over the past few years, physicists have created the Higgs mode in other superconductors, but they can’t always understand exactly what’s going on. The typical materials used to study the Higgs mode have a complicated crystal structure that increases the difficulty of understanding the physics at work.

So both Keimer’s and Tennant’s groups set out to induce the Higgs mode in simpler systems. Their antiferromagnets were so-called two-dimensional materials: While each crystal exists as a 3-D chunk, those chunks are built out of stacked two-dimensional layers of atoms that act more or less independently. Somewhat paradoxically, it’s a harder experimental challenge to induce the Higgs mode in these two-dimensional materials. Physicists were unsure if it could be done.

Yet the successful experiments showed that it was possible to use existing theoretical tools to explain the evolution of the Higgs mode. Keimer’s group found that the Higgs mode parallels the behavior of the Higgs boson. Inside a particle accelerator like the Large Hadron Collider, a Higgs boson will quickly decay into other particles, such as photons. In Keimer’s antiferromagnet, the Higgs mode morphs into different collective-electron motion that resembles particles called Goldstone bosons. The group experimentally confirmed that the Higgs mode evolves according to their theoretical predictions.

Tennant’s group discovered how to make their material produce a Higgs mode that doesn’t die out. That knowledge could help them determine how to turn on other quantum properties, like superconductivity, in other materials. “What we want to understand is how to keep quantum behavior in systems,” said Tennant.

Both groups hope to go beyond the Higgs mode. Keimer aims to actually observe a quantum phase transition in his antiferromagnet, which may be accompanied by additional weird phenomena. “That happens quite a lot,” he said. “You want to study a particular quantum phase transition, and then something else pops up.”

They also just want to explore. They expect that more weird properties of matter are associated with the Higgs mode—potentially ones not yet envisioned. “Our brains don’t have a natural intuition for quantum systems,” said Tennant. “Exploring nature is full of surprises because it’s full of things we never imagined.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/elusive-higgs-like-state-created-in-exotic-materials/

How Cells Pack Tangled DNA Into Neat Chromosomes

A human cell carries in its nucleus two meters of spiraling DNA, split up among the 46 slender, double-helical molecules that are its chromosomes. Most of the time, that DNA looks like a tangled ball of yarn—diffuse, disordered, chaotic. But that messiness poses a problem during mitosis, when the cell has to make a copy of its genetic material and divide in two. In preparation, it tidies up by packing the DNA into dense, sausagelike rods, the chromosomes’ most familiar form. Scientists have watched that process through a microscope for decades: The DNA condenses and organizes into discrete units that gradually shorten and widen. But how the genome gets folded inside that structure—it’s clear that it doesn’t simply contract—has remained a mystery. “It’s really at the heart of genetics,” said Job Dekker, a biochemist at the University of Massachusetts Medical School, “a fundamental aspect of heredity that’s always been such a great puzzle.”

To solve that puzzle, Dekker teamed up with Leonid Mirny, a biophysicist at the Massachusetts Institute of Technology, and William Earnshaw, a biologist at the University of Edinburgh in Scotland. They and their colleagues used a combination of imaging, modeling and genomic techniques to understand how the condensed chromosome forms during cell division. Their results, published recently in Science and confirmed in part by experimental evidence reported by a European team in this week's issue of the journal, paint a picture in which two protein complexes sequentially organize the DNA into tight arrays of loops along a helical spine.

The researchers collected minute-by-minute data on chromosomes—using a microscope to see how they changed, as well as a technology called Hi-C, which provides a map of how frequently pairs of sequences in the genome interact with one another. They then generated sophisticated computer simulations to match that data, allowing them to calculate the three-dimensional path the chromosomes traced as they condensed.

Their models determined that in the lead-up to mitosis, a ring-shaped protein molecule called condensin II, composed of two connected motors, lands on the DNA. Its two motors move in opposite directions along the strand while remaining attached to one another, causing a loop to form; as the motors continue to move, that loop gets larger and larger. (Mirny demonstrated the process for me by clasping a piece of his computer’s power cord with both hands, held knuckles to knuckles, through which he then proceeded to push a loop of cord.) As tens of thousands of these protein molecules do their work, a series of loops emerges. The ringlike proteins, positioned at the base of each loop, create a central scaffolding from which the loops emanate, and the entire chromosome becomes shorter and stiffer.
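
To make the two-motor picture concrete, here is a minimal toy model of loop extrusion along a one-dimensional “DNA” lattice. It is an illustration of the idea only, not the researchers’ simulation code; the lattice size, number of extruders and step count are arbitrary assumptions.

```python
# Toy sketch of two-sided loop extrusion (illustrative assumptions throughout).
# Each extruder is a pair [left_leg, right_leg]; the loop is the DNA between the legs.
import random

DNA_LENGTH = 10_000      # 1-D lattice of sites standing in for stretches of DNA
N_CONDENSINS = 20        # ring-shaped extruders loaded at random positions
STEPS = 500              # extrusion steps to simulate

extruders = [[p, p] for p in random.sample(range(DNA_LENGTH), N_CONDENSINS)]

for _ in range(STEPS):
    for ext in extruders:
        if ext[0] > 0:                # left motor steps one way along the strand
            ext[0] -= 1
        if ext[1] < DNA_LENGTH - 1:   # right motor steps the other way
            ext[1] += 1
        # A fuller model would also halt a leg when it collides with a
        # neighboring extruder; that collision rule is what packs the loops
        # into a dense array anchored on a central scaffold.

loop_sizes = [right - left for left, right in extruders]
print("mean loop size after", STEPS, "steps:", sum(loop_sizes) / len(loop_sizes))
```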

Those results lent support to the idea of loop extrusion, a prior proposal about how DNA is packaged. (Loop extrusion is also responsible for preventing duplicated chromosomes from becoming knotted and entangled, according to Mirny. The mechanics of the looped structure cause sister chromatids to repel each other.) But what the scientists observed next came as more of a surprise and allowed them to build further detail into the loop extrusion hypothesis.

After about 10 minutes, the nuclear envelope keeping the chromosomes together broke down, giving a second ring-shaped motor protein, condensin I, access to the DNA. Those molecules performed loop extrusion on the loops that had already formed, splitting each into around five smaller loops on average. Nesting loops in this way enabled the chromosome to become narrower and prevented the initial loops from growing large enough to mix or interact.

According to the researchers’ models, one major aspect of the chromosome’s folding process is the formation of nested loops. First, a ring-shaped motor protein (red) lands on DNA and extrudes a loop. Later, a second protein (blue) extrudes loops on top of that one. When many such molecules across the entire length of the DNA do this, the chromosome compacts.
Dr. Anton Goloborodko

After approximately 15 minutes, as these loops were forming, the Hi-C data showed something that the researchers found even more unexpected. Typically, sequences located close together along the string of DNA were most likely to interact, while those farther apart were less likely to do so. But the team’s measurements showed that “things [then] kind of came back again in a circle,” Mirny said. That is, once the distance between sequences had grown even further, they again had a higher probability of interacting. “It was obvious from the first glance at this data that we’d never seen something like this before,” he said. His model suggested that condensin II molecules assembled into a helical scaffold, as in the famous Leonardo staircase found in the Chambord Castle in France. The nested loops of DNA radiated out like steps from that spiraling scaffold, packing snugly into the cylindrical configuration that characterizes the chromosome.

“So this single process immediately solves three problems,” Mirny said. “It creates a scaffold. It linearly orders the chromosome. And it compacts it in such a way that it becomes an elongated object.”

“That was really surprising to us,” Dekker said—not only because they’d never observed the rotation of loops along a helical axis, but because the finding taps into a more fundamental debate. Namely, are chromosomes just a series of loops, or do they spiral? And if they do spiral, is it that the entire chromosome twists into a coil, or that only the internal scaffolding does? (The new study points to the latter; the researchers attribute the former helix-related hypothesis to experimental artifacts, the result of isolating chromosomes in a way that promoted excessive spiraling.) “Our work unifies many, many observations that people have collected over the years,” Dekker said.

“This [analysis] provides a revolutionary degree of clarity,” said Nancy Kleckner, a molecular biologist at Harvard University. “It takes us into another era of understanding how chromosomes are organized at these late stages.”

This series of images illustrates how a compacted chromosome takes shape. Ring-shaped motor proteins (red) form a helical scaffold. Folded loops of DNA emanate from that spiraling axis so that they can be packed tightly into a cylindrical rod.
Dr. Anton Goloborodko

Other experts in the field found those results less surprising, instead deeming the study more noteworthy for the details it provided. Hints of the general chromosomal assembly the researchers described were already “in the air,” according to Julien Mozziconacci, a biophysicist at Sorbonne University in France. The more novel aspects of the work, he said, lay in the researchers’ collection of Hi-C data as a function of time, which allowed them to pinpoint specific constraints, such as the sizes of the loops and helical turns. “I think this is a technical tour de force that allows us to see for the first time what people have been thinking,” he said.

Still, Dekker cautioned that, although it’s been known for some time that condensins are involved in this process—and despite the fact that his group has now identified more specific roles for those “molecular hands that cells use to fold chromosomes”—scientists still don’t understand exactly how they do it.

“If condensin is organizing mitotic chromosomes in this manner, how does it do so?” said Kim Nasmyth, a biochemist at the University of Oxford and a pioneer of the loop extrusion hypothesis. “Until we know the molecular mechanism, we can’t say for sure whether condensin is indeed the one driving all this.”

That’s where Christian Häring, a biochemist at the European Molecular Biology Laboratory in Germany, and Cees Dekker, a biophysicist (unrelated to Job Dekker) at Delft University of Technology in the Netherlands, enter the picture. Last year, they and their colleagues directly demonstrated for the first time that condensin does move along DNA in a test tube—a prerequisite for loop extrusion to be true. And in this week's issue of Science, they reported witnessing an isolated yeast condensin molecule extruding a loop of DNA in real time. “We finally have visual proof of this happening,” Häring said.

And it happened almost exactly as Mirny and his team predicted it would for the formation of their larger loops—except that in the in vitro experiment, the loops formed asymmetrically: The condensin landed on the DNA and reeled it in from only one side, rather than in both directions as Mirny initially assumed. (Since the experiments involved condensin from yeast, and only examined a single molecule at a time, they could neither confirm nor refute the other aspects of Mirny’s models, namely the nested loops and helical scaffold.)

Once researchers have completely unpacked that biochemistry—and conducted similar studies on how chromosomes unwind themselves—Job Dekker and Mirny think their work can lend itself to a range of practical and theoretical applications. For one, the research could inform potential cancer treatments. Cancer cells divide quickly and frequently, “so anything we know about that process can help specifically target those kinds of cells,” Dekker said.

It could also provide a window into what goes on in the chromosomes of cells that aren’t dividing. “It has wider implications for, I believe, any other thing the cell does with chromosomes,” Job Dekker said. The condensins he and his colleagues are studying have close relatives, called cohesins, that help with organizing the genome and creating loops even when the DNA isn’t getting compacted. That folding process could affect gene expression. Loop extrusion basically brings pairs of loci together, however briefly, at the base of the growing or shrinking loop—something that could very well be happening during gene regulation, when a gene has to be in physical contact with a regulatory element that may be located quite a distance away along the chromosome. “We now have such a powerful system to study this process,” Dekker said.

“I think there’s an incredible amount of synergy between the things we can learn at different parts of the cell cycle,” added Geoff Fudenberg, a postdoctoral researcher at the University of California, San Francisco, who previously worked in Mirny’s lab. Understanding how chromosomes undergo such a “dramatic transition” during mitosis, he said, could also reveal a lot about what they are doing “below the surface” when cells are not dividing and certain activities and behaviors are less clear.

Mirny points out that this type of folding could also provide insights into other processes in cells that involve active changes in shape or structure. Proteins fold largely through passive interactions among their parts, while the cytoskeleton in the cytoplasm is built by active motor processes. “Now we came to realize that chromosomes may be something in between,” Mirny said. “We need to gain a better understanding of how these types of active systems self-organize to create complex patterns and vital structures.”

Before that’s possible, the researchers will have to confirm and flesh out the solution they’ve proposed to what Job Dekker called a “great puzzle.” Kleckner has high hopes as well. “This work sets the foundation for a whole new way of thinking about what might be going on,” she said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/how-cells-pack-tangled-dna-into-neat-chromosomes/

The Ongoing Battle Between Quantum and Classical Computers

A popular misconception is that the potential—and the limits—of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy”—a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought—a broad Rubicon to be crossed—but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

Paradoxically, reports of powerful quantum computations are motivating improvements to classical ones, making it harder for quantum machines to gain an advantage. “Most of the time when people talk about quantum computing, classical computing is dismissed, like something that is past its prime,” said Cristian Calude, a mathematician and computer scientist at the University of Auckland in New Zealand. “But that is not the case. This is an ongoing competition.”

And the goalposts are shifting. “When it comes to saying where the supremacy threshold is, it depends on how good the best classical algorithms are,” said John Preskill, a theoretical physicist at the California Institute of Technology. “As they get better, we have to move that boundary.”

‘It Doesn’t Look So Easy’

Before the dream of a quantum computer took shape in the 1980s, most computer scientists took for granted that classical computing was all there was. The field’s pioneers had convincingly argued that classical computers—epitomized by the mathematical abstraction known as a Turing machine—should be able to compute everything that is computable in the physical universe, from basic arithmetic to stock trades to black hole collisions.

Classical machines couldn’t necessarily do all these computations efficiently, though. Let’s say you wanted to understand something like the chemical behavior of a molecule. This behavior depends on the behavior of the electrons in the molecule, which exist in a superposition of many classical states. Making things messier, the quantum state of each electron depends on the states of all the others—due to the quantum-mechanical phenomenon known as entanglement. Classically calculating these entangled states in even very simple molecules can become a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can deal with the intertwined fates of the electrons under study by superposing and entangling its own quantum bits. This enables the computer to process extraordinary amounts of information. Each single qubit you add doubles the states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits—1.125 quadrillion to be exact—to encode.
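
The arithmetic behind that figure is just repeated doubling; written out, the number of classical values needed to describe a general 50-qubit state is:

```latex
% Each added qubit doubles the number of amplitudes in the state description:
2^{50} = 1{,}125{,}899{,}906{,}842{,}624 \approx 1.1 \times 10^{15}
% i.e., roughly 1.125 quadrillion classical values for 50 entangled qubits.
```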

A quantum machine could therefore make the classically intractable problem of simulating large quantum-mechanical systems tractable, or so it appeared. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously quipped in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.”

It wasn’t, of course.

Even before anyone began tinkering with quantum hardware, theorists struggled to come up with suitable software. Early on, Feynman and David Deutsch, a physicist at the University of Oxford, learned that they could control quantum information with mathematical operations borrowed from linear algebra, which they called gates. As analogues to classical logic gates, quantum gates manipulate qubits in all sorts of ways—guiding them into a succession of superpositions and entanglements and then measuring their output. By mixing and matching gates to form circuits, the theorists could easily assemble quantum algorithms.

Richard Feynman, the physicist who came up with the idea for a quantum computer in the 1980s, quipped that “by golly, it’s a wonderful problem, because it doesn’t look so easy.”

Cynthia Johnson/Getty Images

Conceiving algorithms that promised clear computational benefits proved more difficult. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm—an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge.

But Jozsa, along with other researchers, would also discover a variety of examples that indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate—or “de-quantize,” as Calude would say—a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm for integers large enough to unlock a standard 128-bit encryption key, for instance, would require thousands of qubits—plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close—that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name—and the optimism—stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field—over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

David Deutsch, a physicist at the University of Oxford, came up with the first problem that could be solved exclusively by a quantum computer.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string—or sample—with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value—heads or tails—it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings—all 2^100 of them—and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
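
For intuition, here is a minimal sketch of that brute-force classical strategy at a toy scale. A random normalized state stands in for the output of “a sequence of gates” (this is an illustration, not code from any of the groups discussed); the point is that the probability table already has 2^n entries, which becomes hopeless long before n = 100.

```python
# Brute-force classical sampling from a toy "quantum circuit" output state.
# Everything here is illustrative; a real quantum device never writes out this table.
import numpy as np

rng = np.random.default_rng(0)
n = 10                               # number of qubits in the toy example
dim = 2 ** n                         # 2**n possible output strings

# Stand-in for the circuit's output state: a random normalized complex vector.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)

# The expensive classical step: probabilities for every possible output string.
probs = np.abs(amps) ** 2            # 2**n numbers; utterly infeasible at n = 100

# Drawing from that distribution reproduces what an ideal device would emit.
samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```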

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons—a plausible experimental target.

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome—in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Centre in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds—or perhaps even dozens—of qubits, sampling would quickly become a classically thorny problem.
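
The commuting property comes from the fact that an IQP circuit’s internal gates are all diagonal in one shared basis, and diagonal matrices can be multiplied in any order. The tiny check below (an illustration, not tied to any particular experiment) verifies this for two random diagonal phase gates acting on three qubits.

```python
# Check that diagonal phase gates commute, the property that defines IQP circuits.
import numpy as np

rng = np.random.default_rng(1)
dim = 2 ** 3                         # three qubits

# Two arbitrary diagonal unitaries (each diagonal entry is a pure phase).
gate_a = np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, dim)))
gate_b = np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, dim)))

# Applying them in either order gives the same overall circuit.
print(np.allclose(gate_a @ gate_b, gate_b @ gate_a))   # True
```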

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction—which would require extra gates and at least a couple hundred extra qubits—in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits—provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits—or a little more depth—and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth—or maybe a slightly larger one of less depth—and claim supremacy. But the circuit is pretty noisy—the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track—that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.

Correction on February 7, 2018: The original version of this article included an example of a classical version of a quantum algorithm developed by Christian Calude. Additional reporting has revealed that there is a strong debate in the quantum computing community as to whether the quasi-quantum algorithm solves the same problem that the original algorithm does. As a consequence, we have removed the mention of the classical algorithm.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/the-ongoing-battle-between-quantum-and-classical-computers/

How Long Can a Neutron Live? Depends on Who You Ask

When physicists strip neutrons from atomic nuclei, put them in a bottle, then count how many remain there after some time, they infer that neutrons radioactively decay in 14 minutes and 39 seconds, on average. But when other physicists generate beams of neutrons and tally the emerging protons—the particles that free neutrons decay into—they peg the average neutron lifetime at around 14 minutes and 48 seconds.

The discrepancy between the “bottle” and “beam” measurements has persisted since both methods of gauging the neutron’s longevity began yielding results in the 1990s. At first, all the measurements were so imprecise that nobody worried. Gradually, though, both methods have improved, and still they disagree. Now, researchers at Los Alamos National Laboratory in New Mexico have made the most precise bottle measurement of the neutron lifetime yet, using a new type of bottle that eliminates possible sources of error in earlier designs. The result, which will soon appear in the journal Science, reinforces the discrepancy with beam experiments and increases the chance that it reflects new physics rather than mere experimental error.

But what new physics? In January, two theoretical physicists put forward a thrilling hypothesis about the cause of the discrepancy. Bartosz Fornal and Benjamin Grinstein of the University of California, San Diego, argued that neutrons might sometimes decay into dark matter—the invisible particles that seem to make up six-sevenths of the matter in the universe based on their gravitational influence, while evading decades of experimental searches. If neutrons sometimes transmogrify into dark matter particles instead of protons, then they would disappear from bottles at a faster rate than protons appear in beams, exactly as observed.
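
A back-of-the-envelope version of that argument, under the assumption that the bottle method counts every decay while the beam method counts only the decays that yield a proton: with a bottle lifetime of about 879 seconds (14 minutes 39 seconds) and a beam lifetime of about 888 seconds (14 minutes 48 seconds), the fraction B of decays producing protons would be

```latex
% Beam experiments would measure the partial lifetime of the proton channel:
\tau_{\mathrm{beam}} = \frac{\tau_{\mathrm{bottle}}}{B}
\quad\Longrightarrow\quad
B \approx \frac{879\ \mathrm{s}}{888\ \mathrm{s}} \approx 0.99
% so roughly 1 percent of neutron decays would have to go to the dark channel.
```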

The UCNtau experiment at Los Alamos National Laboratory, which uses the “bottle method” to measure the neutron lifetime.
UCNtau

Fornal and Grinstein determined that, in the simplest scenario, the hypothetical dark matter particle’s mass must fall between 937.9 and 938.8 mega-electron volts, and that a neutron decaying into such a particle would emit a gamma ray of a specific energy. “This is a very concrete signal that experimentalists can look for,” Fornal said in an interview.

The UCNtau experimental team in Los Alamos—named for ultracold neutrons and tau, the Greek symbol for the neutron lifetime—heard about Fornal and Grinstein’s paper last month, just as they were gearing up for another experimental run. Almost immediately, Zhaowen Tang and Chris Morris, members of the collaboration, realized they could mount a germanium detector onto their bottle apparatus to measure gamma-ray emissions while neutrons decayed inside. “Zhaowen went off and built a stand, and we got together the parts for our detector and put them up next to the tank and started taking data,” Morris said.

Data analysis was similarly quick. On Feb. 7, just one month after Fornal and Grinstein’s hypothesis appeared, the UCNtau team reported the results of their experimental test on the physics preprint site arxiv.org: They claim to have ruled out the presence of the telltale gamma rays with 99 percent certainty. Commenting on the outcome, Fornal noted that the dark matter hypothesis is not entirely excluded: A second scenario exists in which the neutron decays into two dark matter particles, rather than one of them and a gamma ray. Without a clear experimental signature, this scenario will be far harder to test. (Fornal and Grinstein’s paper, and the UCNtau team’s, are now simultaneously under review for publication in Physical Review Letters.)

The proton detector at the National Institute of Standards and Technology used in the “beam method.”
NIST

So there’s no evidence of dark matter. Yet the neutron lifetime discrepancy is stronger than ever. And whether free neutrons live 14 minutes and 39 or 48 seconds, on average, actually matters.

Physicists need to know the neutron’s lifetime in order to calculate the relative abundances of hydrogen and helium that would have been produced during the universe’s first few minutes. The faster neutrons decayed to protons in that period, the fewer would have existed later to be incorporated into helium nuclei. “That balance of hydrogen and helium is first of all a very sensitive test of the dynamics of the Big Bang,” said Geoffrey Greene, a nuclear physicist at the University of Tennessee and Oak Ridge National Laboratory, “but it also tells us how stars are going to form over the next billions of years,” since galaxies with more hydrogen form more massive, and eventually more explosive, stars. Thus, the neutron lifetime affects predictions of the universe’s far future.

Furthermore, both neutrons and protons are actually composites of elementary particles called quarks that are held together by gluons. Outside of stable atomic nuclei, neutrons decay when one of their down quarks undergoes weak nuclear decay into an up quark, transforming the neutron into a positively charged proton and spitting out a negative electron and an antineutrino in compensation. Quarks and gluons can’t themselves be studied in isolation, which makes neutron decays, in Greene’s words, “our best surrogate for the elementary quark interactions.”
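
Written as a reaction, the free-neutron decay described above is:

```latex
n \;\longrightarrow\; p + e^{-} + \bar{\nu}_{e}
% A down quark in the neutron (udd) becomes an up quark, yielding a proton (uud)
% plus an electron and an electron antineutrino via the weak interaction.
```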

The lingering nine-second uncertainty in the neutron lifetime needs resolving for these reasons. But no one has a clue what’s wrong. Greene, who is a veteran of beam experiments, said, “All of us have gone over very carefully everybody’s experiment, and if we knew where the problem was we would identify it.”

The discrepancy first became a serious matter in 2005, when a group led by Anatoli Serebrov of the Petersburg Nuclear Physics Institute in Russia and physicists at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, reported bottle and beam measurements, respectively, that were individually very precise—the bottle measurement was estimated to be at most one second off, and the beam one at most three seconds—but which differed from each other by eight seconds.

Many design improvements, independent checks and head scratches later, the gap between the world-average bottle and beam measurements has only grown slightly—to nine seconds—while both error margins have shrunk. This leaves two possibilities, said Peter Geltenbort, a nuclear physicist at the Institut Laue-Langevin in France who was on Serebrov’s team in 2005 and is now part of UCNtau: “Either there is really some exotic new physics,” or “everyone was overestimating their precision.”

Beam practitioners at NIST and elsewhere have worked to understand and minimize the many sources of uncertainty in their experiments, including in the intensity of their neutron beam, the volume of the detector that the beam passes through, and the efficiency of the detector, which picks up protons produced by decaying neutrons along the beam’s length. For years, Greene particularly mistrusted the beam-intensity measurement, but independent checks have exonerated it. “At this point I don’t have a best candidate of a systematic effect that’s been overlooked,” he said.

On the bottle side of the story, experts suspected that neutrons might be getting absorbed into their bottles’ walls despite the surfaces being coated with a smooth and reflective material, and even after correcting for wall losses by varying the bottle size. Alternatively, the standard way of counting surviving neutrons in the bottles might have been lossy.

But the new UCNtau experiment has eliminated both explanations. Instead of storing neutrons in a material bottle, the Los Alamos scientists trapped them using magnetic fields. And rather than transporting surviving neutrons to an external detector, they employed an in situ detector that dips into the magnetic bottle and quickly absorbs all the neutrons inside. (Each absorption produces a flash of light that gets picked up by phototubes.) Yet their final answer corroborates that of previous bottle experiments.

The only option is to press on. “Everybody is moving forward,” Morris said. He and the UCNtau team are still collecting data and finishing up an analysis that includes twice as much data as in the forthcoming Science paper. They aim to eventually measure tau with an uncertainty of just 0.2 second. On the beam side, a group at NIST led by Jeffrey Nico is taking data now and expects to have results in two years, aiming for one-second uncertainty, while another experiment, at the J-PARC accelerator facility in Japan, is also getting under way.

NIST and J-PARC will either corroborate UCNtau’s result, deciding the neutron lifetime once and for all, or the saga will continue.

“The tension that these two independent methods disagree is what drives the improvement in the experiments,” Greene said. If only the bottle or the beam technique had been developed, physicists might have gone forward with the wrong value for tau plugged into their calculations. “The virtue of having two independent methods is it keeps you honest. I used to work at the National Bureau of Standards, and they’d say, ‘A man with one watch knows what time it is; a man with two is never sure.’”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/how-long-can-a-neutron-live-depends-on-who-you-ask/

The Era of Quantum Computing Is Here. Outlook: Cloudy

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.
Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits—1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.
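
As a concrete illustration, here is a single qubit simulated in plain Python with NumPy, rather than on any real quantum hardware or SDK: the state is just two complex amplitudes, and the measurement probabilities are their squared magnitudes (the Born rule).

```python
import numpy as np

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1.  Here, an equal superposition of 0 and 1.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: measuring yields 0 or 1 with probabilities given by the
# squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)            # [0.5 0.5]

# Simulate a handful of measurements; on a real qubit each one would
# collapse the superposition to the value obtained.
print(np.random.choice([0, 1], size=10, p=probs))
```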

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states—a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
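
The doubling is easy to see in a minimal sketch: describing n qubits classically takes a vector of 2^n complex amplitudes, so each added qubit doubles the size of the description a classical machine must store.

```python
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # one qubit in superposition

state = np.array([1], dtype=complex)
for n in range(1, 11):
    state = np.kron(state, plus)   # tack on one more qubit (tensor product)
    print(f"{n:2d} qubits -> {state.size} amplitudes")

# Fifty qubits would need 2**50 (about 10**15) amplitudes, far more than
# an ordinary computer can hold in memory.
```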

Note that I’ve not said—as is often said—that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions—some of the time—but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.
Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits—and hence the potential to interact with the environment—increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with—you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
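
Here is that classical trick in miniature (a toy sketch, not production error correction): keep three copies of every bit and let a majority vote override a single random flip.

```python
import random

def store(bit):
    return [bit, bit, bit]                 # three backup copies

def read(copies):
    return 1 if sum(copies) >= 2 else 0    # majority vote

copies = store(1)
copies[random.randrange(3)] ^= 1           # a random flip hits one copy
print(copies, "->", read(copies))          # the majority still reads 1
```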

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead—all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.
Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
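
The flavor of the ancilla trick can be conveyed with a purely classical stand-in: for a three-bit repetition code, two parity checks pinpoint which bit flipped without ever revealing the encoded value itself. Quantum codes measure analogous parities through ancilla qubits so the data qubits are never read out directly. The sketch below is a classical illustration of that idea, not a quantum circuit.

```python
def syndrome(bits):
    s1 = bits[0] ^ bits[1]   # parity of bits 0 and 1
    s2 = bits[1] ^ bits[2]   # parity of bits 1 and 2
    return (s1, s2)

# Which single-bit error each syndrome pattern points to.
ERROR_LOCATION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    loc = ERROR_LOCATION[syndrome(bits)]
    if loc is not None:
        bits[loc] ^= 1       # undo the flip
    return bits

print(correct([1, 0, 1]))    # middle bit flipped -> restored to [1, 1, 1]
print(correct([0, 0, 0]))    # no error           -> stays [0, 0, 0]
```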

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit—a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
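
The logic of the zero-noise limit can be sketched with synthetic numbers; everything below (the noise amplification factors, the linear degradation, the observable) is invented for illustration and is not IBM's actual scheme. The idea is to rerun the same computation at deliberately amplified noise levels, fit a curve to the results, and read off the fitted value at zero noise.

```python
import numpy as np

true_value = 0.80                 # what an ideal, noiseless device would return

# Pretend we can rerun the computation with the noise amplified by known
# factors, and that the measured observable degrades roughly linearly.
noise_levels = np.array([1.0, 1.5, 2.0, 3.0])
rng = np.random.default_rng(0)
measured = true_value - 0.15 * noise_levels + rng.normal(0, 0.003, 4)

# Fit a straight line and extrapolate back to zero noise.
slope, intercept = np.polyfit(noise_levels, measured, deg=1)
print(f"raw result at normal noise: {measured[0]:.3f}")
print(f"zero-noise extrapolation:   {intercept:.3f}  (true value {true_value})")
```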

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

Lucy Reading-Ikkanda/Quanta Magazine

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties—such as stability and chemical reactivity—of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
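
The variational idea at the heart of VQE can be caricatured in a few lines: a classical optimizer tunes the parameters of a trial quantum state so as to minimize the measured energy. The 2-by-2 "Hamiltonian" and one-parameter trial state below are toy stand-ins invented for illustration, not the molecular problems or hardware in the IBM experiment; on a real device the energy would be estimated from repeated qubit measurements rather than by matrix algebra.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A toy single-qubit "Hamiltonian" (any Hermitian 2x2 matrix will do here).
H = np.array([[ 1.0,  0.5],
              [ 0.5, -1.0]])

def ansatz(theta):
    # One-parameter trial state |psi(theta)>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # Expectation value <psi|H|psi>.
    psi = ansatz(theta)
    return psi @ H @ psi

result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H).min()
print(f"variational estimate: {result.fun:.4f}   exact ground energy: {exact:.4f}")
```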

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just—or even mainly—how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched—if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
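
The depth budget follows from simple division. The numbers below are purely illustrative, taking "a few microseconds" of coherence and a gate time of tens of nanoseconds, and are not the specifications of any particular machine.

```python
# Back-of-the-envelope depth budget: how many sequential gate operations
# fit inside the coherence window?  Illustrative numbers only.
coherence_time = 5e-6    # seconds ("a few microseconds")
gate_time      = 50e-9   # seconds per gate (illustrative)

depth_budget = coherence_time / gate_time
print(f"Roughly {depth_budget:.0f} gates before decoherence wins.")
# About 100 operations: workable for shallow circuits, hopeless for deep ones.
```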

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
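
As a rough illustration of the trade-off quantum volume is meant to capture, here is a sketch loosely patterned on an early formulation floated by IBM researchers, in which the useful "square" circuit is limited both by qubit count and by how deep a circuit can run before an error is expected. Treat the formula as an assumption made for illustration, not as the official metric.

```python
def quantum_volume_sketch(num_qubits, eps):
    # Assumed form: the best achievable min(width, depth)^2, where the depth
    # reachable before an error is expected is roughly 1 / (width * eps) and
    # eps is an effective two-qubit gate error rate.
    best = 0
    for n in range(1, num_qubits + 1):
        achievable_depth = 1.0 / (n * eps)
        best = max(best, min(n, achievable_depth) ** 2)
    return best

# More qubits only help if the error rate keeps pace:
print(quantum_volume_sketch(50, eps=0.01))    # many noisy qubits   -> about 100
print(quantum_volume_sketch(20, eps=0.001))   # fewer, cleaner ones -> about 400
```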

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert—it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/story/the-era-of-quantum-computing-is-here-outlook-cloudy/