Whisper From the First Stars Sets Off Loud Dark Matter Debate

The news about the first stars in the universe always seemed a little off. Last July, Rennan Barkana, a cosmologist at Tel Aviv University, received an email from one of his longtime collaborators, Judd Bowman. Bowman leads a small group of five astronomers who built and deployed a radio telescope in remote western Australia. Its goal: to find the whisper of the first stars. Bowman and his team had picked up a signal that didn’t quite make sense. He asked Barkana to help him think through what could possibly be going on.

Quanta Magazine


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

For years, as radio telescopes scanned the sky, astronomers have hoped to glimpse signs of the first stars in the universe. Those objects are too faint and, at over 13 billion light-years away, too distant to be picked up by ordinary telescopes. Instead, astronomers search for the stars’ effects on the surrounding gas. Bowman’s instrument, like the others involved in the search, attempts to pick out a particular dip in radio waves coming from the distant universe.

The measurement is exceedingly difficult to make, since the potential signal can get swamped not only by the myriad radio sources of modern society—one reason the experiment is deep in the Australian outback—but by nearby cosmic sources such as our own Milky Way galaxy. Still, after years of methodical work, Bowman and his colleagues with the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) concluded not only that they had found the first stars, but that they had found evidence that the young cosmos was significantly colder than anyone had thought.

Barkana was skeptical, however. “On the one hand, it looks like a very solid measurement,” he said. “On the other hand, it is something very surprising.”

What could make the early universe appear cold? Barkana thought through the possibilities and realized that it could be a consequence of the presence of dark matter—the mysterious substance that pervades the universe yet escapes every attempt to understand what it is or how it works. He found that the EDGES result could be interpreted as a completely new way that ordinary material might be interacting with dark matter.

The EDGES group announced the details of this signal and the detection of the first stars in the March 1 issue of Nature. Accompanying their article was Barkana’s paper describing his novel dark matter idea. News outlets worldwide carried news of the discovery. “Astronomers Glimpse Cosmic Dawn, When the Stars Switched On,” the Associated Press reported, adding that “they may have detected mysterious dark matter at work, too.”

Yet in the weeks since the announcement, cosmologists around the world have expressed a mix of excitement and skepticism. Researchers who saw the EDGES result for the first time when it appeared in Nature have done their own analysis, showing that even if some kind of dark matter is responsible, as Barkana suggested, no more than a small fraction of it could be involved in producing the effect. (Barkana himself has been involved in some of these studies.) And experimental astronomers have said that while they respect the EDGES team and the careful work that they’ve done, such a measurement is too difficult to trust entirely. “If this weren’t a groundbreaking discovery, it would be a lot easier for people to just believe the results,” said Daniel Price, an astronomer at Swinburne University of Technology in Australia who works on similar experiments. “Great claims require great evidence.”

This message has echoed through the cosmology community since those Nature papers appeared.

The Source of a Whisper

The day after Bowman contacted Barkana to tell him about the surprising EDGES signal, Barkana drove with his family to his in-laws’ house. During the drive, he said, he contemplated this signal, telling his wife about the interesting puzzle Bowman had handed him.

Bowman and the EDGES team had been probing the neutral hydrogen gas that filled the universe during the first few hundred million years after the Big Bang. This gas tended to absorb ambient light, leading to what cosmologists poetically call the universe’s “dark ages.” Although the cosmos was filled with a diffuse ambient light from the cosmic microwave background (CMB)—the so-called afterglow of the Big Bang—this neutral gas absorbed it at specific wavelengths. EDGES searched for this absorption pattern.

As stars began to turn on in the universe, their energy would have heated the gas. Eventually the gas reached a high enough temperature that it no longer absorbed CMB radiation. The absorption signal disappeared, and the dark ages ended.

The absorption signal as measured by EDGES contains an immense amount of information. As the absorption pattern traveled across the expanding universe, the signal stretched. Astronomers can use that stretch to infer how long the signal has been traveling, and thus, when the first stars flicked on. In addition, the width of the detected signal corresponds to the amount of time that the gas was absorbing the CMB light. And the intensity of the signal—how much light was absorbed—relates to the temperature of the gas and the amount of light that was floating around at the time.
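The timing inference from that stretch can be sketched with simple arithmetic. A minimal illustration, assuming the standard 21-cm interpretation (the hydrogen line's rest-frame frequency is 1420.4 MHz, and the EDGES dip was reported centered near 78 MHz):

```python
# Sketch: what the "stretch" of the EDGES dip implies.
F_REST_MHZ = 1420.4   # rest-frame frequency of the neutral-hydrogen 21-cm line
f_obs_mhz = 78.0      # approximate center of the absorption dip reported by EDGES

# Cosmological redshift: the wave has been stretched by a factor (1 + z).
z = F_REST_MHZ / f_obs_mhz - 1
print(f"redshift z ≈ {z:.1f}")   # ≈ 17.2
```

A redshift of about 17 places the signal roughly 180 million years after the Big Bang, which is how the team dated the first stars.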

Many researchers find this final characteristic the most intriguing. “It’s a much stronger absorption than we had thought possible,” said Steven Furlanetto, a cosmologist at the University of California, Los Angeles, who has examined what the EDGES data would mean for the formation of the earliest galaxies.


The most obvious explanation for such a strong signal is that the neutral gas was colder than predicted, which would have allowed it to absorb even more background radiation. But how could the universe have unexpectedly cooled? “We’re talking about a period of time when stars are beginning to form,” Barkana said—the darkness before the dawn. “So everything is as cold as it can be. The question is: What could be even colder?”
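The link between gas temperature and absorption depth can be made concrete. In rough terms the depth scales as 1 - T_radio/T_gas, where T_radio is the background radiation temperature and T_gas the temperature of the hydrogen; the scaling is standard, but the specific temperatures below are ballpark figures for that era, not EDGES's fitted values:

```python
# Illustrative only: colder gas makes the 21-cm dip deeper.
def relative_depth(t_radio_k, t_gas_k):
    """Dimensionless factor controlling absorption depth (more negative = deeper)."""
    return 1 - t_radio_k / t_gas_k

# At the cosmic dawn the CMB had cooled to roughly 49 K, while ordinary
# adiabatic expansion would leave the gas near 7 K.
standard = relative_depth(49.0, 7.0)    # ≈ -6
# Halving the gas temperature (e.g. by dumping heat into dark matter)
# roughly doubles the depth factor:
colder = relative_depth(49.0, 3.5)      # ≈ -13
print(standard, colder)
```

This is why an unexpectedly deep dip reads most naturally as unexpectedly cold gas.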

As he parked at his in-laws’ house that July day, an idea came to him: Could it be dark matter? After all, dark matter doesn’t seem to interact with normal matter via the electromagnetic force — it doesn’t emit or absorb heat. So dark matter could have started out colder or been cooling much longer than normal matter at the beginning of the universe, and then continued to cool.

Over the next week, he worked on a theory of how a hypothetical form of dark matter called “millicharged” dark matter could have been responsible. Millicharged dark matter could interact with ordinary matter, but only very weakly. Intergalactic gas might then have cooled by “basically dumping heat into the dark matter sector where you can’t see it anymore,” Furlanetto explained. Barkana wrote the idea up and sent it off to Nature.

Then he began to work through the idea in more detail with several colleagues. Others did as well. As soon as the Nature papers appeared, several groups of theoretical cosmologists started to compare the behavior of this unexpected type of dark matter to what we know about the universe—the decades’ worth of CMB observations, data from supernova explosions, the results of collisions at particle accelerators like the Large Hadron Collider, and astronomers’ understanding of how the Big Bang produced hydrogen, helium and lithium during the universe’s first few minutes. If millicharged dark matter was out there, did all these other observations make sense?

Rennan Barkana, a cosmologist at Tel Aviv University, contributed the idea that a form of dark matter might explain why the early universe looked so cool in the EDGES observations. But he has also stayed skeptical about the findings.

They did not. More precisely, these researchers found that millicharged dark matter can only make up a small fraction of the total dark matter in the universe—too small a fraction to create the observed dip in the EDGES data. “You cannot have 100 percent of dark matter interacting,” said Anastasia Fialkov, an astrophysicist at Harvard University and the first author of a paper submitted to Physical Review Letters. Another paper that Barkana and colleagues posted on the preprint site arxiv.org concludes that this dark matter has an even smaller presence: Millicharged particles couldn’t account for more than 1 to 2 percent of the total dark matter content. Independent groups have reached similar conclusions.

If it’s not millicharged dark matter, then what might explain EDGES’ stronger-than-expected absorption signal? Another possibility is that extra background light existed during the cosmic dawn. If there were more radio waves than expected in the early universe, then “the absorption would appear stronger even though the gas itself is unchanged,” Furlanetto said. Perhaps the CMB wasn’t the only ambient light during the toddler years of our universe.

This idea doesn’t come entirely out of left field. In 2011, a balloon-lofted experiment called ARCADE 2 reported a background radio signal that was stronger than would have been expected from the CMB alone. Scientists haven’t yet been able to explain this result.

After the EDGES detection, a few groups of astronomers revisited these data. One group looked at black holes as a possible explanation, since black holes are the brightest extragalactic radio sources in the sky. Yet black holes also produce other forms of radiation, like X-rays, that haven’t been seen in the early universe. Because of this, astronomers remain skeptical that black holes are the answer.

Is It Real?

Perhaps the simplest explanation is that the data are just wrong. The measurement is incredibly difficult, after all. Yet by all accounts the EDGES team took exceptional care to cross-check all their data—Price called the experiment “exquisite”—which means that if there is a flaw in the data, it will be exceptionally hard to find.

This antenna for EDGES was deployed in 2015 at a remote location in western Australia where it would experience little radio interference.

The EDGES team deployed their radio antenna in September 2015. By December, they were seeing a signal, said Raul Monsalve, an experimental cosmologist at the University of Colorado, Boulder, and a member of the EDGES team. “We became suspicious immediately, because it was stronger than expected.”

And so they began what became a marathon of due diligence. They built a similar antenna and installed it about 150 meters away from the first one. They rotated the antennas to rule out environmental and instrumental effects. They used separate calibration and analysis techniques. “We made many, many kinds of cuts and comparisons and cross-checks to try to rule out the signal as coming from the environment or from some other source,” Monsalve said. “We didn’t believe ourselves at the beginning. We thought it was very suspicious for the signal to be this strong, and that’s why we took so long to publish.” They are convinced that they’re seeing a signal, and that the signal is unexpectedly strong.

“I do believe the result,” Price said, but he emphasized that testing for systematic errors in the data is still needed. He mentioned one area where the experiment could have overlooked a potential error: Any antenna’s sensitivity varies depending on the frequency it’s observing and the direction from which a signal is coming. Astronomers can account for these imperfections by either measuring them or modeling them. Bowman and colleagues chose to model them. Price suggests that the EDGES team members instead find a way to measure them and then reanalyze their signal with that measured effect taken into account.

The next step is for a second radio detector to see this signal, which would imply it’s from the sky and not from the EDGES antenna or model. Scientists with the Large-Aperture Experiment to Detect the Dark Ages (LEDA) project, located in California’s Owens Valley, are currently analyzing that instrument’s data. Then researchers will need to confirm that the signal is actually cosmological and not produced by our own Milky Way. This is not a simple problem. Our galaxy’s radio emission can be thousands of times stronger than cosmological signals.

On the whole, researchers regard both the EDGES measurement itself and its interpretation with a healthy skepticism, as Barkana and many others have put it. Scientists should be skeptical of a first-of-its-kind measurement—that’s how they ensure that the observation is sound, the analysis was completed accurately, and the experiment wasn’t in error. This is, ultimately, how science is supposed to work. “We ask the questions, we investigate, we exclude every wrong possibility,” said Tomer Volansky, a particle physicist at Tel Aviv University who collaborated with Barkana on one of his follow-up analyses. “We’re after the truth. If the truth is that it’s not dark matter, then it’s not dark matter.”


Read more: https://www.wired.com/story/whisper-from-the-first-stars-sets-off-loud-dark-matter-debate/

The world saw Stephen Hawking as an oracle. In fact, he was wonderfully human | Philip Ball

Like no other scientist, Hawking was romanticised by the public. His death allows us to see past the fairytale, says science writer Philip Ball

Poignantly, Stephen Hawking’s death at the age of 76 humanises him again. It’s not just that, as a public icon as recognisable as any A-list actor or rock star, he came to seem a permanent fixture of the cultural landscape. It was also that his physical manifestation (the immobile body in the customised wheelchair, the distinctive voice that pronounced with the oracular calm of HAL from 2001: A Space Odyssey) gave him the aura of a different kind of being, notoriously described by the anthropologist Hélène Mialet as “more machine than man”.

He was, of course, not only mortal but precariously so. His survival for more than half a century after his diagnosis with motor neurone disease, which shortly after his 21st birthday seemed to give him only a few years to live, is one of the most remarkable feats of determination, and sheer medical marvels, of our time. Equally astonishing was the life that Hawking wrought from that excruciatingly difficult circumstance. It was not so much a story of survival as a modern fairytale in which he, as the progress of his disease left him increasingly incapacitated, seemed only to grow in stature. He made seminal contributions to physics, wrote bestselling books, appeared in television shows, and commanded attention and awe at his every pronouncement.

This all meant that his science was, to use a zeitgeisty word, performative. To the world at large it was not so much what he said that mattered, but the manner and miracle of its delivery. As his Reith Lectures in 2015 demonstrated, he was not in fact a natural communicator: all those feeling guilty at never having finished A Brief History of Time need not feel so bad, as he was no different from many scientists in struggling to translate complex ideas into simple language. But as I sat in the audience for those lectures (delayed because of Hawking’s faltering health), it felt clearer than ever that there was a ritualistic element to the whole affair. We were there not so much to learn about black holes and cosmology as to pay our respects to an important cultural presence.

Without that performance, Hawking the scientist would be destined to become like any other after his death: a name in a citation, Hawking S, Nature volume 248, pages 30-31 (1974). What, then, will endure?

Quite a lot. Hawking’s published work, disconnected from the legend of the man, reveals him to be a physicist of the highest calibre, who will be remembered in particular for some startlingly inventive and imaginative contributions to the field of general relativity: the study of the theory of gravity first proposed by Albert Einstein in 1916. At the same time, it shows that he has no real claim to being Einstein’s successor. The romanticising of Hawking brings, for a scientist, the temptation to want to cut him down to size. The Nobel committee never found that his work quite met the mark, partly, perhaps, because it dealt in ideas that are difficult to verify, applying as they do to objects like black holes that are not easy to investigate. The lack of a Nobel seemed to trouble him; but he was, without question, in with a shout for one.

That 1974 paper in Nature will be one of the most enduring, offering a memorable contribution to our understanding of black holes. These are created when massive objects such as stars undergo runaway collapse under their own gravity to become what general relativity insists is a singularity: a point of infinite density, surrounded by a gravitational field so strong that, within a certain distance called the event horizon, not even light can escape.

The very idea of black holes seemed to many astrophysicists to be an affront to reason until a renaissance of interest in general relativity in the 1960s (which the young Hawking helped to boost) got them taken seriously. Hawking’s paper argued that black holes will radiate energy from close to the event horizon (the origin of the somewhat gauche title of one of his Reith Lectures, “Black holes ain’t as black as they are painted”) and that the process should cause primordial miniature black holes to explode catastrophically. Most physicists now accept the idea of Hawking radiation, although it has yet to be observed.

This work became a central pillar in research that has now linked several key, and hitherto disparate, areas of physical theory: general relativity, quantum mechanics, thermodynamics and information theory. Here Hawking, like any scientist, drew on the ideas of others, and not always graciously, as when he initially disparaged the suggestion of the young physicist Jacob Bekenstein that the surface area of a black hole’s event horizon is related to its thermodynamic entropy. Hawking’s recent efforts in this field have scarcely been decisive, but his colleagues were always eager to see what he had to say about it.

Less enduring will be his passion for a theory of everything, a notion described in his 1980 lecture in Cambridge when he became Lucasian Professor of Mathematics, the chair once occupied by Isaac Newton. It supplied a neat title for the 2014 biopic, but most physicists have fallen out of love with this ambitious project. That isn’t just because it has proved so difficult, becoming mired in a theoretical quagmire involving speculative ideas such as string theory that are beyond any obvious means of testing. It’s also because many see the idea as meaningless: physical theory is a hierarchy in which laws emerge at each level that can’t be discerned at a more reductive one. Hawking’s enthusiasm for a theory of everything highlights how he didn’t share Einstein’s breadth of vision in science, but focused almost exclusively on one subdiscipline of physics.

His death brings such limits into focus. His pronouncements on the death of philosophy now look naive, ill-informed and hubristic, but plenty of other scientists say such things without having to cope with seeing them carved in stone and pored over. His readiness to speak out on issues beyond his expertise had mixed results: his sparring with Jeremy Hunt over the NHS was cheering, but his vague musings about space travel, aliens and AI just got in the way of more sober debate.

As The Theory of Everything wasn’t afraid to show, Hawking was human, all too human. It feels something of a relief to be able to grant him that again: to see beyond the tropes, cartoons and clichés, and to find the man who lived with great fortitude and good humour inside the oracle that we made of him.

Philip Ball is a science writer. His latest book is The Water Kingdom: A Secret History of China

Read more: https://www.theguardian.com/commentisfree/2018/mar/15/stephen-hawking-oracel-wonderfully-human-scientist

Stephen Hawking, a Physicist Transcending Space and Time, Passes Away at 76

For arguably the most famous physicist on Earth, Stephen Hawking—who died Wednesday in Cambridge at 76 years old—was wrong a lot. He thought, for a while, that black holes destroyed information, which physics says is a no-no. He thought Cygnus X-1, an emitter of X-rays over 6,000 light years away, wouldn’t turn out to be a black hole. (It did.) He thought no one would ever find the Higgs boson, the particle indirectly responsible for the existence of mass in the universe. (Researchers at CERN found it in 2012.)

But Hawking was right a lot, too. He and the physicist Roger Penrose described singularities, mind-bending physical concepts where relativity and quantum mechanics collapse inward on each other—as at the heart of a black hole. It’s the sort of place that no human will ever see first-hand; the event horizon of a black hole smears matter across time and space like cosmic paste. But Hawking’s mind was singular enough to see it, or at least imagine it.

His calculations helped show that as the young universe expanded and grew through inflation, fluctuations at the quantum scale—the smallest possible gradation of matter—became the galaxies we see around us. No human will ever visit another galaxy, and the quantum realm barely waves at us in our technology, but Hawking envisioned them both. And he calculated that black holes could sometimes explode, an image that would vex even the best visual effects wizard.

More than that, he could explain it to the rest of us. Hawking was the Lucasian Chair of Mathematics at Cambridge until his retirement in 2009, the same position held by Isaac Newton, Charles Babbage, and Paul Dirac. But he was also a pre-eminent popularizer of some of the most brain-twisting concepts science has to offer. His 1988 book A Brief History of Time has sold more than 10 million copies. His image—in an electric wheelchair and speaking via a synthesizer because of complications of the degenerative disease amyotrophic lateral sclerosis, delivering nerdy zingers on TV shows like The Big Bang Theory and Star Trek: The Next Generation—defined “scientist” for the latter half of the 20th century perhaps as much as Albert Einstein’s mad hair and German accent did in the first half.

Possibly that’s because in addition to being brilliant, Hawking was funny. Or at least sly. He was a difficult student by his own account. Diagnosed with ALS in 1963 at the age of 21, he thought he’d have only two more years to live. When the disease didn’t progress that fast, Hawking is reported to have said, “I found, to my surprise, that I was enjoying life in the present more than before. I began to make progress with my research.” With his mobility limited by the use of a wheelchair, he sped in it, dangerously. He proved time travel didn't exist by throwing a party for time travelers, but not sending out invitations until the party was over. No one came. People learned about the things he got wrong because he’d bet other scientists—his skepticism that Cygnus X-1 was a black hole meant he owed Kip Thorne of Caltech a subscription to Penthouse. (In fact, as the terms of that bet hint, rumors of mistreatment of women dogged him.)

Hawking became as much a cultural icon as a scientific one. For a time police suspected his second wife and one-time nurse of abusing him; the events became the basis of an episode of Law and Order: Criminal Intent. He played himself on The Simpsons and was depicted on Family Guy and South Park. Eddie Redmayne played Hawking in a biopic.

In recent years he looked away from the depths of the universe and into humanity’s future, joining the technologist Elon Musk in warning against the dangers of intelligent computers. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” Hawking reportedly said at a talk last year. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.” In an interview with WIRED UK, he said: “Someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”

In 2016 he said that he thought humanity only had about 1,000 years left, thanks to AI, climate change, and other (avoidable) disasters. Last year he slashed that horizon by a factor of ten: just 100 years left, he warned, unless we changed our ways.

Hawking was taking an unusual step away from cosmology, and it was easy, perhaps, to dismiss that fear—why would someone who’d helped define what a singularity actually was warn people against the pseudo-singularity of Silicon Valley? Maybe Hawking will be as wrong on this one as he was about conservation of information in black holes. But Hawking always did see into realms no one else could—until he described them to the rest of us.


Read more: https://www.wired.com/story/stephen-hawking-a-physicist-transcending-space-and-time-passes-away-at-76/

Elusive Higgs-Like State Created in Exotic Materials

If you want to understand the personality of a material, study its electrons. Table salt forms cubic crystals because its atoms share electrons in that configuration; silver shines because its electrons absorb visible light and reradiate it back. Electron behavior causes nearly all material properties: hardness, conductivity, melting temperature.


Of late, physicists are intrigued by the way huge numbers of electrons can display collective quantum-mechanical behavior. In some materials, a trillion trillion electrons within a crystal can act as a unit, like fire ants clumping into a single mass to survive a flood. Physicists want to understand this collective behavior because of the potential link to exotic properties such as superconductivity, in which electricity can flow without any resistance.

Last year, two independent research groups designed crystals, known as two-dimensional antiferromagnets, whose electrons can collectively imitate the Higgs boson. By precisely studying this behavior, the researchers think they can better understand the physical laws that govern materials—and potentially discover new states of matter. It was the first time that researchers had been able to induce such “Higgs modes” in these materials. “You’re creating a little mini universe,” said David Alan Tennant, a physicist at Oak Ridge National Laboratory who led one of the groups along with Tao Hong, his colleague there.

Both groups induced electrons into Higgs-like activity by pelting their material with neutrons. During these tiny collisions, the electrons’ magnetic fields begin to fluctuate in a patterned way that mathematically resembles the Higgs boson.


The Higgs mode is not simply a mathematical curiosity. When a crystal’s structure permits its electrons to behave this way, the material most likely has other interesting properties, said Bernhard Keimer, a physicist at the Max Planck Institute for Solid State Research who coleads the other group.

That’s because when you get the Higgs mode to appear, the material should be on the brink of a so-called quantum phase transition. Its properties are about to change drastically, like a snowball on a sunny spring day. “The Higgs can help you understand the character of the quantum phase transition,” said Subir Sachdev, a physicist at Harvard University. These quantum effects often portend bizarre new material properties.

For example, physicists think that quantum phase transitions play a role in certain materials, known as topological insulators, that conduct electricity only on their surface and not in their interior. Researchers have also observed quantum phase transitions in high-temperature superconductors, although the significance of the phase transitions is still unclear. Whereas conventional superconductors need to be cooled to near absolute zero to observe such effects, high-temperature superconductors work at the relatively balmy conditions of liquid nitrogen, which is dozens of degrees higher.

Over the past few years, physicists have created the Higgs mode in other superconductors, but they can’t always understand exactly what’s going on. The typical materials used to study the Higgs mode have a complicated crystal structure that increases the difficulty of understanding the physics at work.

So both Keimer’s and Tennant’s groups set out to induce the Higgs mode in simpler systems. Their antiferromagnets were so-called two-dimensional materials: While each crystal exists as a 3-D chunk, those chunks are built out of stacked two-dimensional layers of atoms that act more or less independently. Somewhat paradoxically, it’s a harder experimental challenge to induce the Higgs mode in these two-dimensional materials. Physicists were unsure if it could be done.

Yet the successful experiments showed that it was possible to use existing theoretical tools to explain the evolution of the Higgs mode. Keimer’s group found that the Higgs mode parallels the behavior of the Higgs boson. Inside a particle accelerator like the Large Hadron Collider, a Higgs boson will quickly decay into other particles, such as photons. In Keimer’s antiferromagnet, the Higgs mode morphs into different collective-electron motion that resembles particles called Goldstone bosons. The group experimentally confirmed that the Higgs mode evolves according to their theoretical predictions.

Tennant’s group discovered how to make their material produce a Higgs mode that doesn’t die out. That knowledge could help them determine how to turn on other quantum properties, like superconductivity, in other materials. “What we want to understand is how to keep quantum behavior in systems,” said Tennant.

Both groups hope to go beyond the Higgs mode. Keimer aims to actually observe a quantum phase transition in his antiferromagnet, which may be accompanied by additional weird phenomena. “That happens quite a lot,” he said. “You want to study a particular quantum phase transition, and then something else pops up.”

They also just want to explore. They expect that more weird properties of matter are associated with the Higgs mode—potentially ones not yet envisioned. “Our brains don’t have a natural intuition for quantum systems,” said Tennant. “Exploring nature is full of surprises because it’s full of things we never imagined.”


Read more: https://www.wired.com/story/elusive-higgs-like-state-created-in-exotic-materials/

How Long Can a Neutron Live? Depends on Who You Ask

When physicists strip neutrons from atomic nuclei, put them in a bottle, then count how many remain there after some time, they infer that neutrons radioactively decay in 14 minutes and 39 seconds, on average. But when other physicists generate beams of neutrons and tally the emerging protons—the particles that free neutrons decay into—they peg the average neutron lifetime at around 14 minutes and 48 seconds.

The discrepancy between the “bottle” and “beam” measurements has persisted since both methods of gauging the neutron’s longevity began yielding results in the 1990s. At first, all the measurements were so imprecise that nobody worried. Gradually, though, both methods have improved, and still they disagree. Now, researchers at Los Alamos National Laboratory in New Mexico have made the most precise bottle measurement of the neutron lifetime yet, using a new type of bottle that eliminates possible sources of error in earlier designs. The result, which will soon appear in the journal Science, reinforces the discrepancy with beam experiments and increases the chance that it reflects new physics rather than mere experimental error.

But what new physics? In January, two theoretical physicists put forward a thrilling hypothesis about the cause of the discrepancy. Bartosz Fornal and Benjamin Grinstein of the University of California, San Diego, argued that neutrons might sometimes decay into dark matter—the invisible particles that seem to make up six-sevenths of the matter in the universe based on their gravitational influence, while evading decades of experimental searches. If neutrons sometimes transmogrify into dark matter particles instead of protons, then they would disappear from bottles at a faster rate than protons appear in beams, exactly as observed.

The UCNtau experiment at Los Alamos National Laboratory, which uses the “bottle method” to measure the neutron lifetime.

Fornal and Grinstein determined that, in the simplest scenario, the hypothetical dark matter particle’s mass must fall between 937.9 and 938.8 mega-electron volts, and that a neutron decaying into such a particle would emit a gamma ray of a specific energy. “This is a very concrete signal that experimentalists can look for,” Fornal said in an interview.

The UCNtau experimental team in Los Alamos—named for ultracold neutrons and tau, the Greek symbol for the neutron lifetime—heard about Fornal and Grinstein’s paper last month, just as they were gearing up for another experimental run. Almost immediately, Zhaowen Tang and Chris Morris, members of the collaboration, realized they could mount a germanium detector onto their bottle apparatus to measure gamma-ray emissions while neutrons decayed inside. “Zhaowen went off and built a stand, and we got together the parts for our detector and put them up next to the tank and started taking data,” Morris said.

Data analysis was similarly quick. On Feb. 7, just one month after Fornal and Grinstein’s hypothesis appeared, the UCNtau team reported the results of their experimental test on the physics preprint site arxiv.org: They claim to have ruled out the presence of the telltale gamma rays with 99 percent certainty. Commenting on the outcome, Fornal noted that the dark matter hypothesis is not entirely excluded: A second scenario exists in which the neutron decays into two dark matter particles, rather than one of them and a gamma ray. Without a clear experimental signature, this scenario will be far harder to test. (Fornal and Grinstein’s paper, and the UCNtau team’s, are now simultaneously under review for publication in Physical Review Letters.)

The proton detector at the National Institute of Standards and Technology used in the “beam method.”

So there’s no evidence of dark matter. Yet the neutron lifetime discrepancy is stronger than ever. And whether free neutrons live 14 minutes and 39 or 48 seconds, on average, actually matters.

Physicists need to know the neutron’s lifetime in order to calculate the relative abundances of hydrogen and helium that would have been produced during the universe’s first few minutes. The faster neutrons decayed to protons in that period, the fewer would have existed later to be incorporated into helium nuclei. “That balance of hydrogen and helium is first of all a very sensitive test of the dynamics of the Big Bang,” said Geoffrey Greene, a nuclear physicist at the University of Tennessee and Oak Ridge National Laboratory, “but it also tells us how stars are going to form over the next billions of years,” since galaxies with more hydrogen form more massive, and eventually more explosive, stars. Thus, the neutron lifetime affects predictions of the universe’s far future.
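The sensitivity described above can be illustrated with a deliberately simplified nucleosynthesis estimate. The freeze-out ratio, the timing, and the assumption that all surviving neutrons end up in helium-4 are textbook approximations, not figures from the article:

```python
import math

NP_FREEZEOUT = 1 / 6   # rough neutron-to-proton ratio at weak freeze-out
T_NUCLEO = 200.0       # rough time (s) when helium synthesis begins

def helium_mass_fraction(tau):
    """Toy primordial helium abundance: neutrons decay freely until
    nucleosynthesis, then all survivors are locked into helium-4."""
    np_ratio = NP_FREEZEOUT * math.exp(-T_NUCLEO / tau)
    return 2 * np_ratio / (1 + np_ratio)

# Both candidate lifetimes give a mass fraction near 0.23:
print(f"tau=879 s: Y = {helium_mass_fraction(879):.4f}")
print(f"tau=888 s: Y = {helium_mass_fraction(888):.4f}")
```

Both lifetimes land near the observed helium abundance of roughly 24 percent; the point is that the prediction shifts whenever tau does, which is why the nine seconds matter to cosmologists.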

Furthermore, both neutrons and protons are actually composites of elementary particles called quarks that are held together by gluons. Outside of stable atomic nuclei, neutrons decay when one of their down quarks undergoes weak nuclear decay into an up quark, transforming the neutron into a positively charged proton and spitting out a negative electron and an antineutrino in compensation. Quarks and gluons can’t themselves be studied in isolation, which makes neutron decays, in Greene’s words, “our best surrogate for the elementary quark interactions.”

The lingering nine-second uncertainty in the neutron lifetime needs resolving for these reasons. But no one has a clue what’s wrong. Greene, who is a veteran of beam experiments, said, “All of us have gone over very carefully everybody’s experiment, and if we knew where the problem was we would identify it.”

The discrepancy first became a serious matter in 2005, when a group led by Anatoli Serebrov of the Petersburg Nuclear Physics Institute in Russia and physicists at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, reported bottle and beam measurements, respectively, that were individually very precise—the bottle measurement was estimated to be at most one second off, and the beam one at most three seconds—but which differed from each other by eight seconds.
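Taken at face value, those 2005 numbers already put the two methods in mild tension. A two-line check of the statistical significance, assuming independent Gaussian errors:

```python
import math

# 2005 values from the article: an 8-second gap, with roughly
# 1 s (bottle) and 3 s (beam) uncertainties.
gap = 8.0
combined_error = math.sqrt(1.0**2 + 3.0**2)  # independent errors add in quadrature
print(f"{gap / combined_error:.1f} sigma")   # prints "2.5 sigma"
```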

Many design improvements, independent checks and head scratches later, the gap between the world-average bottle and beam measurements has only grown slightly—to nine seconds—while both error margins have shrunk. This leaves two possibilities, said Peter Geltenbort, a nuclear physicist at the Institut Laue-Langevin in France who was on Serebrov’s team in 2005 and is now part of UCNtau: “Either there is really some exotic new physics,” or “everyone was overestimating their precision.”

Beam practitioners at NIST and elsewhere have worked to understand and minimize the many sources of uncertainty in their experiments, including in the intensity of their neutron beam, the volume of the detector that the beam passes through, and the efficiency of the detector, which picks up protons produced by decaying neutrons along the beam’s length. For years, Greene particularly mistrusted the beam-intensity measurement, but independent checks have exonerated it. “At this point I don’t have a best candidate of a systematic effect that’s been overlooked,” he said.

On the bottle side of the story, experts suspected that neutrons might be getting absorbed into their bottles’ walls despite the surfaces being coated with a smooth and reflective material, and even after correcting for wall losses by varying the bottle size. Alternatively, the standard way of counting surviving neutrons in the bottles might have been lossy.

But the new UCNtau experiment has eliminated both explanations. Instead of storing neutrons in a material bottle, the Los Alamos scientists trapped them using magnetic fields. And rather than transporting surviving neutrons to an external detector, they employed an in situ detector that dips into the magnetic bottle and quickly absorbs all the neutrons inside. (Each absorption produces a flash of light that gets picked up by phototubes.) Yet their final answer corroborates that of previous bottle experiments.

The only option is to press on. “Everybody is moving forward,” Morris said. He and the UCNtau team are still collecting data and finishing up an analysis that includes twice as much data as in the forthcoming Science paper. They aim to eventually measure tau with an uncertainty of just 0.2 second. On the beam side, a group at NIST led by Jeffrey Nico is taking data now and expects to have results in two years, aiming for one-second uncertainty, while an experiment in Japan called J-PARC is also getting under way.

NIST and J-PARC will either corroborate UCNtau’s result, deciding the neutron lifetime once and for all, or the saga will continue.

“The tension that these two independent methods disagree is what drives the improvement in the experiments,” Greene said. If only the bottle or the beam technique had been developed, physicists might have gone forward with the wrong value for tau plugged into their calculations. “The virtue of having two independent methods is it keeps you honest. I used to work at the National Bureau of Standards, and they’d say, ‘A man with one watch knows what time it is; a man with two is never sure.’”

Read more: https://www.wired.com/story/how-long-can-a-neutron-live-depends-on-who-you-ask/

Researchers share $22m Breakthrough prize as science gets rock star treatment

Glitzy ceremony honours work including that on mapping post-big bang primordial light, cell biology, plant science and neurodegenerative diseases

The glitziest event on the scientific calendar took place on Sunday night when the Breakthrough Foundation gave away $22m (£16.3m) in prizes to dozens of physicists, biologists and mathematicians at a ceremony in Silicon Valley.

The winners this year include five researchers who won $3m (£2.2m) each for their work on cell biology, plant science and neurodegenerative diseases, two mathematicians, and a team of 27 physicists who mapped the primordial light that warmed the universe moments after the big bang 13.8 billion years ago.

Now in their sixth year, the Breakthrough prizes are backed by Yuri Milner, a Silicon Valley tech investor, Mark Zuckerberg of Facebook and his wife Priscilla Chan, Anne Wojcicki from the DNA testing company 23andMe, and Google's Sergey Brin. Launched by Milner in 2012, the awards aim to make rock stars of scientists and raise their profile in the public consciousness.

The annual ceremony at Nasa's Ames Research Center in California provides a rare opportunity for some of the world's leading minds to rub shoulders with celebrities, who this year included Morgan Freeman as host, fellow actors Kerry Washington and Mila Kunis, and Miss USA 2017 Kára McCullough. When Joe Polchinski at the University of California in Santa Barbara shared the physics prize last year, he conceded his nieces and nephews would know more about the A-list attendees than he would.

Oxford University geneticist Kim Nasmyth won for his work on chromosomes but said he had not worked out what to do with the windfall. "It's a wonderful bonus, but not something you expect," he said. "It's a huge amount of money; I haven't had time to think it through." On being recognised for what amounts to his life's work, he added: "You have to do science because you want to know, not because you want to get recognition. If you do what it takes to please other people, you'll lose your moral compass." Nasmyth has won lucrative awards before and channelled some of his winnings into Gregor Mendel's former monastery in Brno.

Another life sciences prizewinner, Joanne Chory at the Salk Institute in San Diego, was honoured for three decades of painstaking research into the genetic programs that flip into action when plants find themselves plunged into shade. Her work revealed that plants can sense when a nearby competitor is about to steal their light, sparking a growth spurt in response. The plants detect threatening neighbours by sensing a surge in the particular wavelengths of red light that are given off by vegetation.

Chory now has ambitious plans to breed plants that can suck vast quantities of carbon dioxide out of the atmosphere in a bid to combat climate change. She believes that crops could be selected to absorb 20 times more of the greenhouse gas than they do today, and convert it into suberin, a waxy material found in roots and bark that breaks down incredibly slowly in soil. "If we can do this on 5% of the landmass people are growing crops on, we can take out 50% of global human emissions," she said.

Three other life sciences prizes went to Kazutoshi Mori at Kyoto University and Peter Walter for their work on quality control mechanisms that keep cells healthy, and to Don Cleveland at the University of California, San Diego, for his research on motor neurone disease.

The $3m Breakthrough prize in mathematics was shared by two British-born mathematicians, Christopher Hacon at the University of Utah and James McKernan at the University of California in San Diego. The pair made major contributions to a field of mathematics known as birational algebraic geometry, which sets the rules for projecting abstract objects with more than 1,000 dimensions onto lower-dimensional surfaces. "It gets very technical, very quickly," said McKernan.

Speaking before the ceremony, Hacon was feeling a little unnerved. "It's really not a mathematician kind of thing, but I'll probably survive," he said. "I've got a tux ready, but I'm not keen on wearing it." Asked what he might do with his share of the winnings, Hacon was nothing if not realistic. "I'll start by paying taxes," he said. "And I have six kids, so the rest will evaporate."

Chuck Bennett, an astrophysicist at Johns Hopkins University in Baltimore, led a Nasa mission known as the Wilkinson Microwave Anisotropy Probe (WMAP) to map the faint afterglow of the big bang's radiation that now permeates the universe. The achievement, now more than a decade old, won the 27-strong science team the $3m Breakthrough prize in fundamental physics. "When we made our first maps of the sky, I thought: these are beautiful," Bennett told the Guardian. "It is still absolutely amazing to me. We can look directly back in time."

Bennett believes that the prizes may help raise the profile of science at a time when it is sorely needed. "The point is not to make rock stars of us, but of the science itself," he said. "I don't think people realise how big a role science plays in their lives. In everything you do, from the moment you wake up to the moment you go to sleep, there's something about what you're doing that involves scientific advances. I don't think people think about that at all."

Read more: https://www.theguardian.com/science/2017/dec/04/researchers-share-22m-breakthrough-prize-as-science-gets-rock-star-treatment

A Hidden Supercluster Could Solve the Mystery of the Milky Way

Glance at the night sky from a clear vantage point, and the thick band of the Milky Way will slash across the sky. But the stars and dust that paint our galaxy’s disk are an unwelcome sight to astronomers who study all the galaxies that lie beyond our own. It’s like a thick stripe of fog across a windshield, a blur that renders our knowledge of the greater universe incomplete. Astronomers call it the Zone of Avoidance.

Renée Kraan-Korteweg has spent her career trying to uncover what lies beyond the zone. She first caught a whiff of something spectacular in the background when, in the 1980s, she found hints of a potential cluster of objects on old photographic survey plates. Over the next few decades, the hints of a large-scale structure kept coming.

Late last year, Kraan-Korteweg and colleagues announced that they had discovered an enormous cosmic structure: a “supercluster” of thousands upon thousands of galaxies. The collection spans 300 million light years, stretching both above and below the galactic plane like an ogre hiding behind a lamppost. The astronomers call it the Vela Supercluster, for its approximate position around the constellation Vela.

Renée Kraan-Korteweg, an astronomer at the University of Cape Town, has spent decades trying to peer through the Zone of Avoidance.
University of Cape Town

Milky Way Movers

The Milky Way, just like every galaxy in the cosmos, moves. While everything in the universe is constantly moving because the universe itself is expanding, since the 1970s astronomers have known of an additional motion, called peculiar velocity. This is a different sort of flow that we seem to be caught in. The Local Group of galaxies—a collection that includes the Milky Way, Andromeda and a few dozen smaller galactic companions—moves at about 600 kilometers per second with respect to the leftover radiation from the Big Bang.

Over the past few decades, astronomers have tallied up all the things that could be pulling and pushing on the Local Group — nearby galaxy clusters, superclusters, walls of clusters and cosmic voids that exert a non-negligible gravitational pull on our own neighborhood.

The biggest tugboat is the Shapley Supercluster, a behemoth of 50 million billion solar masses that resides about 500 million light years away from Earth (and not too far away in the sky from the Vela Supercluster). It accounts for between a quarter and half of the Local Group’s peculiar velocity.

The Milky Way as seen by the Gaia satellite shows the dark clouds of dust that obscure the view of galaxies in the universe beyond.

The remaining motion can’t be accounted for by structures astronomers have already found. So astronomers keep looking farther out into the universe, tallying increasingly distant objects that contribute to the net gravitational pull on the Milky Way. Gravitational pull decreases with increasing distance, but the effect is partly offset by the increasing size of these structures. “As the maps have gone outward,” said Mike Hudson, a cosmologist at the University of Waterloo in Canada, “people continue to identify bigger and bigger things at the edge of the survey. We’re looking out farther, but there’s always a bigger mountain just out of sight.” So far astronomers have only been able to account for about 450 to 500 kilometers per second of the Local Group’s motion.

Astronomers still haven’t fully scoured the Zone of Avoidance to those same depths, however. And the Vela Supercluster discovery shows that something big can be out there, just out of reach.

In February 2014, Kraan-Korteweg and Michelle Cluver, an astronomer at the University of Western Cape in South Africa, set out to map the Vela Supercluster over a six-night observing run at the Anglo-Australian Telescope in Australia. Kraan-Korteweg, of the University of Cape Town, knew where the gas and dust in the Zone of Avoidance was thickest; she targeted individual spots where they had the best chance of seeing through the zone. The goal was to create a “skeleton,” as she calls it, of the structure. Cluver, who had prior experience with the instrument, would read off the distances to individual galaxies.

That project allowed them to conclude that the Vela Supercluster is real, and that it extends 20 by 25 degrees across the sky. But they still don’t understand what’s going on in the core of the supercluster. “We see walls crossing the Zone of Avoidance, but where they cross, we don’t have data at the moment because of the dust,” Kraan-Korteweg said. How are those walls interacting? Have they started to merge? Is there a denser core, hidden by the Milky Way’s glow?

And most important, what is the Vela Supercluster's mass? After all, it is mass that governs the pull of gravity, the buildup of structure.

How to See Through the Haze

While the Zone’s dust and stars block out light in optical and infrared wavelengths, radio waves can pierce through the region. With that in mind, Kraan-Korteweg has a plan to use a type of cosmic radio beacon to map out everything behind the thickest parts of the Zone of Avoidance.

The plan hinges on hydrogen, the simplest and most abundant gas in the universe. Atomic hydrogen is made of a single proton and an electron. Both the proton and the electron have a quantum property called spin, which can be thought of as a little arrow attached to each particle. In hydrogen, these spins can line up parallel to each other, with both pointing in the same direction, or antiparallel, pointing in opposite directions. Occasionally a spin will flip—a parallel atom will switch to antiparallel. When this happens, the atom will release a photon of light with a particular wavelength.

One of the 64 antenna dishes that will make up the MeerKAT telescope in South Africa.
SKA South Africa

The likelihood of one hydrogen atom’s emitting this radio wave is low, but gather a lot of neutral hydrogen gas together, and the chance of detecting it increases. Luckily for Kraan-Korteweg and her colleagues, many of Vela’s member galaxies have a lot of this gas.

During that 2014 observing session, she and Cluver saw indications that many of their identified galaxies host young stars. “And if you have young stars, it means they recently formed, it means there’s gas,” Kraan-Korteweg said, because gas is the raw material that makes stars.

The Milky Way has some of this hydrogen, too—another foreground haze to interfere with observations. But the expansion of the universe can be used to identify hydrogen coming from the Vela structure. As the universe expands, it pulls away galaxies that lie outside our Local Group and shifts the radio light toward the red end of the spectrum. “Those emission lines separate, so you can pick them out,” said Thomas Jarrett, an astronomer at the University of Cape Town and part of the Vela Supercluster discovery team.
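The redshift trick Jarrett describes is easy to quantify. The 21 cm rest frequency is a physical constant; the example redshift below is illustrative, not a figure from the article:

```python
# Rest frequency of the neutral-hydrogen spin-flip ("21 cm") line:
F_REST_MHZ = 1420.405751

def observed_frequency(z):
    """Observed frequency of the 21 cm line from a source at redshift z."""
    return F_REST_MHZ / (1 + z)

# Local (Milky Way) hydrogen sits at z ~ 0, while gas carried along by
# the cosmic expansion is shifted to lower frequency.  Using an
# illustrative redshift of z = 0.06:
print(f"{observed_frequency(0.0):.1f} MHz")   # ~1420.4 MHz
print(f"{observed_frequency(0.06):.1f} MHz")  # ~1340.0 MHz
```

Because the two signals arrive at cleanly separated frequencies, a radio survey can mask out the foreground hydrogen and keep only the distant structure.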

While Kraan-Korteweg’s work over her career has dug up some 5,000 galaxies in the Vela Supercluster, she is confident that a sensitive enough radio survey of this neutral hydrogen gas will triple that number and reveal structures that lie behind the densest part of the Milky Way’s disk.

That’s where the MeerKAT radio telescope enters the picture. Located near the small desert town of Carnarvon, South Africa, the instrument will be more sensitive than any radio telescope on Earth. Its 64th and final antenna dish was installed in October, although some dishes still need to be linked together and tested. A half array of 32 dishes should be operating by the end of this year, with the full array following early next year.

Kraan-Korteweg has been pushing over the past year for observing time in this half-array stage, but if she isn't awarded her requested 200 hours, she's hoping for 50 hours on the full array. Both options provide the same sensitivity, which she and her colleagues need to detect the radio signals of neutral hydrogen in thousands of individual galaxies hundreds of millions of light years away. Armed with that data, they'll be able to map what the full structure actually looks like.

Cosmic Basins

Hélène Courtois, an astronomer at the University of Lyon, is taking a different approach to mapping Vela. She makes maps of the universe that she compares to watersheds, or basins. In certain areas of the sky, galaxies migrate toward a common point, just as all the rain in a watershed flows into a single lake or stream. She and her colleagues look for the boundaries, the tipping points of where matter flows toward one basin or another.

Hélène Courtois, an astronomer at the University of Lyon, maps cosmic structure by examining the flow of galaxies.
Eric Leroux, Université Claude Bernard Lyon 1

A few years ago, Courtois and colleagues used this method to attempt to define our local large-scale structure, which they call Laniakea. The emphasis on defining is important, Courtois explains, because while we have definitions of galaxies and galaxy clusters, there’s no commonly agreed-upon definition for larger-scale structures in the universe such as superclusters and walls.

Part of the problem is that there just aren’t enough superclusters to arrive at a statistically rigorous definition. We can list the ones we know about, but as aggregate structures filled with thousands of galaxies, superclusters show an unknown amount of variation.

Now Courtois and colleagues are turning their attention farther out. "Vela is the most intriguing," Courtois said. "I want to try to measure the basin of attraction, the boundary, the frontier of Vela." She is using her own data to find the flows that move toward Vela, and from that she can infer how much mass is pulling on those flows. By comparing those flow lines to Kraan-Korteweg's map showing where the galaxies physically cluster together, they can try to address how dense a supercluster Vela is and how far it extends. "The two methods are totally complementary," Courtois added.

The two astronomers are now collaborating on a map of Vela. When it’s complete, the astronomers hope that they can use it to nail down Vela’s mass, and thus the puzzle of the remaining piece of the Local Group’s motion—“that discrepancy that has been haunting us for 25 years,” Kraan-Korteweg said. And even if the supercluster isn’t responsible for that remaining motion, collecting signals through the Zone of Avoidance from whatever is back there will help resolve our place in the universe.

Read more: https://www.wired.com/story/a-hidden-supercluster-could-solve-the-mystery-of-the-milky-way/

Study reveals why so many met a sticky end in Boston’s Great Molasses Flood

In 1919, a tank holding 2.3m gallons of molasses burst, causing tragedy. Scientists now understand why the syrup tsunami was so deadly

It may sound like the fantastical plot of a children's story, but Boston's Great Molasses Flood was one of the most destructive and sombre events in the city's history.

On 15 January 1919, a muffled roar heard by residents was the only indication that an industrial-sized tank of syrup had burst open, unleashing a tsunami of sugary liquid through the North End district near the city's docks.

As the 15-foot (5-metre) wave swept through at around 35mph (56km/h), buildings were wrecked, wagons toppled, 21 people were left dead and about 150 were injured.

Now scientists have revisited the incident, providing new insights into why the physical properties of molasses proved so deadly.

Presenting the findings last weekend at the American Association for the Advancement of Science annual meeting in Boston, they said a key factor was that the viscosity of molasses increases dramatically as it cools.

This meant that the roughly 2.3m US gallons of molasses (8.7m litres) became more difficult to escape from as the evening drew in.
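The cooling effect can be sketched with a generic Arrhenius viscosity model. Every constant below is an illustrative placeholder, not a measured property of molasses:

```python
import math

# Arrhenius model for a liquid's viscosity: mu(T) = A * exp(Ea / (R * T)).
R = 8.314        # gas constant, J/(mol K)
EA = 90_000.0    # activation energy, J/mol (assumed value)
A = 1e-12        # prefactor, Pa s (assumed value)

def viscosity(temp_celsius):
    """Illustrative viscosity (Pa s) at a given temperature in Celsius."""
    t_kelvin = temp_celsius + 273.15
    return A * math.exp(EA / (R * t_kelvin))

# Cooling from a warm afternoon (~15 C) to a freezing evening (~0 C)
# multiplies the viscosity several-fold with these assumed constants:
ratio = viscosity(0) / viscosity(15)
print(f"viscosity grows by a factor of {ratio:.1f}")
```

Even with made-up constants, a 15-degree drop multiplies the viscosity roughly eightfold, which is the qualitative behavior the researchers describe: the syrup stiffened as the evening cooled.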

Speaking at the conference, Nicole Sharp, an aerospace engineer and author of the blog Fuck Yeah Fluid Dynamics, said: "The sun started going down and the rescue workers were still struggling to get to people and rescue them. At the same time the molasses is getting harder and harder to move through; it's getting harder and harder for people who are in the wreckage to keep their heads clear so they can keep breathing."

As the lake of syrup slowly dispersed, victims were left like gnats in amber, awaiting their cold, grisly death. One man, trapped in the rubble of a collapsed fire station, succumbed when he simply became too tired to sweep the molasses away from his face one last time.

"It's horrible in that the more tired they get, it's getting colder and literally more difficult for them to move the molasses," said Sharp.

Leading up to the disaster, there had been a cold snap in Boston and temperatures were as low as -16C (3F). The steel tank in the harbour, which had been built half as thick as model specifications, had already been showing signs of strain.

Two days before the disaster, the tank was about 70% full when a fresh shipment of warm molasses arrived from the Caribbean and the tank was filled to the top.

"One of the things people described would happen whenever they had a new molasses shipment was that the tank would rumble and groan," said Sharp. "People described being unnerved by the noises the tank would make after it got filled."

Ominously, the tank had also been leaking, which the company responded to by painting the tank brown.

"There were a lot of bad signs in this," said Sharp.

Sharp and a team of scientists at Harvard University performed experiments in a large refrigerator to model how corn syrup (standing in for molasses) behaves as temperature varies, confirming contemporary accounts of the disaster.

"Historical estimates said that the initial wave would have moved at 56km/h [35mph]," said Sharp. "When we take models … and then we put in the parameters for molasses, we get numbers that are on a par with that. Horses weren't able to run away from it. Horses and people and everything were all caught up in it."

The giant molasses wave follows the physical laws of a phenomenon known as a gravity current, in which a dense fluid expands mostly horizontally into a less dense fluid. "It's what lava flows are, it's what avalanches are, it's that awful draught that comes underneath your door in the wintertime," said Sharp.

The team used a geophysical model, developed by Professor Herbert Huppert of the University of Cambridge, whose work focuses on gravity currents in processes such as lava flows and shifting Antarctic ice sheets.

The model suggests that the molasses incident would have followed three main stages.

"The current first goes through a so-called slumping regime," said Huppert, outlining how the molasses would have lurched out of the tank in a giant looming mass.

"Then there's a regime where inertia plays a major role," he said. In this stage, the volume of fluid released is the most important factor determining how rapidly the front of the wave sweeps forward.

"Then the viscous regime generally follows," he concluded. This is what dictates how slowly the fluid spreads out, and it explains the grim consequences of the Boston disaster.

"It made a difference in how difficult it would be to rescue people and how difficult it would be to survive until you were rescued," said Sharp.
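The three stages can be sketched with the standard scaling laws for a two-dimensional, constant-volume gravity current. The order-one prefactors are omitted and the variable names are illustrative; this is a sketch of the textbook scalings, not the team's actual model:

```python
# Scalings for a 2-D gravity current released from a fixed volume.
# g_prime: reduced gravity; q: released volume per unit width; nu: kinematic viscosity.

def inertial_front(t, g_prime, q):
    """Inertia-dominated stage: the front advances like t**(2/3)."""
    return (g_prime * q * t**2) ** (1 / 3)

def viscous_front(t, g_prime, q, nu):
    """Viscosity-dominated stage: the front crawls forward like t**(1/5)."""
    return (g_prime * q**3 * t / nu) ** (1 / 5)

# Doubling the viscosity (the syrup cooling) slows the late-stage spread:
print(viscous_front(1.0, 1.0, 1.0, 2.0) < viscous_front(1.0, 1.0, 1.0, 1.0))  # True
```

The t^(2/3) growth of the inertial stage explains the wave's initial violence, while the t^(1/5) crawl of the viscous stage, made worse as the cooling syrup's viscosity climbed, explains why escape and rescue became so difficult.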

Read more: https://www.theguardian.com/science/2017/feb/25/study-reveals-why-so-many-met-a-sticky-end-in-bostons-great-molasses-flood

How Life (and Death) Spring From Disorder

What's the difference between physics and biology? Take a golf ball and a cannonball and drop them off the Tower of Pisa. The laws of physics allow you to predict their trajectories pretty much as accurately as you could wish for.

Now do the same experiment again, but replace the cannonball with a pigeon.

Biological systems don't defy physical laws, of course, but neither do they seem to be predicted by them. In contrast, they are goal-directed: survive and reproduce. We can say that they have a purpose, or what philosophers have traditionally called a teleology, that guides their behavior.

By the same token, physics now lets us predict, starting from the state of the universe a billionth of a second after the Big Bang, what it looks like today. But no one imagines that the appearance of the first primitive cells on Earth led predictably to the human race. Laws do not, it seems, dictate the course of evolution.

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology's only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.

Mayr claimed that these features make biology exceptional, a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.

Once we regard living things as agents performing a computation, collecting and storing information about an unpredictable environment, capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention, thought to be the defining characteristics of living systems, may then emerge naturally through the laws of thermodynamics and statistical mechanics.

This past November, physicists, mathematicians and computer scientists came together with evolutionary and molecular biologists to talk, and sometimes argue, about these ideas at a workshop at the Santa Fe Institute in New Mexico, the mecca for the science of complex systems. They asked: Just how special (or not) is biology?

It's hardly surprising that there was no consensus. But one message that emerged very clearly was that, if there's a kind of physics behind biological teleology and agency, it has something to do with the same concept that seems to have become installed at the heart of fundamental physics itself: information.

Disorder and Demons

The first attempt to bring information and intention into the laws of thermodynamics came in the middle of the 19th century, when statistical mechanics was being invented by the Scottish scientist James Clerk Maxwell. Maxwell showed how introducing these two ingredients seemed to make it possible to do things that thermodynamics proclaimed impossible.

Maxwell had already shown how the predictable and reliable mathematical relationships between the properties of a gas (pressure, volume and temperature) could be derived from the random and unknowable motions of countless molecules jiggling frantically with thermal energy. In other words, thermodynamics, the new science of heat flow, which united large-scale properties of matter like pressure and temperature, was the outcome of statistical mechanics on the microscopic scale of molecules and atoms.
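The jump from jiggling molecules to macroscopic law can be checked numerically. Here is a toy sketch (all parameter values are illustrative choices of mine, not from the article) that samples thermal molecular velocities and recovers the ideal-gas pressure from kinetic theory, P = Nm⟨v²⟩/3V:

```python
import math
import random

# Toy kinetic-theory check: macroscopic pressure from random molecular
# motion. All numbers are illustrative, not from the article.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
m = 4.65e-26         # mass of a nitrogen molecule, kg
N = 100_000          # number of sampled molecules
V = 1.0e-3           # container volume, m^3

random.seed(0)
sigma = math.sqrt(k_B * T / m)   # std dev of each velocity component
# <v^2> averaged over molecules; each of the 3 components is Gaussian
mean_v2 = sum(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3))
              for _ in range(N)) / N

P_kinetic = N * m * mean_v2 / (3 * V)  # pressure from molecular motion
P_ideal = N * k_B * T / V              # ideal-gas law, for comparison
print(P_kinetic, P_ideal)              # agree to within sampling noise
```

The pressure of this (very dilute) toy gas comes out the same whether you compute it from the unknowable individual motions or from the macroscopic law, which is Maxwell's point in miniature.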

According to thermodynamics, the capacity to extract useful work from the energy resources of the universe is always diminishing. Pockets of energy are declining, concentrations of heat are being smoothed away. In every physical process, some energy is inevitably dissipated as useless heat, lost among the random motions of molecules. This randomness is equated with the thermodynamic quantity called entropy, a measurement of disorder, which is always increasing. That is the second law of thermodynamics. Eventually all the universe will be reduced to a uniform, boring jumble: a state of equilibrium, wherein entropy is maximized and nothing meaningful will ever happen again.

Are we really doomed to that dreary fate? Maxwell was reluctant to believe it, and in 1867 he set out to, as he put it, "pick a hole" in the second law. His aim was to start with a disordered box of randomly jiggling molecules, then separate the fast molecules from the slow ones, reducing entropy in the process.

Imagine some little creature (the physicist William Thomson later called it, rather to Maxwell's dismay, a demon) that can see each individual molecule in the box. The demon separates the box into two compartments, with a sliding door in the wall between them. Every time he sees a particularly energetic molecule approaching the door from the right-hand compartment, he opens it to let it through. And every time a slow, cold molecule approaches from the left, he lets that through, too. Eventually, he has a compartment of cold gas on the right and hot gas on the left: a heat reservoir that can be tapped to do work.

This is only possible for two reasons. First, the demon has more information than we do: It can see all of the molecules individually, rather than just statistical averages. And second, it has intention: a plan to separate the hot from the cold. By exploiting its knowledge with intent, it can defy the laws of thermodynamics.
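The demon's trick is easy to caricature in code. This is a deliberately crude sketch of my own, not a physical simulation: the molecules are just a list of random speeds, and the demon sorts them into two compartments by inspecting each one individually:

```python
import random

# A toy Maxwell's demon (an illustrative caricature, not from the article):
# molecules start mixed; the demon inspects each one and sends fast
# molecules left and slow molecules right, "un-mixing" the heat.
random.seed(1)
speeds = [random.expovariate(1.0) for _ in range(10_000)]  # arbitrary units
median = sorted(speeds)[len(speeds) // 2]

left, right = [], []          # hot compartment, cold compartment
for v in speeds:              # the demon's sorting rule
    (left if v > median else right).append(v)

def mean_ke(vs):              # mean kinetic energy per molecule (take m = 2)
    return sum(v * v for v in vs) / len(vs)

print(mean_ke(left) > mean_ke(right))   # True: a temperature gap appears
```

The sorting really does produce a hot side and a cold side "for free," which is exactly why the demon's bookkeeping, rather than its door-opening, turns out to be where the second law collects its due.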

At least, so it seemed. It took a hundred years to understand why Maxwell's demon can't in fact defeat the second law and avert the inexorable slide toward deathly, universal equilibrium. And the reason shows that there is a deep connection between thermodynamics and the processing of information, or in other words, computation. The German-American physicist Rolf Landauer showed that even if the demon can gather information and move the (frictionless) door at no energy cost, a penalty must eventually be paid. Because it can't have unlimited memory of every molecular motion, it must occasionally wipe its memory clean, forgetting what it has seen and starting again, before it can continue harvesting energy. This act of information erasure has an unavoidable price: It dissipates energy, and therefore increases entropy. All the gains against the second law made by the demon's nifty handiwork are canceled by "Landauer's limit": the finite cost of information erasure (or more generally, of converting information from one form to another).

Living organisms seem rather like Maxwell's demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with "intention." Even simple bacteria move with purpose toward sources of heat and nutrition. In his 1944 book What Is Life?, the physicist Erwin Schrödinger expressed this by saying that living organisms feed on "negative entropy."

They achieve it, Schrödinger said, by capturing and storing information. Some of that information is encoded in their genes and passed on from one generation to the next: a set of instructions for reaping negative entropy. Schrödinger didn't know where the information is kept or how it is encoded, but his intuition that it is written into what he called an "aperiodic crystal" inspired Francis Crick, himself trained as a physicist, and James Watson when in 1953 they figured out how genetic information can be encoded in the molecular structure of DNA.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism's ancestors, right back to the distant past, to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with that environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it's this information that helps the organism stay out of equilibrium, because, like Maxwell's demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer's resolution of the conundrum of Maxwell's demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.

"The implication," he said, "is that natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform." In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one's way through life, he said, has been largely overlooked in biology so far.

Inanimate Darwinism

So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium. Sure, it's a bit of a mouthful. But notice that it said nothing about genes and evolution, on which Mayr, like many biologists, assumed that biological intention and purpose depend.

How far can this picture then take us? Genes honed by natural selection are undoubtedly central to biology. But could it be that evolution by natural selection is itself just a particular case of a more general imperative toward function and apparent purpose that exists in the purely physical universe? It is starting to look that way.

Adaptation has long been seen as the hallmark of Darwinian evolution. But Jeremy England at the Massachusetts Institute of Technology has argued that adaptation to the environment can happen even in complex nonliving systems.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there's no way of defining a well-adapted organism except in retrospect. The "fittest" are those that turned out to be better at survival and replication, but you can't predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.

England's definition of adaptation is closer to Schrödinger's, and indeed to Maxwell's: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she's better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.

Complex systems tend to settle into these well-adapted states with surprising ease, said England: "Thermally fluctuating matter often gets spontaneously beaten into shapes that are good at absorbing work from the time-varying environment."

There is nothing in this process that involves the gradual accommodation to the surroundings through the Darwinian mechanisms of replication, mutation and inheritance of traits. There's no replication at all. "What is exciting about this is that it means that when we give a physical account of the origins of some of the adapted-looking structures we see, they don't necessarily have to have had parents in the usual biological sense," said England. You can explain evolutionary adaptation using thermodynamics, even in intriguing cases where there are no self-replicators and Darwinian logic breaks down, so long as the system in question is complex, versatile and sensitive enough to respond to fluctuations in its environment.

But neither is there any conflict between physical and Darwinian adaptation. In fact, the latter can be seen as a particular case of the former. If replication is present, then natural selection becomes the route by which systems acquire the ability to absorb work (Schrödinger's negative entropy) from the environment. Self-replication is, in fact, an especially good mechanism for stabilizing complex systems, and so it's no surprise that this is what biology uses. But in the nonliving world where replication doesn't usually happen, the well-adapted dissipative structures tend to be ones that are highly organized, like sand ripples and dunes crystallizing from the random dance of windblown sand. Looked at this way, Darwinian evolution can be regarded as a specific instance of a more general physical principle governing nonequilibrium systems.

Prediction Machines

This picture of complex structures adapting to a fluctuating environment allows us also to deduce something about how these structures store information. In short, so long as such structures, whether living or not, are compelled to use the available energy efficiently, they are likely to become "prediction machines."

It's almost a defining characteristic of life that biological systems change their state in response to some driving signal from the environment. Something happens; you respond. Plants grow toward the light; they produce toxins in response to pathogens. These environmental signals are typically unpredictable, but living systems learn from experience, storing up information about their environment and using it to guide future behavior. (Genes, in this picture, just give you the basic, general-purpose essentials.)

Prediction isn't optional, though. According to the work of Susanne Still at the University of Hawaii, Gavin Crooks, formerly at the Lawrence Berkeley National Laboratory in California, and their colleagues, predicting the future seems to be essential for any energy-efficient system in a random, fluctuating environment.

There's a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn't bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. "A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia, the useless information about the past," said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information, that which is likely to be useful for future survival.

You'd expect natural selection to favor organisms that use energy efficiently. But even individual biomolecular devices like the pumps and motors in our cells should, in some important way, learn from the past to anticipate the future. To acquire their remarkable efficiency, Still said, these devices must "implicitly construct concise representations of the world they have encountered so far, enabling them to anticipate what's to come."

The Thermodynamics of Death

Even if some of these basic information-processing features of living systems are already prompted, in the absence of evolution or replication, by nonequilibrium thermodynamics, you might imagine that more complex traits (tool use, say, or social cooperation) must be supplied by evolution.

Well, don't count on it. These behaviors, commonly thought to be the exclusive domain of the highly advanced evolutionary niche that includes primates and birds, can be mimicked in a simple model consisting of a system of interacting particles. The trick is that the system is guided by a constraint: It acts in a way that maximizes the amount of entropy (in this case, defined in terms of the different possible paths the particles could take) it generates within a given timespan.

Entropy maximization has long been thought to be a trait of nonequilibrium systems. But the system in this model obeys a rule that lets it maximize entropy over a fixed time window that stretches into the future. In other words, it has foresight. In effect, the model looks at all the paths the particles could take and compels them to adopt the path that produces the greatest entropy. Crudely speaking, this tends to be the path that keeps open the largest number of options for how the particles might move subsequently.
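A drastically simplified, one-dimensional caricature of this idea (my own sketch, not Wissner-Gross and Freer's actual formulation) replaces path entropy with a simple count of reachable future positions. A greedy walker following that count backs away from any wall that would limit its options:

```python
# Toy "causal entropic force" (an illustrative caricature, not the
# published model): a walker on a line greedily moves so as to keep the
# most future positions reachable, and so backs away from the walls.

def reachable(pos, steps, lo=0, hi=20):
    """Count distinct positions reachable within `steps` moves of +/-1."""
    frontier = {pos}
    seen = {pos}
    for _ in range(steps):
        frontier = {q for p in frontier
                    for q in (p - 1, p + 1) if lo <= q <= hi}
        seen |= frontier
    return len(seen)

pos, horizon = 2, 8           # start near a wall; look 8 steps ahead
for _ in range(30):
    # take the legal move whose future holds the most reachable states
    moves = [q for q in (pos - 1, pos + 1) if 0 <= q <= 20]
    pos = max(moves, key=lambda q: reachable(q, horizon))
print(pos)    # the walker has backed well away from the wall it started at
```

The walker drifts off the wall and then hovers where its 8-step horizon no longer feels either boundary: a crude, greedy stand-in for the real model's averaging over entire future path ensembles, but it shows the same "keep your options open" behavior.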

You might say that the system of particles experiences a kind of urge to preserve freedom of future action, and that this urge guides its behavior at any moment. The researchers who developed the model, Alexander Wissner-Gross at Harvard University and Cameron Freer, a mathematician at the Massachusetts Institute of Technology, call this a "causal entropic force." In computer simulations of configurations of disk-shaped particles moving around in particular settings, this force creates outcomes that are eerily suggestive of intelligence.

In one case, a large disk was able to use a small disk to extract a second small disk from a narrow tube, a process that looked like tool use. Freeing the disk increased the entropy of the system. In another example, two disks in separate compartments synchronized their behavior to pull a larger disk down so that they could interact with it, giving the appearance of social cooperation.

Of course, these simple interacting agents get the benefit of a glimpse into the future. Life, as a general rule, does not. So how relevant is this for biology? That's not clear, although Wissner-Gross said that he is now working to establish a practical, biologically plausible mechanism for causal entropic forces. In the meantime, he thinks that the approach could have practical spinoffs, offering a shortcut to artificial intelligence. "I predict that a faster way to achieve it will be to discover such behavior first and then work backward from the physical principles and constraints, rather than working forward from particular calculation or prediction techniques," he said. In other words, first find a system that does what you want it to do and then figure out how it does it.

Aging, too, has conventionally been seen as a trait dictated by evolution. Organisms have a lifespan that creates opportunities to reproduce, the story goes, without inhibiting the survival prospects of offspring by the parents sticking around too long and competing for resources. That seems surely to be part of the story, but Hildegard Meyer-Ortmanns, a physicist at Jacobs University in Bremen, Germany, thinks that ultimately aging is a physical process, not a biological one, governed by the thermodynamics of information.

It's certainly not simply a matter of things wearing out. "Most of the soft material we are made of is renewed before it has the chance to age," Meyer-Ortmanns said. But this renewal process isn't perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.
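That precision-versus-energy trade-off can be caricatured numerically. In this toy model of mine (not Meyer-Ortmanns' actual analysis), each renewal cycle lets a small fraction of copy errors slip past a finite repair budget, and the functional fraction of the organism decays until it crosses a failure threshold:

```python
# Toy error-accumulation model (illustrative numbers of my own choosing):
# each renewal cycle copies the organism's parts with a small error rate;
# repair catches most, but not all, of the new errors, so damage compounds.
error_rate = 0.002    # per-part copy error probability per renewal cycle
repair = 0.7          # fraction of new errors caught and fixed
threshold = 0.8       # below this functional fraction, the organism fails

functional = 1.0      # fraction of parts still working
age = 0               # renewal cycles survived
while functional > threshold:
    escaped = functional * error_rate * (1 - repair)  # errors that stick
    functional -= escaped
    age += 1
print(age)    # cycles survived before crossing the failure threshold
```

Because the surviving error rate per cycle is fixed by the energy budget, the decay is geometric and the lifespan is finite no matter how good the repair is, short of perfection, which the thermodynamics of copying forbids at finite energy.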

Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (called the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can't survive much beyond age 100.

There's a corollary to this apparent urge for energy-efficient, organized, predictive systems to appear in a fluctuating nonequilibrium environment. We ourselves are such a system, as are all our ancestors back to the first primitive cell. And nonequilibrium thermodynamics seems to be telling us that this is just what matter does under such circumstances. In other words, the appearance of life on a planet like the early Earth, imbued with energy sources such as sunlight and volcanic activity that keep things churning out of equilibrium, starts to seem not an extremely unlikely event, as many scientists have assumed, but virtually inevitable. In 2006, Eric Smith and the late Harold Morowitz at the Santa Fe Institute argued that the thermodynamics of nonequilibrium systems makes the emergence of organized, complex systems much more likely on a prebiotic Earth far from equilibrium than it would be if the raw chemical ingredients were just sitting in a "warm little pond" (as Charles Darwin put it) stewing gently.

In the decade since that argument was first made, researchers have added detail and insight to the analysis. Those qualities that Ernst Mayr thought essential to biology, meaning and intention, may emerge as a natural consequence of statistics and thermodynamics. And those general properties may in turn lead naturally to something like life.

At the same time, astronomers have shown us just how many worlds there are, by some estimates stretching into the billions, orbiting other stars in our galaxy. Many are far from equilibrium, and at least a few are Earth-like. And the same rules are surely playing out there, too.


Read more: https://www.wired.com/2017/02/life-death-spring-disorder/

How Did Life Begin? Dividing Droplets Could Hold the Answer

A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth's primordial soup.

Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it "a big achievement" that suggests "the general phenomenology of life formation is a lot easier than one might think."


The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed protocells, and how did they come alive? Proponents of the membrane-first hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life's humble beginnings, proposed that the mystery protocells might have been liquid droplets: naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin's long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of chemically active droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This active droplet behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

"If chemically active droplets can grow to a set size and divide of their own accord, then it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup," said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

The findings, reported in Nature Physics last month, paint a possible picture of life's start by explaining "how cells made daughters," said Zwicker, who is now a postdoctoral researcher at Harvard University. "This is, of course, key if you want to think about evolution."

Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is significantly simpler than other mechanisms of protocell division that have been considered, calling it "a very promising direction."

However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is a far cry, he noted, from the complicated, multistep process by which modern cells divide.

Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it's plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago (tiny liquid aggregates of protein and RNA in cells of the worm C. elegans) explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, "the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place," he said.

"This Nature Physics paper takes that to the next level," by revealing the features that droplets would have needed "to play a role as protocells," Brangwynne added.

Droplets in Dresden

The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as "P granules" in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn't take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin's 1924 protocell theory. In a 2012 essay about Oparin's life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about "may still be alive and well, safe within our cells, like flies in life's evolving amber."

Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life, a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin's ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn't know either.

In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as out-of-equilibrium systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there's an energy source, molecules flow in and out of an active droplet. "In the context of early Earth, sunlight would be the driving force," Jülicher said.

Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker's simulations grew to tens or hundreds of microns across depending on their properties: the scale of cells.
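The size control has a simple geometric logic: material enters through the droplet's surface (which grows like R²) while it is lost from the droplet's volume (which grows like R³), so the two rates must cross at one fixed radius. A minimal numerical sketch with made-up rate constants (mine, not the paper's parameters) shows droplets of very different starting sizes converging to the same stable radius:

```python
import math

# Minimal active-droplet size control (illustrative rates, not the
# published model): influx through the surface (~R^2) balances efflux
# from the bulk (~R^3) at a single stable radius R* = 3*k_in/k_out.
k_in, k_out = 1.0, 0.03      # influx and efflux rate constants
dt = 0.01                    # integration time step

def evolve(R, steps=100_000):
    """Euler-integrate dV/dt = k_in*A - k_out*V; return final radius."""
    for _ in range(steps):
        A = 4 * math.pi * R ** 2            # surface area
        V = (4 / 3) * math.pi * R ** 3      # volume
        V = max(V + (k_in * A - k_out * V) * dt, 1e-9)
        R = (3 * V / (4 * math.pi)) ** (1 / 3)
    return R

# Droplets starting small and large converge to the same size
print(evolve(5.0), evolve(80.0))   # both approach R* = 3*k_in/k_out = 100
```

Because this sketch is spherically symmetric, it can only show the stable size, not the shape instability that pinches a droplet in two; that division behavior needs the full model described next.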


The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet's growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker's equations, "he immediately jumped on it and said, 'That looks very much like division,'" Zwicker said. "And then this whole protocell idea emerged quickly."

Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin's vision. "If you just think about droplets like Oparin did, then it's not clear how evolution could act on these droplets," Zwicker said. "For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex."

Globule Ancestor

Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

Tang's lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, which will be made in collaboration with Hyman's lab, is to try to observe centrosomes or other biological droplets dividing, and to determine if they utilize the mechanism identified in the paper by Zwicker and colleagues. "That would be a big deal," said Giomi, the Leiden biophysicist.

When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn't convinced of the effect's significance. There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide, he said.

Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, "I can go along with that," noting that he would define protocells as the first droplets that had membranes.

The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

The luckiest part of the whole process, in Jülicher's opinion, was not that droplets turned into cells, but that the first droplet, our globule ancestor, formed to begin with. Droplets require a lot of chemical material to spontaneously arise or "nucleate," and it's unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

"It's a very rare event. You have to wait a long time for it to happen," he said. "And once it happens, then the next things happen more easily, and more systematically."

Read more: https://www.wired.com/2017/01/life-begin-dividing-droplets-hold-answer/