Study reveals why so many met a sticky end in Boston’s Great Molasses Flood

In 1919, a tank holding 2.3m gallons of molasses burst, causing tragedy. Scientists now understand why the syrup tsunami was so deadly

It may sound like the fantastical plot of a children's story, but Boston's Great Molasses Flood was one of the most destructive and sombre events in the city's history.

On 15 January 1919, a muffled roar heard by residents was the only indication that an industrial-sized tank of syrup had burst open, unleashing a tsunami of sugary liquid through the North End district near the city's docks.

As the 15-foot (5-metre) wave swept through at around 35mph (56km/h), buildings were wrecked, wagons toppled, 21 people were left dead and about 150 were injured.

Now scientists have revisited the incident, providing new insights into why the physical properties of molasses proved so deadly.

Presenting the findings last weekend at the American Association for the Advancement of Science annual meeting in Boston, they said a key factor was that the viscosity of molasses increases dramatically as it cools.

This meant that the roughly 2.3m US gallons of molasses (8.7m litres) became more difficult to escape from as the evening drew in.
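
How steep that temperature dependence is can be illustrated with a simple Arrhenius-type viscosity model. The sketch below is only a hedged illustration: the reference viscosity and activation energy are assumed, order-of-magnitude values for molasses, not figures from the researchers' experiments.

```python
import math

# Arrhenius-type model: mu(T) = mu_ref * exp(Ea/R * (1/T - 1/T_ref)).
# The parameters below are illustrative assumptions, not measured values.
R = 8.314        # gas constant, J/(mol K)
Ea = 60_000.0    # assumed activation energy for molasses, J/mol
T_ref = 293.15   # reference temperature (20 C), K
mu_ref = 10.0    # assumed viscosity at 20 C, Pa s

def viscosity(T_celsius):
    """Estimated dynamic viscosity (Pa s) at the given temperature."""
    T = T_celsius + 273.15
    return mu_ref * math.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

for T in (20, 10, 4, 0, -10):
    print(f"{T:>4} C  ->  {viscosity(T):7.1f} Pa s")

# Even with these rough numbers the viscosity grows several-fold as the
# temperature falls toward freezing, which is the effect that made escape
# and rescue progressively harder as the January evening cooled.
```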

Speaking at the conference, Nicole Sharp, an aerospace engineer and author of the blog Fuck Yeah Fluid Dynamics, said: "The sun started going down and the rescue workers were still struggling to get to people and rescue them. At the same time the molasses is getting harder and harder to move through, it's getting harder and harder for people who are in the wreckage to keep their heads clear so they can keep breathing."

As the lake of syrup slowly dispersed, victims were left like gnats in amber, awaiting their cold, grisly death. One man, trapped in the rubble of a collapsed fire station, succumbed when he simply became too tired to sweep the molasses away from his face one last time.

"It's horrible in that the more tired they get, it's getting colder and literally more difficult for them to move the molasses," said Sharp.

Leading up to the disaster, there had been a cold snap in Boston and temperatures were as low as -16C (3F). The steel tank in the harbour, which had been built with walls only half as thick as the design specified, had already been showing signs of strain.

Two days before the disaster, the tank was about 70% full; then a fresh shipment of warm molasses arrived from the Caribbean and the tank was filled to the top.

"One of the things people described would happen whenever they had a new molasses shipment was that the tank would rumble and groan," said Sharp. "People described being unnerved by the noises the tank would make after it got filled."

Ominously, the tank had also been leaking, which the company responded to by painting the tank brown.

"There were a lot of bad signs in this," said Sharp.

Sharp and a team of scientists at Harvard University performed experiments in a large refrigerator to model how corn syrup (standing in for molasses) behaves as temperature varies, confirming contemporary accounts of the disaster.

"Historical estimates said that the initial wave would have moved at 56km/h [35mph]," said Sharp. "When we take models … and then we put in the parameters for molasses, we get numbers that are on a par with that. Horses weren't able to run away from it. Horses and people and everything were all caught up in it."

The giant molasses wave follows the physical laws of a phenomenon known as a gravity current, in which a dense fluid expands mostly horizontally into a less dense fluid. "It's what lava flows are, it's what avalanches are, it's that awful draught that comes underneath your door in the wintertime," said Sharp.

The team used a geophysical model, developed by Professor Herbert Huppert of the University of Cambridge, whose work focuses on gravity currents in processes such as lava flows and shifting Antarctic ice sheets.

The model suggests that the molasses incident would have followed three main stages.

"The current first goes through a so-called slumping regime," said Huppert, outlining how the molasses would have lurched out of the tank in a giant looming mass.

"Then there's a regime where inertia plays a major role," he said. In this stage, the volume of fluid released is the most important factor determining how rapidly the front of the wave sweeps forward.

"Then the viscous regime generally follows," he concluded. This is what dictates how slowly the fluid spreads out and explains the grim consequences of the Boston disaster.
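
For a fixed volume of released fluid, standard gravity-current theory of the kind Huppert developed gives rough scaling laws for how far the front has travelled in each of these regimes. The formulas below are the textbook two-dimensional results, quoted as a sketch rather than as the specific model used in the molasses study; g' is the reduced gravity (essentially the full gravitational acceleration for molasses spreading under air), h0 the initial depth, q the released volume per unit width and nu the kinematic viscosity.

```latex
% Slumping phase: the front advances at a roughly constant speed set by the
% initial depth h_0 and a Froude number Fr of order one.
u_f \approx \mathrm{Fr}\,\sqrt{g' h_0}

% Inertia-buoyancy (inertial) phase: the released volume controls the spreading.
x_f(t) \sim \left(g' q\right)^{1/3} t^{2/3}

% Viscous-buoyancy phase: viscosity takes over and the spreading slows sharply.
x_f(t) \sim \left(\frac{g' q^{3}}{\nu}\right)^{1/5} t^{1/5}
```

The sharp drop from the two-thirds-power to the one-fifth-power spreading law is what makes the final, viscous stage so slow, and cooling pushed the molasses into that stage while people were still trapped.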

"It made a difference in how difficult it would be to rescue people and how difficult it would be to survive until you were rescued," said Sharp.

Read more: https://www.theguardian.com/science/2017/feb/25/study-reveals-why-so-many-met-a-sticky-end-in-bostons-great-molasses-flood

How Life (and Death) Spring From Disorder

What's the difference between physics and biology? Take a golf ball and a cannonball and drop them off the Tower of Pisa. The laws of physics allow you to predict their trajectories pretty much as accurately as you could wish for.

Now do the same experiment again, but replace the cannonball with a pigeon.

Biological systems don't defy physical laws, of course, but neither do they seem to be predicted by them. On the contrary, they are goal-directed: survive and reproduce. We can say that they have a purpose (or what philosophers have traditionally called a teleology) that guides their behavior.

By the same token, physics now lets us predict, starting from the state of the universe a billionth of a second after the Big Bang, what it looks like today. But no one imagines that the appearance of the first primitive cells on Earth led predictably to the human race. Laws do not, it seems, dictate the course of evolution.

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology's only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.

Mayr claimed that these features make biology exceptional: a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.

Once we regard living things as agents performing a computation (collecting and storing information about an unpredictable environment), capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention, thought to be the defining characteristics of living systems, may then emerge naturally through the laws of thermodynamics and statistical mechanics.

This past November, physicists, mathematicians and computer scientists came together with evolutionary and molecular biologists to talk, and sometimes argue, about these ideas at a workshop at the Santa Fe Institute in New Mexico, the mecca for the science of complex systems. They asked: Just how special (or not) is biology?

It's hardly surprising that there was no consensus. But one message that emerged very clearly was that, if there's a kind of physics behind biological teleology and agency, it has something to do with the same concept that seems to have become installed at the heart of fundamental physics itself: information.

Disorder and Demons

The first attempt to bring information and intention into the laws of thermodynamics came in the middle of the 19th century, when statistical mechanics was being invented by the Scottish scientist James Clerk Maxwell. Maxwell showed how introducing these two ingredients seemed to make it possible to do things that thermodynamics proclaimed impossible.

Maxwell had already shown how the predictable and reliable mathematical relationships between the properties of a gas (pressure, volume and temperature) could be derived from the random and unknowable motions of countless molecules jiggling frantically with thermal energy. In other words, thermodynamics (the new science of heat flow, which united large-scale properties of matter like pressure and temperature) was the outcome of statistical mechanics on the microscopic scale of molecules and atoms.

According to thermodynamics, the capacity to extract useful work from the energy resources of the universe is always diminishing. Pockets of energy are declining, concentrations of heat are being smoothed away. In every physical process, some energy is inevitably dissipated as useless heat, lost among the random motions of molecules. This randomness is equated with the thermodynamic quantity called entropy (a measurement of disorder), which is always increasing. That is the second law of thermodynamics. Eventually all the universe will be reduced to a uniform, boring jumble: a state of equilibrium, wherein entropy is maximized and nothing meaningful will ever happen again.

Are we really doomed to that dreary fate? Maxwell was reluctant to believe it, and in 1867 he set out to, as he put it, "pick a hole" in the second law. His aim was to start with a disordered box of randomly jiggling molecules, then separate the fast molecules from the slow ones, reducing entropy in the process.

Imagine some little creature (the physicist William Thomson later called it, rather to Maxwell's dismay, a demon) that can see each individual molecule in the box. The demon separates the box into two compartments, with a sliding door in the wall between them. Every time he sees a particularly energetic molecule approaching the door from the right-hand compartment, he opens it to let it through. And every time a slow, cold molecule approaches from the left, he lets that through, too. Eventually, he has a compartment of cold gas on the right and hot gas on the left: a heat reservoir that can be tapped to do work.
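
The demon's sorting protocol is simple enough to caricature in a few lines of code. The toy Monte Carlo below is only a sketch of the rule described above (fast molecules are let through from right to left, slow ones from left to right), with arbitrary parameters, and it deliberately ignores the bookkeeping cost that the discussion turns to next.

```python
import random

random.seed(1)
N, STEPS, THRESHOLD = 2000, 200_000, 1.0   # molecules, door events, speed cutoff

# Each molecule is a [speed, side] pair; speeds come from an arbitrary
# distribution so that a sizeable fraction count as "fast".
molecules = [[random.expovariate(1.0), random.choice("LR")] for _ in range(N)]

def mean_speed(side):
    speeds = [s for s, loc in molecules if loc == side]
    return sum(speeds) / len(speeds)

print("before: left %.2f  right %.2f" % (mean_speed("L"), mean_speed("R")))

for _ in range(STEPS):
    mol = random.choice(molecules)            # a molecule wanders up to the door
    speed, side = mol
    if side == "R" and speed > THRESHOLD:     # fast molecule approaching from the right:
        mol[1] = "L"                          # the demon opens the door to the hot side
    elif side == "L" and speed <= THRESHOLD:  # slow molecule approaching from the left:
        mol[1] = "R"                          # let it through to the cold side

print("after:  left %.2f  right %.2f" % (mean_speed("L"), mean_speed("R")))
# The left compartment ends up hot and the right one cold: a temperature
# difference conjured purely from information about individual molecules.
```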

This is only possible for two reasons. First, the demon has more information than we do: It can see all of the molecules individually, rather than just statistical averages. And second, it has intention: a plan to separate the hot from the cold. By exploiting its knowledge with intent, it can defy the laws of thermodynamics.

At least, so it seemed. It took a hundred years to understand why Maxwell's demon can't in fact defeat the second law and avert the inexorable slide toward deathly, universal equilibrium. And the reason shows that there is a deep connection between thermodynamics and the processing of information, or in other words, computation. The German-American physicist Rolf Landauer showed that even if the demon can gather information and move the (frictionless) door at no energy cost, a penalty must eventually be paid. Because it can't have unlimited memory of every molecular motion, it must occasionally wipe its memory clean (forget what it has seen and start again) before it can continue harvesting energy. This act of information erasure has an unavoidable price: It dissipates energy, and therefore increases entropy. All the gains against the second law made by the demon's nifty handiwork are canceled by Landauer's limit: the finite cost of information erasure (or more generally, of converting information from one form to another).

Living organisms seem rather like Maxwell's demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with intention. Even simple bacteria move with purpose toward sources of heat and nutrition. In his 1944 book What is Life?, the physicist Erwin Schrödinger expressed this by saying that living organisms feed on "negative entropy."

They achieve it, Schrödinger said, by capturing and storing information. Some of that information is encoded in their genes and passed on from one generation to the next: a set of instructions for reaping negative entropy. Schrödinger didn't know where the information is kept or how it is encoded, but his intuition that it is written into what he called an "aperiodic crystal" inspired Francis Crick, himself trained as a physicist, and James Watson when in 1953 they figured out how genetic information can be encoded in the molecular structure of DNA.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism's ancestors (right back to the distant past) to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with that environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it's this information that helps the organism stay out of equilibrium, because, like Maxwell's demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer's resolution of the conundrum of Maxwell's demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.
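
The Landauer bound itself is a one-line calculation, so the comparison in this paragraph is easy to reproduce in rough form. In the sketch below, the "typical" energy for a conventional logic operation is an assumed ballpark figure for today's electronics, not a number taken from Wolpert's analysis.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit of information
print(f"Landauer limit at {T:.0f} K: {landauer:.2e} J per bit")   # about 3e-21 J

# Assumed ballpark for a full logic operation in conventional electronics today
# (illustrative only): tens of femtojoules.
typical_switch = 1e-14
print(f"conventional electronics: roughly {typical_switch / landauer:.0e} times the limit")

# The article's comparison point: a cell's total computation is estimated to run
# at only about 10 times the Landauer limit.
print("estimated cellular computation: ~10x the limit")
```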

The implication, he said, is that "natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform." In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one's way through life, he said, has been largely overlooked in biology so far.

Inanimate Darwinism

So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium. Sure, it's a bit of a mouthful. But notice that this description says nothing about genes and evolution, on which Mayr, like many biologists, assumed that biological intention and purpose depend.

How far can this picture then take us? Genes honed by natural selection are undoubtedly central to biology. But could it be that evolution by natural selection is itself just a particular case of a more general imperative toward function and apparent purpose that exists in the purely physical universe? It is starting to look that way.

Adaptation has long been seen as the hallmark of Darwinian evolution. But Jeremy England at the Massachusetts Institute of Technology has argued that adaptation to the environment can happen even in complex nonliving systems.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there's no way of defining a well-adapted organism except in retrospect. The fittest are those that turned out to be better at survival and replication, but you can't predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.

England's definition of adaptation is closer to Schrödinger's, and indeed to Maxwell's: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over, because she's better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.

Complex systems tend to settle into these well-adapted states with surprising ease, said England: "Thermally fluctuating matter often gets spontaneously beaten into shapes that are good at absorbing work from the time-varying environment."

There is nothing in this process that involves the gradual accommodation to the surroundings through the Darwinian mechanisms of replication, mutation and inheritance of traits. There's no replication at all. "What is exciting about this is that it means that when we give a physical account of the origins of some of the adapted-looking structures we see, they don't necessarily have to have had parents in the usual biological sense," said England. "You can explain evolutionary adaptation using thermodynamics, even in intriguing cases where there are no self-replicators and Darwinian logic breaks down," so long as the system in question is complex, versatile and sensitive enough to respond to fluctuations in its environment.

But neither is there any conflict between physical and Darwinian adaptation. In fact, the latter can be seen as a particular case of the former. If replication is present, then natural selection becomes the route by which systems acquire the ability to absorb work (Schrödinger's negative entropy) from the environment. Self-replication is, in fact, an especially good mechanism for stabilizing complex systems, and so it's no surprise that this is what biology uses. But in the nonliving world where replication doesn't usually happen, the well-adapted dissipative structures tend to be ones that are highly organized, like sand ripples and dunes crystallizing from the random dance of windblown sand. Looked at this way, Darwinian evolution can be regarded as a specific instance of a more general physical principle governing nonequilibrium systems.

Prediction Machines

This picture of complex structures adapting to a fluctuating environment allows us also to deduce something about how these structures store information. In short, so long as such structures, whether living or not, are compelled to use the available energy efficiently, they are likely to become prediction machines.

It's almost a defining characteristic of life that biological systems change their state in response to some driving signal from the environment. Something happens; you respond. Plants grow toward the light; they produce toxins in response to pathogens. These environmental signals are typically unpredictable, but living systems learn from experience, storing up information about their environment and using it to guide future behavior. (Genes, in this picture, just give you the basic, general-purpose essentials.)

Prediction isn't optional, though. According to the work of Susanne Still at the University of Hawaii, Gavin Crooks, formerly at the Lawrence Berkeley National Laboratory in California, and their colleagues, predicting the future seems to be essential for any energy-efficient system in a random, fluctuating environment.

There's a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn't bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. "A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia, the useless information about the past," said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information, that which is likely to be useful for future survival.
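
Schematically, and as a rough paraphrase of the Still-Crooks result rather than their exact statement, the bound ties the energy a system dissipates to the part of its memory that has no predictive value:

```latex
% s_t: the system's internal state (its memory); x_t: the environmental signal.
% Information kept about the signal, and the part of it that predicts the next signal:
I_{\mathrm{mem}} = I(s_t ; x_t), \qquad I_{\mathrm{pred}} = I(s_t ; x_{t+1})

% Schematic form of the bound: dissipation per step, in units of k_B T, is
% governed by the "nostalgia": remembered information with no predictive value.
\beta \, \langle W_{\mathrm{diss}} \rangle \;\gtrsim\; I_{\mathrm{mem}} - I_{\mathrm{pred}}
```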

You'd expect natural selection to favor organisms that use energy efficiently. But even individual biomolecular devices like the pumps and motors in our cells should, in some important way, learn from the past to anticipate the future. To acquire their remarkable efficiency, Still said, these devices must implicitly construct concise representations of the world they have encountered so far, enabling them to anticipate what's to come.

The Thermodynamics of Death

Even if some of these basic information-processing features of living systems are already prompted, in the absence of evolution or replication, by nonequilibrium thermodynamics, you might imagine that more complex traits (tool use, say, or social cooperation) must be supplied by evolution.

Well, don't count on it. These behaviors, commonly thought to be the exclusive domain of the highly advanced evolutionary niche that includes primates and birds, can be mimicked in a simple model consisting of a system of interacting particles. The trick is that the system is guided by a constraint: It acts in a way that maximizes the amount of entropy (in this case, defined in terms of the different possible paths the particles could take) it generates within a given timespan.

Entropy maximization has long been thought to be a trait of nonequilibrium systems. But the system in this model obeys a rule that lets it maximize entropy over a fixed time window that stretches into the future. In other words, it has foresight. In effect, the model looks at all the paths the particles could take and compels them to adopt the path that produces the greatest entropy. Crudely speaking, this tends to be the path that keeps open the largest number of options for how the particles might move subsequently.

You might say that the system of particles experiences a kind of urge to preserve freedom of future action, and that this urge guides its behavior at any moment. The researchers who developed the model (Alexander Wissner-Gross at Harvard University and Cameron Freer, a mathematician at the Massachusetts Institute of Technology) call this a "causal entropic force." In computer simulations of configurations of disk-shaped particles moving around in particular settings, this force creates outcomes that are eerily suggestive of intelligence.
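
The flavor of that "keep your options open" rule can be caricatured in a few lines of code. The toy below is emphatically not the authors' causal entropic force model, just a hedged stand-in: an agent on a line picks whichever move leaves its simulated futures most spread out, and as a result it drifts toward the middle of its box, where the largest number of future paths stay available.

```python
import random, statistics

random.seed(0)
L, HORIZON, SAMPLES = 20, 12, 200   # box size, lookahead steps, sampled futures

def future_spread(start):
    """Crude proxy for path entropy: spread of positions reachable from `start`."""
    endpoints = []
    for _ in range(SAMPLES):
        x = start
        for _ in range(HORIZON):
            x = min(L, max(0, x + random.choice((-1, 1))))  # random walk, kept in the box
        endpoints.append(x)
    return statistics.pvariance(endpoints)

x = 2                                     # start near a wall
for _ in range(30):
    moves = [m for m in (-1, 0, 1) if 0 <= x + m <= L]
    # Choose the move whose resulting position leaves the widest range of futures.
    x += max(moves, key=lambda m: future_spread(x + m))
print("final position:", x, "of", L)      # tends to settle near the centre of the box
```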

In one case, a large disk was able to use a small disk to extract a second small disk from a narrow tube, a process that looked like tool use. Freeing the disk increased the entropy of the system. In another example, two disks in separate compartments synchronized their behavior to pull a larger disk down so that they could interact with it, giving the appearance of social cooperation.

Of course, these simple interacting agents get the benefit of a glimpse into the future. Life, as a general rule, does not. So how relevant is this for biology? That's not clear, although Wissner-Gross said that he is now working to establish "a practical, biologically plausible, mechanism for causal entropic forces." In the meantime, he thinks that the approach could have practical spinoffs, offering a shortcut to artificial intelligence. "I predict that a faster way to achieve it will be to discover such behavior first and then work backward from the physical principles and constraints, rather than working forward from particular calculation or prediction techniques," he said. In other words, first find a system that does what you want it to do and then figure out how it does it.

Aging, too, has conventionally been seen as a trait dictated by evolution. Organisms have a lifespan that creates opportunities to reproduce, the story goes, without inhibiting the survival prospects of offspring by the parents sticking around too long and competing for resources. That surely seems to be part of the story, but Hildegard Meyer-Ortmanns, a physicist at Jacobs University in Bremen, Germany, thinks that ultimately aging is a physical process, not a biological one, governed by the thermodynamics of information.

It's certainly not simply a matter of things wearing out. "Most of the soft material we are made of is renewed before it has the chance to age," Meyer-Ortmanns said. But this renewal process isn't perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.

Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (a threshold called the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can't survive much beyond age 100.

There's a corollary to this apparent urge for energy-efficient, organized, predictive systems to appear in a fluctuating nonequilibrium environment. We ourselves are such a system, as are all our ancestors back to the first primitive cell. And nonequilibrium thermodynamics seems to be telling us that this is just what matter does under such circumstances. In other words, the appearance of life on a planet like the early Earth, imbued with energy sources such as sunlight and volcanic activity that keep things churning out of equilibrium, starts to seem not an extremely unlikely event, as many scientists have assumed, but virtually inevitable. In 2006, Eric Smith and the late Harold Morowitz at the Santa Fe Institute argued that the thermodynamics of nonequilibrium systems makes the emergence of organized, complex systems much more likely on a prebiotic Earth far from equilibrium than it would be if the raw chemical ingredients were just sitting in a "warm little pond" (as Charles Darwin put it) stewing gently.

In the decade since that argument was first made, researchers have added detail and insight to the analysis. Those qualities that Ernst Mayr thought essential to biology (meaning and intention) may emerge as a natural consequence of statistics and thermodynamics. And those general properties may in turn lead naturally to something like life.

At the same time, astronomers have shown us just how many worlds there are (by some estimates stretching into the billions) orbiting other stars in our galaxy. Many are far from equilibrium, and at least a few are Earth-like. And the same rules are surely playing out there, too.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/02/life-death-spring-disorder/

How Did Life Begin? Dividing Droplets Could Hold the Answer

A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth's primordial soup.

Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it a big achievement that suggests that the general phenomenology of life formation is a lot easier than one might think.

The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed protocells, and how did they come alive? Proponents of the membrane-first hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life's humble beginnings, proposed that the mystery protocells might have been liquid droplets: naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin's long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of chemically active droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This active droplet behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

"If chemically active droplets can grow to a set size and divide of their own accord, then it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup," said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

The findings, reported in Nature Physics last month, paint a possible picture of life's start by explaining "how cells made daughters," said Zwicker, who is now a postdoctoral researcher at Harvard University. "This is, of course, key if you want to think about evolution."

Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is significantly simpler than other mechanisms of protocell division that have been considered, calling it "a very promising direction."

However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is a far cry, he noted, from the complicated, multistep process by which modern cells divide.

Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it's plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago (tiny liquid aggregates of protein and RNA in cells of the worm C. elegans), explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, "the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place," he said.

"This Nature Physics paper takes that to the next level, by revealing the features that droplets would have needed to play a role as protocells," Brangwynne added.

Droplets in Dresden

The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as P granules in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn't take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin's 1924 protocell theory. In a 2012 essay about Oparin's life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about "may still be alive and well, safe within our cells, like flies in life's evolving amber."

Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life, a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin's ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn't know either.

In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as out-of-equilibrium systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there's an energy source, molecules flow in and out of an active droplet. "In the context of early Earth, sunlight would be the driving force," Jülicher said.

Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker's simulations grew to tens or hundreds of microns across depending on their properties, the scale of cells.
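
The size selection can be captured by a cartoon that is much simpler than Zwicker's actual equations: material enters a droplet through its surface, roughly in proportion to area, and is lost through the B-to-A conversion throughout its bulk, in proportion to volume, so growth stalls at a single stable size. The rate constants below are arbitrary illustrative choices.

```python
# Cartoon of an active droplet: gain scales with surface area (~V^(2/3)),
# loss scales with volume, so there is one stable volume. Parameters are
# arbitrary illustrative values, not numbers from the Nature Physics paper.
k_in, k_out, dt = 1.0, 0.1, 0.01   # influx per unit area, efflux per unit volume, time step

def grow(V, steps=50_000):
    for _ in range(steps):
        area = 4.836 * V ** (2.0 / 3.0)       # sphere: A = (36*pi)**(1/3) * V**(2/3)
        V += (k_in * area - k_out * V) * dt   # Euler step for dV/dt = influx - efflux
    return V

V_star = (4.836 * k_in / k_out) ** 3          # analytic fixed point of the cartoon
print("stable volume:", round(V_star))
for V0 in (1.0, 1e3, 1e6):                    # droplets both smaller and larger converge
    print(f"start {V0:>9.0f} -> {grow(V0):.0f}")
```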

The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet's growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker's equations, "he immediately jumped on it and said, 'That looks very much like division,'" Zwicker said. "And then this whole protocell idea emerged quickly."

Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin's vision. "If you just think about droplets like Oparin did, then it's not clear how evolution could act on these droplets," Zwicker said. "For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex."

Globule Ancestor

Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

Tang's lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, which will be made in collaboration with Hyman's lab, is to try to observe centrosomes or other biological droplets dividing, and to determine if they utilize the mechanism identified in the paper by Zwicker and colleagues. "That would be a big deal," said Giomi, the Leiden biophysicist.

When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn't convinced of the effect's significance. "There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide," he said.

Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, "I can go along with that," noting that he would define protocells as the first droplets that had membranes.

The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

The luckiest part of the whole process, in Jülicher's opinion, was not that droplets turned into cells, but that the first droplet, our globule ancestor, formed to begin with. Droplets require a lot of chemical material to spontaneously arise or "nucleate," and it's unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

"It's a very rare event. You have to wait a long time for it to happen," he said. "And once it happens, then the next things happen more easily, and more systematically."

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/01/life-begin-dividing-droplets-hold-answer/

Move Over, Coders: Physicists Will Soon Rule Silicon Valley

It’s a bad time to be a physicist.

At least, that’s what Oscar Boykin says. He majored in physics at the Georgia Institute of Technology and in 2002 he finished a physics PhD at UCLA. But four years ago, physicists at the Large Hadron Collider in Switzerland discovered the Higgs boson, a subatomic particle first predicted in the 1960s. As Boykin points out, everyone expected it. The Higgs didn’t mess with the theoretical models of the universe. It didn’t change anything or give physicists anything new to strive for. “Physicists are excited when there’s something wrong with physics, and we’re in a situation now where there’s not a lot that’s wrong,” he says. “It’s a disheartening place for a physicist to be in.” Plus, the pay isn’t too good.

Boykin is no longer a physicist. He’s a Silicon Valley software engineer. And it’s a very good time to be one of those.

Boykin works at Stripe, a $9-billion startup that helps businesses accept payments online. He helps build and operate software systems that collect data from across the company’s services, and he works to predict the future of these services, including when, where, and how fraudulent transactions will come. As a physicist, he’s ideally suited to the job, which requires both extreme math and abstract thought. And yet, unlike a physicist, he’s working in a field that now offers endless challenges and possibilities. Plus, the pay is great.

If physics and software engineering were subatomic particles, Silicon Valley would be the place where the fields collide. Boykin works with three other physicists at Stripe. In December, when General Electric acquired the machine learning startup Wise.io, CEO Jeff Immelt boasted that he had just grabbed a company packed with physicists, most notably UC Berkeley astrophysicist Joshua Bloom. The open source machine learning software H2O, used by 70,000 data scientists across the globe, was built with help from Swiss physicist Arno Candel, who once worked at the SLAC National Accelerator Laboratory. Vijay Narayanan, Microsoft’s head of data science, is an astrophysicist, and several other physicists work under him.

It’s not on purpose, exactly. “We didn’t go into the physics kindergarten and steal a basket of children,” says Stripe president and co-founder John Collison. “It just happened.” And it’s happening across Silicon Valley. Because structurally and technologically, the things that just about every internet company needs to do are more and more suited to the skill set of a physicist.

The Naturals

Of course, physicists have played a role in computer technology since its earliest days, just as they’ve played a role in so many other fields. John Mauchly, who helped design the ENIAC, one of the earliest computers, was a physicist. Dennis Ritchie, the father of the C programming language, was too.

But this is a particularly ripe moment for physicists in computer tech, thanks to the rise of machine learning, where machines learn tasks by analyzing vast amounts of data. This new wave of data science and AI is something that suits physicists right down to their socks.

Among other things, the industry has embraced neural networks, software that aims to mimic the structure of the human brain. But these neural networks are really just math on an enormous scale, mostly linear algebra and probability theory. Computer scientists aren’t necessarily trained in these areas, but physicists are. “The only thing that is really new to physicists is learning how to optimize these neural networks, training them, but that’s relatively straightforward,” Boykin says. “One technique is called Newton’s method. Newton the physicist, not some other Newton.”
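
As a hedged illustration of what "relatively straightforward" means here, the snippet below trains a tiny logistic regression (essentially a single-unit neural network) with Newton's method. It is a generic textbook sketch on synthetic data, not code from Stripe or from anyone quoted in this piece.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two overlapping Gaussian blobs labelled 0 and 1, plus a bias column.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
X = np.hstack([X, np.ones((200, 1))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
for _ in range(6):                            # Newton's method converges in a few steps
    p = sigmoid(X @ w)
    grad = X.T @ (p - y)                      # gradient of the log-loss
    H = X.T @ (X * (p * (1 - p))[:, None])    # Hessian of the log-loss
    w -= np.linalg.solve(H, grad)             # Newton update: w <- w - H^{-1} grad

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print("weights:", np.round(w, 2), "training accuracy:", accuracy)
```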

Chris Bishop, who heads Microsoft’s Cambridge research lab, felt the same way thirty years ago, when neural networks first started to show promise in the academic world. That’s what led him from physics into machine learning. “There is something very natural about a physicist going into machine learning,” he says, “more natural than a computer scientist.”

The Challenge Space

Ten years ago, Boykin says, so many of his old physics pals were moving into the financial world. That same flavor of mathematics was also enormously useful on Wall Street as a way of predicting where the markets would go. One key method was the Black-Scholes equation, a means of determining the value of a financial derivative. But Black-Scholes helped foment the great crash of 2008, and now, Boykin and other physicists say that far more of their colleagues are moving into data science and other kinds of computer tech.
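
For reference, the Black-Scholes price of a European call option is a short closed-form computation; the inputs plugged in below are arbitrary example numbers, used only to show the shape of the calculation.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Arbitrary example: $100 stock, $105 strike, one year out, 2% rate, 20% volatility.
print(round(black_scholes_call(100, 105, 1.0, 0.02, 0.20), 2))
```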

Earlier this decade, physicists arrived at the top tech companies to help build so-called Big Data software, systems that juggle data across hundreds or even thousands of machines. At Twitter, Boykin helped build one called Summingbird, and three guys who met in the physics department at MIT built similar software at a startup called Cloudant. Physicists know how to handle data—at MIT, Cloudant’s founders handled massive datasets from the Large Hadron Collider—and building these enormously complex systems requires its own breed of abstract thought. Then, once these systems were built, many physicists helped put the data they harnessed to use.

In the early days of Google, one of the key people building the massively distributed systems in the company’s engine room was Yonatan Zunger, who has a PhD in string theory from Stanford. And when Kevin Scott joined Google’s ads team, charged with grabbing data from across Google and using it to predict which ads were most likely to get the most clicks, he hired countless physicists. Unlike many computer scientists, they were suited to the very experimental nature of machine learning. “It was almost like lab science,” says Scott, now chief technology officer at LinkedIn.

Now that Big Data software is commonplace—Stripe uses an open source version of what Boykin helped build at Twitter—it’s helping machine learning models drive predictions inside so many other companies. That provides physicists with an even wider avenue into Silicon Valley. At Stripe, Boykin’s team also includes Roban Kramer (physics PhD, Columbia), Christian Anderson (physics master’s, Harvard), and Kelley Rivoire (physics bachelor’s, MIT). They come because they’re suited to the work. And they come because of the money. As Boykin says: “The salaries in tech are arguably absurd.” But they also come because there are so many hard problems to solve.

Anderson left Harvard before getting his PhD because he came to view the field much as Boykin does—as an intellectual pursuit of diminishing returns. But that’s not the case on the internet. “Implicit in ‘the internet’ is the scope, the coverage of it,” Anderson says. “It makes opportunities much greater, but it also enriches the challenge space, the problem space. There is intellectual upside.”

The Future

Today, physicists are moving into Silicon Valley companies. But in the years to come, a similar phenomenon will spread much further. Machine learning will change not only how the world analyzes data but how it builds software. Neural networks are already reinventing image recognition, speech recognition, machine translation, and the very nature of software interfaces. As Microsoft’s Chris Bishop says, software engineering is moving from handcrafted code based on logic to machine learning models based on probability and uncertainty. Companies like Google and Facebook are beginning to retrain their engineers in this new way of thinking. Eventually, the rest of the computing world will follow suit.

In other words, all the physicists pushing into the realm of the Silicon Valley engineer is a sign of a much bigger change to come. Soon, all the Silicon Valley engineers will push into the realm of the physicist.

Read more: https://www.wired.com/2017/01/move-coders-physicists-will-soon-rule-silicon-valley/

The Man Who's Trying to Kill Dark Matter

For 80 years, scientists have puzzled over the way galaxies and other cosmic structures appear to gravitate toward something they cannot see. This hypothetical dark matter seems to outweigh all visible matter by a startling ratio of five to one, suggesting that we barely know our own universe. Thousands of physicists are doggedly searching for these invisible particles.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

But the dark matter hypothesis assumes scientists know how matter in the sky ought to move in the first place. At the end of 2016, a series of developments has revived a long-disfavored argument that dark matter doesn't exist after all. In this view, no missing matter is needed to explain the errant motions of the heavenly bodies; rather, on cosmic scales, gravity itself works in a different way than either Isaac Newton or Albert Einstein predicted.

The latest attempt to explain away dark matter is a much-discussed proposal by Erik Verlinde, a theoretical physicist at the University of Amsterdam who is known for bold and prescient, if sometimes imperfect, ideas. In a dense 51-page paper posted online on Nov. 7, Verlinde casts gravity as a byproduct of quantum interactions and suggests that the extra gravity attributed to dark matter is an effect of dark energy, the background energy woven into the space-time fabric of the universe.

"Instead of hordes of invisible particles, dark matter is an interplay between ordinary matter and dark energy," Verlinde said.

To make his case, Verlinde has adopted a radical perspective on the origin of gravity that is currently in vogue among leading theoretical physicists. Einstein defined gravity as the effect of curves in space-time created by the presence of matter. According to the new approach, gravity is an emergent phenomenon. Space-time and the matter within it are treated as a hologram that arises from an underlying network of quantum bits (called qubits), much as the three-dimensional environment of a computer game is encoded in classical bits on a silicon chip. Working within this framework, Verlinde traces dark energy to a property of these underlying qubits that supposedly encode the universe. On large scales in the hologram, he argues, dark energy interacts with matter in just the right way to create the illusion of dark matter.

In his calculations, Verlinde rediscovered the equations of modified Newtonian dynamics, or MOND. This 30-year-old theory makes an ad hoc tweak to the famous inverse-square law of gravity in Newton's and Einstein's theories in order to explain some of the phenomena attributed to dark matter. That this ugly fix works at all has long puzzled physicists. "I have a way of understanding the MOND success from a more fundamental perspective," Verlinde said.

Many experts have called Verlinde's paper compelling but hard to follow. While it remains to be seen whether his arguments will hold up to scrutiny, the timing is fortuitous. In a new analysis of galaxies published on Nov. 9 in Physical Review Letters, three astrophysicists led by Stacy McGaugh of Case Western Reserve University in Cleveland, Ohio, have strengthened MOND's case against dark matter.

The researchers analyzed a diverse set of 153 galaxies, and for each one they compared the rotation speed of visible matter at any given distance from the galaxy's center with the amount of visible matter contained within that galactic radius. Remarkably, these two variables were tightly linked in all the galaxies by a universal law, dubbed the radial acceleration relation. This makes perfect sense in the MOND paradigm, since visible matter is the exclusive source of the gravity driving the galaxy's rotation (even if that gravity does not take the form prescribed by Newton or Einstein). With such a tight relationship between gravity felt by visible matter and gravity given by visible matter, there would seem to be no room, or need, for dark matter.
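
The relation the team reported is usually written as a single fitting function linking the total observed acceleration to the acceleration generated by the visible (baryonic) matter alone. The form below is quoted from memory and should be checked against the published paper:

```latex
% g_obs: observed centripetal acceleration; g_bar: acceleration from visible matter.
g_{\mathrm{obs}} = \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}},
\qquad g_{\dagger} \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}
```

At high accelerations the exponential term vanishes and the observed acceleration simply matches the visible-matter prediction; at very low accelerations the relation bends toward the square root of the visible-matter acceleration times the characteristic scale, which is the MOND-like behavior described below.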

Even as dark matter proponents rise to its defense, a third challenge has materialized. In new research that has been presented at seminars and is under review by the Monthly Notices of the Royal Astronomical Society, a team of Dutch astronomers have conducted what they call the first test of Verlinde's theory: In comparing his formulas to data from more than 30,000 galaxies, Margot Brouwer of Leiden University in the Netherlands and her colleagues found that Verlinde correctly predicts the gravitational distortion or lensing of light from the galaxies, another phenomenon that is normally attributed to dark matter. This is somewhat to be expected, as MOND's original developer, the Israeli astrophysicist Mordehai Milgrom, showed years ago that MOND accounts for gravitational lensing data. Verlinde's theory will need to succeed at reproducing dark matter phenomena in cases where the old MOND failed.

Kathryn Zurek, a dark matter theorist at Lawrence Berkeley National Laboratory, said Verlinde's proposal at least demonstrates how something like MOND might be right after all. "One of the challenges with modified gravity is that there was no sensible theory that gives rise to this behavior," she said. "If [Verlinde's] paper ends up giving that framework, then that by itself could be enough to breathe more life into looking at [MOND] more seriously."

The New MOND

In Newton's and Einstein's theories, the gravitational attraction of a massive object drops in proportion to the square of the distance away from it. This means stars orbiting around a galaxy should feel less gravitational pull, and orbit more slowly, the farther they are from the galactic center. Stars' velocities do drop as predicted by the inverse-square law in the inner galaxy, but instead of continuing to drop as they get farther away, their velocities level off beyond a certain point. The flattening of galaxy rotation speeds, discovered by the astronomer Vera Rubin in the 1970s, is widely considered to be Exhibit A in the case for dark matter, explained in that paradigm by dark matter clouds or halos that surround galaxies and give an extra gravitational acceleration to their outlying stars.

Searches for dark matter particles have proliferated, with hypothetical weakly interacting massive particles (WIMPs) and lighter-weight axions serving as prime candidates, but so far, experiments have found nothing.

Meanwhile, in the 1970s and 1980s, some researchers, including Milgrom, took a different tack. Many early attempts at tweaking gravity were easy to rule out, but Milgrom found a winning formula: When the gravitational acceleration felt by a star drops below a certain level (precisely 0.00000000012 meters per second per second, or 100 billion times weaker than we feel on the surface of the Earth), he postulated that gravity somehow switches from an inverse-square law to something close to an inverse-distance law. "There's this magic scale," McGaugh said. "Above this scale, everything is normal and Newtonian. Below this scale is where things get strange. But the theory does not really specify how you get from one regime to the other."
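
A minimal numerical sketch shows how that single switch flattens a rotation curve. The code below treats a point mass as a stand-in for a galaxy's visible matter and uses one commonly used interpolating function between the two regimes, so it illustrates the idea rather than reproducing any particular galaxy fit; the mass is an assumed round number.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10     # Milgrom's acceleration scale, m s^-2
M = 1e41         # assumed visible mass of a galaxy, kg (roughly 5e10 suns)
kpc = 3.086e19   # metres per kiloparsec

def speeds(r):
    g_newton = G * M / r**2
    # One commonly used MOND interpolation: g * (g / (g + a0)) = g_newton,
    # whose positive root is:
    g_mond = 0.5 * (g_newton + math.sqrt(g_newton**2 + 4 * g_newton * a0))
    return math.sqrt(g_newton * r), math.sqrt(g_mond * r)   # circular speeds, m/s

print(" r (kpc)   Newtonian (km/s)   MOND-like (km/s)")
for r_kpc in (2, 5, 10, 20, 40, 80):
    v_n, v_m = speeds(r_kpc * kpc)
    print(f"{r_kpc:8d} {v_n / 1e3:16.0f} {v_m / 1e3:18.0f}")

# The Newtonian speed keeps falling with radius, while the MOND-like speed
# levels off near (G*M*a0)**0.25, the flat rotation curves Rubin observed.
```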

Physicists do not like magic; when other cosmological observations seemed far easier to explain with dark matter than with MOND, they left the approach for dead. Verlinde's theory revitalizes MOND by attempting to reveal the method behind the magic.

Verlinde, ruddy and fluffy-haired at 54 and lauded for highly technical string theory calculations, first jotted down a back-of-the-envelope version of his idea in 2010. It built on a famous paper he had written months earlier, in which he boldly declared that gravity does not really exist. By weaving together numerous concepts and conjectures at the vanguard of physics, he had concluded that gravity is an emergent thermodynamic effect, related to increasing entropy (or disorder). Then, as now, experts were uncertain what to make of the paper, though it inspired fruitful discussions.

The particular brand of emergent gravity in Verlinde's paper turned out not to be quite right, but he was tapping into the same intuition that led other theorists to develop the modern holographic description of emergent gravity and space-time, an approach that Verlinde has now absorbed into his new work.

In this framework, bendy, curvy space-time and everything in it is a geometric representation of pure quantum information, that is, data stored in qubits. Unlike classical bits, qubits can exist simultaneously in two states (0 and 1) with varying degrees of probability, and they become entangled with each other, such that the state of one qubit determines the state of the other, and vice versa, no matter how far apart they are. Physicists have begun to work out the rules by which the entanglement structure of qubits mathematically translates into an associated space-time geometry. An array of qubits entangled with their nearest neighbors might encode flat space, for instance, while more complicated patterns of entanglement give rise to matter particles such as quarks and electrons, whose mass causes the space-time to be curved, producing gravity. "The best way we understand quantum gravity currently is this holographic approach," said Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver who has done influential work on the subject.
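As a tiny, concrete illustration of those ingredients (qubits, superposition, entanglement), and emphatically not of Verlinde's construction itself, the sketch below builds a two-qubit Bell pair and computes the one bit of entanglement entropy that holographic approaches relate to geometry:

```python
import numpy as np

# A maximally entangled two-qubit Bell state: (|00> + |11>) / sqrt(2).
# Neither qubit has a definite state on its own; measuring one fixes the other.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Reduced density matrix of qubit A: trace out qubit B.
psi = bell.reshape(2, 2)            # index order: (qubit A, qubit B)
rho_A = psi @ psi.conj().T          # 2x2 reduced state of qubit A

# Entanglement entropy S = -Tr(rho log2 rho); exactly 1 bit for a Bell pair.
eigvals = np.linalg.eigvalsh(rho_A)
entropy = -sum(p * np.log2(p) for p in eigvals if p > 1e-12)
print("reduced state of qubit A:\n", rho_A.real)    # maximally mixed: identity / 2
print("entanglement entropy (bits):", round(entropy, 3))
```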

The mathematical translations are rapidly being worked out for holographic universes with an Escher-esque space-time geometry known as anti-de Sitter (AdS) space, but universes like ours, which have de Sitter geometries, have proved far more difficult. In his new paper, Verlinde speculates that it's exactly the de Sitter property of our native space-time that leads to the dark matter illusion.

De Sitter space-times like ours stretch as you look far into the distance. For this to happen, space-time must be infused with a tiny amount of background energy, often called dark energy, which drives space-time apart from itself. Verlinde models dark energy as a thermal energy, as if our universe has been heated to an excited state. (AdS space, by contrast, is like a system in its ground state.) Verlinde associates this thermal energy with long-range entanglement between the underlying qubits, as if they have been shaken up, driving entangled pairs far apart. He argues that this long-range entanglement is disrupted by the presence of matter, which essentially removes dark energy from the region of space-time that it occupied. The dark energy then tries to move back into this space, exerting a kind of elastic response on the matter that is equivalent to a gravitational attraction.

Because of the long-range nature of the entanglement, the elastic response becomes increasingly important in larger volumes of space-time. Verlinde calculates that it will cause galaxy rotation curves to start deviating from Newton's inverse-square law at exactly the magic acceleration scale pinpointed by Milgrom in his original MOND theory.
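One reason the match is numerically plausible: Milgrom's scale happens to sit close to the speed of light times the Hubble constant divided by 2π, tying it to the same cosmological quantities that characterize dark energy. A quick back-of-the-envelope check (mine, not a calculation from Verlinde's paper):

```python
import math

# Back-of-envelope check (not from the paper): Milgrom's a0 is numerically close
# to c * H0 / (2*pi), hinting that a cosmological, dark-energy-related origin is plausible.
c = 2.998e8                # speed of light, m/s
H0 = 70 / 3.086e19         # Hubble constant: ~70 km/s/Mpc in 1/s (1 Mpc = 3.086e19 km)
a0_milgrom = 1.2e-10       # Milgrom's acceleration scale, m/s^2

print("c * H0 / (2*pi) =", c * H0 / (2 * math.pi), "m/s^2")
print("Milgrom's a0    =", a0_milgrom, "m/s^2")
```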

Van Raamsdonk calls Verlinde's idea "definitely an important direction." But he says it's too soon to tell whether everything in the paper (which draws from quantum information theory, thermodynamics, condensed matter physics, holography and astrophysics) hangs together. Either way, Van Raamsdonk said, "I do find the premise interesting, and feel like the effort to understand whether something like that could be right could be enlightening."

One problem, said Brian Swingle of Harvard and Brandeis universities, who also works in holography, is that Verlinde lacks a concrete model universe like the ones researchers can construct in AdS space, giving him more wiggle room for making unproven speculations. "To be fair, we've gotten further by working in a more limited context, one which is less relevant for our own gravitational universe," Swingle said, referring to work in AdS space. "We do need to address universes more like our own, so I hold out some hope that his new paper will provide some additional clues or ideas going forward."

The Case for Dark Matter

Verlinde could be capturing the zeitgeist the way his 2010 entropic-gravity paper did. Or he could be flat-out wrong. The question is whether his new and improved MOND can reproduce phenomena that foiled the old MOND and bolstered belief in dark matter.

One such phenomenon is the Bullet cluster, a galaxy cluster in the process of colliding with another. The visible matter in the two clusters crashes together, but gravitational lensing suggests that a large amount of dark matter, which does not interact with visible matter, has passed right through the crash site. Some physicists consider this indisputable proof of dark matter. However, Verlinde thinks his theory will be able to handle the Bullet cluster observations just fine. He says dark energy's gravitational effect is embedded in space-time and is less deformable than matter itself, which would have allowed the two to separate during the cluster collision.

But the crowning achievement for Verlinde's theory would be to account for the suspected imprints of dark matter in the cosmic microwave background (CMB), ancient light that offers a snapshot of the infant universe. The snapshot reveals the way matter at the time repeatedly contracted due to its gravitational attraction and then expanded due to self-collisions, producing a series of peaks and troughs in the CMB data. Because dark matter does not interact except through gravity, it would only have contracted without ever expanding, and this would modulate the amplitudes of the CMB peaks in exactly the way that scientists observe. One of the biggest strikes against the old MOND was its failure to predict this modulation and match the peaks' amplitudes. Verlinde expects that his version will work, once again because matter and the gravitational effect of dark energy can separate from each other and exhibit different behaviors. "Having said this," he said, "I have not calculated this all through."

While Verlinde confronts these and a handful of other challenges, proponents of the dark matter hypothesis have some explaining of their own to do when it comes to McGaugh and his colleagues' recent findings about the universal relationship between galaxy rotation speeds and their visible matter content.
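The relationship in question, often called the radial acceleration relation, has a strikingly simple form in the fit published by McGaugh, Lelli and Schombert in 2016: the observed acceleration is a fixed function of the acceleration expected from visible matter alone, with a single characteristic scale close to Milgrom's. A sketch, using the published fitting function as I understand it:

```python
import numpy as np

# Radial acceleration relation (McGaugh, Lelli & Schombert 2016 fit, as I recall it).
# g_bar is the acceleration expected from visible (baryonic) matter alone;
# g_obs is what the measured rotation curves imply.
G_DAGGER = 1.2e-10   # fitted acceleration scale, m/s^2 (close to Milgrom's a0)

def g_observed(g_bar):
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

for g_bar in [1e-9, 1e-10, 1e-11, 1e-12]:
    # High accelerations stay nearly Newtonian; low ones are boosted, MOND-style.
    print(f"g_bar = {g_bar:.1e}  ->  g_obs = {g_observed(g_bar):.1e} m/s^2")
```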

In October, responding to a preprint of the paper by McGaugh and his colleagues, two teams of astrophysicists independently argued that the dark matter hypothesis can account for the observations. They say the amount of dark matter in a galaxy's halo would have precisely determined the amount of visible matter the galaxy ended up with when it formed. In that case, galaxies' rotation speeds, even though they're set by dark matter and visible matter combined, will exactly correlate with either their dark matter content or their visible matter content (since the two are not independent). However, computer simulations of galaxy formation do not currently indicate that galaxies' dark and visible matter contents will always track each other. Experts are busy tweaking the simulations, but Arthur Kosowsky of the University of Pittsburgh, one of the researchers working on them, says it's too early to tell if the simulations will be able to match all 153 examples of the universal law in McGaugh and his colleagues' galaxy data set. If not, then the standard dark matter paradigm is in big trouble. "Obviously this is something that the community needs to look at more carefully," Zurek said.

Even if the simulations can be made to match the data, McGaugh, for one, considers it an implausible coincidence that dark matter and visible matter would conspire to exactly mimic the predictions of MOND at every location in every galaxy. "If somebody were to come to you and say, 'The solar system doesn't work on an inverse-square law, really it's an inverse-cube law, but there's dark matter that's arranged just so that it always looks inverse-square,' you would say that person is insane," he said. "But that's basically what we're asking to be the case with dark matter here."

Given the considerable indirect evidence and near consensus among physicists that dark matter exists, it still probably does, Zurek said. "That said, you should always check that you're not on a bandwagon," she added. "Even though this paradigm explains everything, you should always check that there isn't something else going on."

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/01/case-dark-matter/

Deep Within a Mountain, Physicists Race to Unearth Dark Matter

In a lab buried under the Apennine Mountains of Italy, Elena Aprile, a professor of physics at Columbia University, is racing to unearth what would be one of the biggest discoveries in physics.

She has not yet succeeded, even after more than a decade of work. Then again, nobody else has, either.

Aprile leads the XENON dark matter experiment, one of several competing efforts to detect a particle responsible for the astrophysical peculiarities that are collectively attributed to dark matter. These include stars that rotate around the cores of galaxies as if pulled by invisible mass, excessive warping of space around large galaxy clusters, and the leopard-print pattern of hot and cold spots in the early universe.

For decades, the most popular explanation for such phenomena was that dark matter is made of as-yet undiscovered weakly interacting massive particles, known as WIMPs. These WIMPs would only rarely leave an imprint on the more familiar everyday matter.

That paradigm has recently been under fire. The Large Hadron Collider, located at the CERN laboratory near Geneva, has not yet found anything to support the existence of WIMPs. Other particles, less studied, could also do the trick. Dark matter's astrophysical effects might even be caused by modifications of gravity, with no need for the missing stuff at all.

The most stringent WIMP searches have been done using Aprile's strategy: Pour plenty of liquid xenon (a noble element like helium or neon, but heavier) into a vat. Shield it from cosmic rays, which would inundate the detector with spurious signals. Then wait for a passing WIMP to bang into a xenon atom's nucleus. Once it does, capture the tiny flash of light that should result.
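To give a feel for how seeing nothing becomes a physics result, here is a toy counting-statistics sketch (illustrative only, not the collaboration's actual analysis): with zero candidate events and negligible background, the 90 percent confidence upper limit on the expected signal is about 2.3 events, which a full rate calculation then converts into a ceiling on the WIMP-nucleon cross section.

```python
import math

# Toy counting-experiment limit (illustrative only, not XENON's real statistical analysis).
# If the expected signal were mu events and we observe none, the probability of
# seeing zero is exp(-mu). The 90% CL upper limit is the mu at which that drops to 10%.
def poisson_upper_limit(n_observed=0, cl=0.90):
    assert n_observed == 0, "toy sketch handles only the zero-event case"
    return -math.log(1.0 - cl)

mu_limit = poisson_upper_limit()
print(f"90% CL upper limit with zero observed events: {mu_limit:.2f} expected signal events")
# Turning this event limit into a cross-section limit requires the detector exposure,
# the xenon recoil response and an assumed dark matter halo model.
```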

Underground, I guess, there is no such major thing holding you from operating your detector. But there are still, in the back of your mind, thoughts about the seismic resilience of what you designed and what you built.

In a 2011 interview with The New York Times about women at the top of their scientific fields, you described the life of a scientist as "tough, competitive and constantly exposed." You suggested that if one of your daughters aspired to be a scientist you would want her to be "made of titanium." What did you mean by that?

Maybe I shouldn't demand this of every woman in science or physics. It's true that it might not be fair to ask that everyone is made of titanium. But we must face it: in building or running this new experiment, there is going to be a lot of pressure sometimes. It's on every student, every postdoc, every one of us: Try to go fast and get the results, and work day and night if you want to get there. You can go on medical leave or disability, but the WIMP is not waiting for you. Somebody else is going to get it, right? This is what I mean when I say you have to be strong.

Going after something like this, it's not a 9-to-5 job. I wouldn't discourage anyone at the beginning from trying. But once you start, you cannot pretend that this is just a normal job. This is not a normal job. It's not a job. It's a quest.

Aprile in her lab at Columbia's Nevis Laboratories. (Ben Sklar for Quanta Magazine)

In another interview, with the Italian newspaper La Repubblica, you discussed having a brilliant but demanding mentor in Carlo Rubbia, who won the Nobel Prize for Physics in 1984. What was that relationship like?

It made me of titanium, probably. You have to imagine this 23-year-old young woman from Italy ending up at CERN as a summer student in the group of this guy. Even today, I would still be scared if I were that person. Carlo exudes confidence. I was just intimidated.

He would keep pushing you beyond what even seems possible: It's all about the science; it's all about the goal. How the hell you get there, I don't care: If you're not sleeping, if you're not eating, if you don't have time to sleep with your husband for a month, who cares? You have a baby to feed? Find some way. Since I survived that period I knew that I was made a bit of titanium, let's put it that way. I did learn to contain my tears. This is a person you don't want to show weakness to.

Now, 30 years after going off to start your own lab, how does the experience of having worked with him inform the scientist you are today, the leader of XENON?

For a long time, he was still involved in his liquid-argon effort. He would still tell me, "What are you doing with xenon? You have to turn to argon." It has taken me many years to get over this Rubbia fear, for many reasons, probably, even if I don't admit it. But now I feel very strong. I can face him and say: "Hey, your liquid-argon detector isn't working. Mine is working."

I decided I want to be a more practical person. Most guys are naive. All these guys are naive. A lot of things he did and does are exceptional, yes, but building a successful experiment is not something you do alone. This is a team effort and you must be able to work well with your team. Alone, I wouldn't get anywhere. Everybody counts. It doesn't matter that we build a beautiful machine: I don't believe in machines. We are going to get this damn thing out of it. We're going to get the most out of the thing that we built with our brains, with the brains of our students and postdocs who really look at this data. We want to respect each one of them.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more: https://www.wired.com/2017/01/deep-within-mountain-physicists-race-unearth-dark-matter/