Mathematicians Second-Guess Centuries-Old Fluid Equations

The Navier-Stokes equations capture in a few succinct terms one of the most ubiquitous features of the physical world: the flow of fluids. The equations, which date to the 1820s, are today used to model everything from ocean currents to turbulence in the wake of an airplane to the flow of blood in the heart.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

While physicists consider the equations to be as reliable as a hammer, mathematicians eye them warily. To a mathematician, it means little that the equations appear to work. They want proof that the equations are unfailing: that no matter the fluid, and no matter how far into the future you forecast its flow, the mathematics of the equations will still hold. Such a guarantee has proved elusive. The first person (or team) to prove that the Navier-Stokes equations will always work—or to provide an example where they don’t—stands to win one of seven Millennium Prize Problems endowed by the Clay Mathematics Institute, along with the associated $1 million reward.

Mathematicians have developed many ways of trying to solve the problem. New work posted online in September raises serious questions about whether one of the main approaches pursued over the years will succeed. The paper, by Tristan Buckmaster and Vlad Vicol of Princeton University, is the first result to find that under certain assumptions, the Navier-Stokes equations provide inconsistent descriptions of the physical world.

“We’re figuring out some of the inherent issues with these equations and why it’s quite possible [that] people have to rethink them,” said Buckmaster.

Buckmaster and Vicol’s work shows that when you allow solutions to the Navier-Stokes equations to be very rough (like a sketch rather than a photograph), the equations start to output nonsense: They say that the same fluid, from the same starting conditions, could end up in two (or more) very different states. It could flow one way or a completely different way. If that were the case, then the equations don’t reliably reflect the physical world they were designed to describe.

Blowing Up the Equations

To see how the equations can break down, first imagine the flow of an ocean current. Within it there may be a multitude of crosscurrents, with some parts moving in one direction at one speed and other areas moving in other directions at other speeds. These crosscurrents interact with one another in a continually evolving interplay of friction and water pressure that determines how the fluid flows.

Mathematicians model that interplay using a map that tells you the direction and magnitude of the current at every position in the fluid. This map, which is called a vector field, is a snapshot of the internal dynamics of a fluid. The Navier-Stokes equations take that snapshot and play it forward, telling you exactly what the vector field will look like at every subsequent moment in time.
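The "snapshot, then play forward" idea can be sketched in a few lines of code. The example below is not Navier-Stokes itself but a much-simplified one-dimensional cousin (the viscous Burgers equation), stepped forward with a basic explicit scheme; the grid size, time step, and viscosity are illustrative choices, not values from the article.

```python
import math

def step(u, dt=0.001, dx=0.1, nu=0.1):
    """Advance the velocity field u by one time step (periodic boundary)."""
    n = len(u)
    new = []
    for i in range(n):
        left, right = u[i - 1], u[(i + 1) % n]
        advection = u[i] * (right - left) / (2 * dx)        # the fluid carries itself along
        diffusion = nu * (right - 2 * u[i] + left) / dx**2  # friction smooths the flow
        new.append(u[i] + dt * (-advection + diffusion))
    return new

# Snapshot: a smooth, sine-shaped current on a periodic domain.
u = [math.sin(2 * math.pi * i / 64) for i in range(64)]
for _ in range(1000):   # play the snapshot forward in time
    u = step(u)
# Friction (viscosity) damps the flow: the field stays finite and loses energy.
```

Each pass of `step` is one tick of the clock: given the vector field now, it outputs the vector field a moment later, exactly the role the article ascribes to the equations.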

The equations work. They describe fluid flows as reliably as Newton’s equations predict the future positions of the planets; physicists employ them all the time, and they’ve consistently matched experimental results. Mathematicians, however, want more than anecdotal confirmation—they want proof that the equations are inviolate, that no matter what vector field you start with, and no matter how far into the future you play it, the equations always give you a unique new vector field.

This is the subject of the Millennium Prize problem, which asks whether the Navier-Stokes equations have solutions (where a solution is, in essence, a vector field) for all starting conditions and all moments in time. These solutions have to provide the exact direction and magnitude of the current at every point in the fluid. Solutions that provide information at such infinitely fine resolution are called “smooth” solutions. With a smooth solution, every point in the field has an associated vector that allows you to travel “smoothly” over the field without ever getting stuck at a point that has no vector—a point from which you don’t know where to move next.

Smooth solutions are a complete representation of the physical world, but mathematically speaking, they may not always exist. Mathematicians who work on equations like Navier-Stokes worry about this kind of scenario: You’re running the Navier-Stokes equations and observing how a vector field changes. After some finite amount of time, the equations tell you a particle in the fluid is moving infinitely fast. That would be a problem. The equations involve measuring changes in properties like pressure, friction, and velocity in the fluid — in the jargon, they take “derivatives” of these quantities — but you can’t take the derivative of an infinite value any more than you can divide by zero. So if the equations produce an infinite value, you can say they’ve broken down, or “blown up.” They can no longer describe subsequent states of your fluid.
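Finite-time blowup can be seen in miniature with a toy equation. The sketch below is not Navier-Stokes, just the one-variable equation dx/dt = x², whose exact solution x(t) = x₀/(1 − x₀t) becomes infinite at t = 1/x₀; all the parameters are illustrative.

```python
def integrate(x0, dt=1e-4, t_max=2.0, cap=1e9):
    """Euler-integrate dx/dt = x**2 until x exceeds cap (i.e. "blows up")."""
    x, t = x0, 0.0
    while t < t_max:
        x += dt * x * x
        t += dt
        if x > cap:
            return t        # time at which the numerical solution exploded
    return None             # no blowup before t_max

t_blow = integrate(1.0)     # exact blowup time for x0 = 1 is t = 1
```

Past the blowup time there is simply no finite value for the equation to report, which is the precise sense in which an equation "can no longer describe subsequent states."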


Blowup is also a strong hint that your equations are missing something about the physical world they’re supposed to describe. “Maybe the equation is not capturing all the effects of the real fluid because in a real fluid we don’t expect” particles to ever start moving infinitely fast, said Buckmaster.

Solving the Millennium Prize problem involves either showing that blowup never happens for the Navier-Stokes equations or identifying the circumstances under which it does. One strategy mathematicians have pursued to do that is to first relax just how descriptive they require solutions to the equations to be.

From Weak to Smooth

When mathematicians study equations like Navier-Stokes, they sometimes start by broadening their definition of what counts as a solution. Smooth solutions require maximal information — in the case of Navier-Stokes, they require that you have a vector at every point in the vector field associated with the fluid. But what if you slackened your requirements and said that you only needed to be able to compute a vector for some points or only needed to be able to approximate vectors? These kinds of solutions are called “weak” solutions. They allow mathematicians to start feeling out the behavior of an equation without having to do all the work of finding smooth solutions (which may be impossible to do in practice).

Tristan Buckmaster, a mathematician at Princeton University, says of the Navier-Stokes equations, “It’s possible that people will have to rethink them.”
Princeton University

“From a certain point of view, weak solutions are even easier to describe than actual solutions because you have to know much less,” said Camillo De Lellis, coauthor with László Székelyhidi of several important papers that laid the groundwork for Buckmaster and Vicol’s work.

Weak solutions come in gradations of weakness. If you think of a smooth solution as a mathematical image of a fluid down to infinitely fine resolution, weak solutions are like the 32-bit, or 16-bit, or 8-bit version of that picture (depending on how weak you allow them to be).

In 1934 the French mathematician Jean Leray defined an important class of weak solutions. Rather than working with exact vectors, “Leray solutions” take the average value of vectors in small neighborhoods of the vector field. Leray proved that it’s always possible to solve the Navier-Stokes equations when you allow your solutions to take this particular form. In other words, Leray solutions never blow up.
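The averaging idea behind Leray solutions can be illustrated crudely: replace each value of a field by the mean of its small neighborhood. The field and averaging radius below are made-up data, not anything from Leray's construction itself.

```python
def local_average(field, radius=2):
    """Average each entry with its neighbors within `radius` (window clamped at the ends)."""
    n = len(field)
    out = []
    for i in range(n):
        window = field[max(0, i - radius): min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

spiky = [0.0] * 10 + [100.0] + [0.0] * 10   # a field with one violent spike
smoothed = local_average(spiky)
# The spike's peak is spread over its neighborhood: max(smoothed) == 20.0 here.
```

Averaging caps how wild any single point can be, which gives a feel for why solutions built this way can avoid blowing up.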

Leray’s achievement established a new approach to the Navier-Stokes problem: Start with Leray solutions, which you know always exist, and see if you can convert them into smooth solutions, which you want to prove always exist. It’s a process akin to starting with a crude picture and seeing if you can gradually dial up the resolution to get a perfect image of something real.

“One possible strategy is to show these weak Leray solutions are smooth, and if you show they’re smooth, you’ve solved the original Millennium Prize problem,” said Buckmaster.

Vlad Vicol, a mathematician at Princeton, is half of a team that uncovered problems in an approach to validating the Navier-Stokes equations.
Courtesy of S. Vicol

There’s one more catch. Solutions to the Navier-Stokes equations correspond to real physical events, and physical events happen in just one way. Given that, you’d like the equations to have exactly one solution for any given starting condition. If they give you multiple possible solutions, they’ve failed.

Because of this, mathematicians will be able to use Leray solutions to solve the Millennium Prize problem only if Leray solutions are unique. Nonunique Leray solutions would mean that, according to the rules of Navier-Stokes, the exact same fluid from the exact same starting conditions could end up in two distinct physical states, which makes no physical sense and implies that the equations aren’t really describing what they’re supposed to describe.

Buckmaster and Vicol’s new result is the first to suggest that, for certain definitions of weak solutions, that might be the case.

Many Worlds

In their new paper, Buckmaster and Vicol consider solutions that are even weaker than Leray solutions—solutions that involve the same averaging principle as Leray solutions but also relax one additional requirement (known as the “energy inequality”). They use a method called “convex integration,” which has its origins in work in geometry by the mathematician John Nash and was imported more recently into the study of fluids by De Lellis and Székelyhidi.

Using this approach, Buckmaster and Vicol prove that these very weak solutions to the Navier-Stokes equations are nonunique. They demonstrate, for example, that if you start with a completely calm fluid, like a glass of water sitting still by your bedside, two scenarios are possible. The first scenario is the obvious one: The water starts still and remains still forever. The second is fantastical but mathematically permissible: The water starts still, erupts in the middle of the night, then returns to stillness.

“This proves nonuniqueness because from zero initial data you can construct at least two objects,” said Vicol.

Buckmaster and Vicol prove the existence of many nonunique weak solutions (not just the two described above) to the Navier-Stokes equations. The significance of this remains to be seen. At a certain point, weak solutions might become so weak that they stop really bearing on the smoother solutions they’re meant to imitate. If that’s the case, then Buckmaster and Vicol’s result might not lead far.

“Their result is certainly a warning, but you could argue it’s a warning for the weakest notion of weak solutions. There are many layers [of stronger solutions] on which you could still hope for much better behavior” in the Navier-Stokes equations, said De Lellis.

Buckmaster and Vicol are also thinking in terms of layers, and they have their sights set on Leray solutions—proving that those, too, allow for a multitrack physics in which the same fluid from the same position can take on more than one future form.

“Tristan and I think Leray solutions are not unique. We don’t have that yet, but our work is laying the foundation for how you’d attack the problem,” said Vicol.


Read more:

Earliest Black Hole Gives Rare Glimpse of Ancient Universe

Astronomers have at least two gnawing questions about the first billion years of the universe, an era steeped in literal fog and figurative mystery. They want to know what burned the fog away: stars, supermassive black holes, or both in tandem? And how did those behemoth black holes grow so big in so little time?


Now the discovery of a supermassive black hole smack in the middle of this period is helping astronomers resolve both questions. “It’s a dream come true that all of these data are coming along,” said Avi Loeb, the chair of the astronomy department at Harvard University.

The black hole, announced Wednesday in the journal Nature, is the most distant ever found. It dates back to 690 million years after the Big Bang. Analysis of this object reveals that reionization, the process that defogged the universe like a hair dryer on a steamy bathroom mirror, was about half complete at that time. The researchers also show that the black hole already weighed a hard-to-explain 780 million times the mass of the sun.

A team led by Eduardo Bañados, an astronomer at the Carnegie Institution for Science in Pasadena, found the new black hole by searching through old data for objects with the right color to be ultradistant quasars—the visible signatures of supermassive black holes swallowing gas. The team went through a preliminary list of candidates, observing each in turn with a powerful telescope at Las Campanas Observatory in Chile. On March 9, Bañados observed a faint dot in the southern sky for just 10 minutes. A glance at the raw, unprocessed data confirmed it was a quasar—not a nearer object masquerading as one—and that it was perhaps the oldest ever found. “That night I couldn’t even sleep,” he said.

Eduardo Bañados at the Las Campanas Observatory in Chile, where the new quasar was discovered.
Courtesy of Eduardo Bañados

The new black hole’s mass, calculated after more observations, adds to an existing problem. Black holes grow when cosmic matter falls into them. But this process generates light and heat. At some point, the radiation released by material as it falls into the black hole carries so much momentum outward that it blocks new gas from falling in and disrupts the flow. This tug-of-war creates an effective speed limit for black hole growth called the Eddington rate. If this black hole began as a star-size object and grew as fast as theoretically possible, it couldn’t have reached its estimated mass in time.

Other quasars share this kind of precocious heaviness, too. The second-farthest one known, reported on in 2011, tipped the scales at an estimated 2 billion solar masses after 770 million years of cosmic time.

These objects are too young to be so massive. “They’re rare, but they’re very much there, and we need to figure out how they form,” said Priyamvada Natarajan, an astrophysicist at Yale University who was not part of the research team. Theorists have spent years learning how to bulk up a black hole in computer models, she said. Recent work suggests that these black holes could have gone through episodic growth spurts during which they devoured gas at rates well above the Eddington limit.

Bañados and colleagues explored another possibility: If you start at the new black hole’s current mass and rewind the tape, sucking away matter at the Eddington rate until you approach the Big Bang, you see it must have initially formed as an object heavier than 1,000 times the mass of the sun. In this approach, collapsing clouds in the early universe gave birth to overgrown baby black holes that weighed thousands or tens of thousands of solar masses. Yet this scenario requires exceptional conditions that would have allowed gas clouds to condense all together into a single object instead of splintering into many stars, as is typically the case.
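The rewind-the-tape argument is, at heart, an exponential-growth calculation: growth at the Eddington limit multiplies a black hole's mass by e once every "Salpeter" e-folding time, roughly 50 million years under standard assumptions. The e-folding time and growth window below are illustrative assumptions, not numbers taken from the paper; only the final mass comes from the article.

```python
import math

SALPETER_TIME_MYR = 50.0     # assumed e-folding time at the Eddington rate
final_mass = 7.8e8           # solar masses, from the article
growth_window_myr = 600.0    # assumed time available for growth after the Big Bang

# Rewind: divide out the exponential growth to get the required seed mass.
seed_mass = final_mass * math.exp(-growth_window_myr / SALPETER_TIME_MYR)
# With these assumptions the seed weighs thousands of solar masses,
# far heavier than any single collapsing star could plausibly leave behind.
```

Shrinking the growth window or lengthening the e-folding time pushes the required seed mass up even further, which is why the "overgrown baby black hole" scenario is taken seriously.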

Cosmic Dark Ages

Even earlier, before any stars or black holes existed, the chaotic scramble of naked protons and electrons came together to make hydrogen atoms. These neutral atoms then absorbed the bright ultraviolet light coming from the first stars. After hundreds of millions of years, young stars or quasars emitted enough light to strip the electrons back off these atoms, dissipating the cosmic fog like mist at dawn.


Astronomers have known that reionization was largely complete by around a billion years after the Big Bang. At that time, only traces of neutral hydrogen remained. But the gas around the newly discovered quasar is about half neutral, half ionized, which indicates that, at least in this part of the universe, reionization was only half finished. “This is super interesting, to really map the epoch of reionization,” said Volker Bromm, an astrophysicist at the University of Texas.

When the light sources that powered reionization first switched on, they must have carved out the opaque cosmos like Swiss cheese. But what these sources were, when it happened, and how patchy or homogeneous the process was are all debated. The new quasar shows that reionization took place relatively late. That scenario squares with what the known population of early galaxies and their stars could have done, without requiring astronomers to hunt for even earlier sources to accomplish it quicker, said study coauthor Bram Venemans of the Max Planck Institute for Astronomy in Heidelberg.

More data points may be on the way. For radio astronomers, who are gearing up to search for emissions from the neutral hydrogen itself, this discovery shows that they are looking in the right time period. “The good news is that there will be neutral hydrogen for them to see,” said Loeb. “We were not sure about that.”

The team also hopes to identify more quasars that date back to the same time period but in different parts of the early universe. Bañados believes that there are between 20 and 100 such very distant, very bright objects across the entire sky. The current discovery comes from his team’s searches in the southern sky; next year, they plan to begin searching in the northern sky as well.

“Let’s hope that pans out,” said Bromm. For years, he said, the baton has been handed off between different classes of objects that seem to give the best glimpses at early cosmic time, with recent attention often going to faraway galaxies or fleeting gamma-ray bursts. “People had almost given up on quasars,” he said.


Read more:

A Hidden Supercluster Could Solve the Mystery of the Milky Way

Glance at the night sky from a clear vantage point, and the thick band of the Milky Way will slash across the sky. But the stars and dust that paint our galaxy’s disk are an unwelcome sight to astronomers who study all the galaxies that lie beyond our own. It’s like a thick stripe of fog across a windshield, a blur that renders our knowledge of the greater universe incomplete. Astronomers call it the Zone of Avoidance.


Renée Kraan-Korteweg has spent her career trying to uncover what lies beyond the zone. She first caught a whiff of something spectacular in the background when, in the 1980s, she found hints of a potential cluster of objects on old photographic survey plates. Over the next few decades, the hints of a large-scale structure kept coming.

Late last year, Kraan-Korteweg and colleagues announced that they had discovered an enormous cosmic structure: a “supercluster” of thousands upon thousands of galaxies. The collection spans 300 million light years, stretching both above and below the galactic plane like an ogre hiding behind a lamppost. The astronomers call it the Vela Supercluster, for its approximate position around the constellation Vela.

Renée Kraan-Korteweg, an astronomer at the University of Cape Town, has spent decades trying to peer through the Zone of Avoidance.
University of Cape Town

Milky Way Movers

The Milky Way, just like every galaxy in the cosmos, moves. While everything in the universe is constantly moving because the universe itself is expanding, since the 1970s astronomers have known of an additional motion, called peculiar velocity. This is a different sort of flow that we seem to be caught in. The Local Group of galaxies—a collection that includes the Milky Way, Andromeda and a few dozen smaller galactic companions—moves at about 600 kilometers per second with respect to the leftover radiation from the Big Bang.

Over the past few decades, astronomers have tallied up all the things that could be pulling and pushing on the Local Group — nearby galaxy clusters, superclusters, walls of clusters and cosmic voids that exert a non-negligible gravitational pull on our own neighborhood.

The biggest tugboat is the Shapley Supercluster, a behemoth of 50 million billion solar masses that resides about 500 million light years away from Earth (and not too far away in the sky from the Vela Supercluster). It accounts for between a quarter and half of the Local Group’s peculiar velocity.

The Milky Way as seen by the Gaia satellite shows the dark clouds of dust that obscure the view of galaxies in the universe beyond.

The remaining motion can’t be accounted for by structures astronomers have already found. So astronomers keep looking farther out into the universe, tallying increasingly distant objects that contribute to the net gravitational pull on the Milky Way. Gravitational pull decreases with increasing distance, but the effect is partly offset by the increasing size of these structures. “As the maps have gone outward,” said Mike Hudson, a cosmologist at the University of Waterloo in Canada, “people continue to identify bigger and bigger things at the edge of the survey. We’re looking out farther, but there’s always a bigger mountain just out of sight.” So far astronomers have only been able to account for about 450 to 500 kilometers per second of the Local Group’s motion.

Astronomers still haven’t fully scoured the Zone of Avoidance to those same depths, however. And the Vela Supercluster discovery shows that something big can be out there, just out of reach.

In February 2014, Kraan-Korteweg and Michelle Cluver, an astronomer at the University of Western Cape in South Africa, set out to map the Vela Supercluster over a six-night observing run at the Anglo-Australian Telescope in Australia. Kraan-Korteweg, of the University of Cape Town, knew where the gas and dust in the Zone of Avoidance was thickest; she targeted individual spots where they had the best chance of seeing through the zone. The goal was to create a “skeleton,” as she calls it, of the structure. Cluver, who had prior experience with the instrument, would read off the distances to individual galaxies.

That project allowed them to conclude that the Vela Supercluster is real, and that it extends 20 by 25 degrees across the sky. But they still don’t understand what’s going on in the core of the supercluster. “We see walls crossing the Zone of Avoidance, but where they cross, we don’t have data at the moment because of the dust,” Kraan-Korteweg said. How are those walls interacting? Have they started to merge? Is there a denser core, hidden by the Milky Way’s glow?

And most important, what is the Vela Supercluster’s mass? After all, it is mass that governs the pull of gravity, the buildup of structure.

How to See Through the Haze

While the Zone’s dust and stars block out light in optical and infrared wavelengths, radio waves can pierce through the region. With that in mind, Kraan-Korteweg has a plan to use a type of cosmic radio beacon to map out everything behind the thickest parts of the Zone of Avoidance.

The plan hinges on hydrogen, the simplest and most abundant gas in the universe. Atomic hydrogen is made of a single proton and an electron. Both the proton and the electron have a quantum property called spin, which can be thought of as a little arrow attached to each particle. In hydrogen, these spins can line up parallel to each other, with both pointing in the same direction, or antiparallel, pointing in opposite directions. Occasionally a spin will flip—a parallel atom will switch to antiparallel. When this happens, the atom will release a photon of light with a particular wavelength.
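The photon released by this spin flip is the famous 21-centimeter line of neutral hydrogen. Its rest frequency, about 1420.4 MHz, is a well-measured physical constant, and the wavelength follows directly from λ = c/f:

```python
C = 2.99792458e8        # speed of light, m/s
F_HI = 1.420405751e9    # hydrogen spin-flip rest frequency, Hz

wavelength_m = C / F_HI  # about 0.211 m, i.e. roughly 21 cm
```

Because this wavelength is so long compared with visible light, it sails through the dust of the Zone of Avoidance that blocks optical surveys.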

One of the 64 antenna dishes that will make up the MeerKAT telescope in South Africa.
SKA South Africa

The likelihood of one hydrogen atom’s emitting this radio wave is low, but gather a lot of neutral hydrogen gas together, and the chance of detecting it increases. Luckily for Kraan-Korteweg and her colleagues, many of Vela’s member galaxies have a lot of this gas.

During that 2014 observing session, she and Cluver saw indications that many of their identified galaxies host young stars. “And if you have young stars, it means they recently formed, it means there’s gas,” Kraan-Korteweg said, because gas is the raw material that makes stars.

The Milky Way has some of this hydrogen, too—another foreground haze to interfere with observations. But the expansion of the universe can be used to identify hydrogen coming from the Vela structure. As the universe expands, it pulls away galaxies that lie outside our Local Group and shifts the radio light toward the red end of the spectrum. “Those emission lines separate, so you can pick them out,” said Thomas Jarrett, an astronomer at the University of Cape Town and part of the Vela Supercluster discovery team.
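The separation of the emission lines is a direct consequence of the redshift formula: light from a receding galaxy arrives at frequency f_rest/(1 + z). The redshift used below (z ≈ 0.06, roughly appropriate for a structure a few hundred megaparsecs away) is an illustrative assumption, not a number from the article.

```python
F_HI_MHZ = 1420.405751  # hydrogen spin-flip rest frequency, MHz

def observed_frequency(z):
    """Frequency at which a source at redshift z emits the 21-cm line, as seen by us."""
    return F_HI_MHZ / (1 + z)

milky_way = observed_frequency(0.0)   # foreground hydrogen: 1420.4 MHz
vela_like = observed_frequency(0.06)  # shifted down to about 1340 MHz
# The two lines land at clearly different frequencies, so the Milky Way's
# own hydrogen can be picked out and ignored.
```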

While Kraan-Korteweg’s work over her career has dug up some 5,000 galaxies in the Vela Supercluster, she is confident that a sensitive enough radio survey of this neutral hydrogen gas will triple that number and reveal structures that lie behind the densest part of the Milky Way’s disk.

That’s where the MeerKAT radio telescope enters the picture. Located near the small desert town of Carnarvon, South Africa, the instrument will be more sensitive than any other radio telescope on Earth. Its 64th and final antenna dish was installed in October, although some dishes still need to be linked together and tested. A half array of 32 dishes should be operating by the end of this year, with the full array following early next year.

Kraan-Korteweg has been pushing over the past year for observing time in this half-array stage, but if she isn’t awarded her requested 200 hours, she’s hoping for 50 hours on the full array. Both options provide the same sensitivity, which she and her colleagues need to detect the radio signals of neutral hydrogen in thousands of individual galaxies hundreds of millions of light-years away. Armed with that data, they’ll be able to map what the full structure actually looks like.

Cosmic Basins

Hélène Courtois, an astronomer at the University of Lyon, is taking a different approach to mapping Vela. She makes maps of the universe that she compares to watersheds, or basins. In certain areas of the sky, galaxies migrate toward a common point, just as all the rain in a watershed flows into a single lake or stream. She and her colleagues look for the boundaries, the tipping points of where matter flows toward one basin or another.

Hélène Courtois, an astronomer at the University of Lyon, maps cosmic structure by examining the flow of galaxies.
Eric Leroux/Université Claude Bernard Lyon 1

A few years ago, Courtois and colleagues used this method to attempt to define our local large-scale structure, which they call Laniakea. The emphasis on defining is important, Courtois explains, because while we have definitions of galaxies and galaxy clusters, there’s no commonly agreed-upon definition for larger-scale structures in the universe such as superclusters and walls.

Part of the problem is that there just aren’t enough superclusters to arrive at a statistically rigorous definition. We can list the ones we know about, but as aggregate structures filled with thousands of galaxies, superclusters show an unknown amount of variation.

Now Courtois and colleagues are turning their attention farther out. “Vela is the most intriguing,” Courtois said. “I want to try to measure the basin of attraction, the boundary, the frontier of Vela.” She is using her own data to find the flows that move toward Vela, and from that she can infer how much mass is pulling on those flows. By comparing those flow lines to Kraan-Korteweg’s map showing where the galaxies physically cluster together, they can try to address how dense a supercluster Vela is and how far it extends. “The two methods are totally complementary,” Courtois added.

The two astronomers are now collaborating on a map of Vela. When it’s complete, they hope to use it to nail down Vela’s mass and, with it, the missing piece of the Local Group’s motion—“that discrepancy that has been haunting us for 25 years,” Kraan-Korteweg said. And even if the supercluster isn’t responsible for that remaining motion, collecting signals through the Zone of Avoidance from whatever is back there will help resolve our place in the universe.


Read more:

A Retiree Discovers an Elusive Math Proof—And Nobody Notices

As he was brushing his teeth on the morning of July 17, 2014, Thomas Royen, a little-known retired German statistician, suddenly lit upon the proof of a famous conjecture at the intersection of geometry, probability theory, and statistics that had eluded top experts for decades.


Known as the Gaussian correlation inequality (GCI), the conjecture originated in the 1950s, was posed in its most elegant form in 1972 and has held mathematicians in its thrall ever since. “I know of people who worked on it for 40 years,” said Donald Richards, a statistician at Pennsylvania State University. “I myself worked on it for 30 years.”

Royen hadn’t given the Gaussian correlation inequality much thought before the raw idea for how to prove it came to him over the bathroom sink. Formerly an employee of a pharmaceutical company, he had moved on to a small technical university in Bingen, Germany, in 1985 in order to have more time to improve the statistical formulas that he and other industry statisticians used to make sense of drug-trial data. In July 2014, still at work on his formulas as a 67-year-old retiree, Royen found that the GCI could be extended into a statement about statistical distributions he had long specialized in. On the morning of the 17th, he saw how to calculate a key derivative for this extended GCI that unlocked the proof. “The evening of this day, my first draft of the proof was written,” he said.

Not knowing LaTeX, the word processor of choice in mathematics, he typed up his calculations in Microsoft Word, and the following month he posted his paper to the academic preprint site arxiv.org. He also sent it to Richards, who had briefly circulated his own failed attempt at a proof of the GCI a year and a half earlier. “I got this article by email from him,” Richards said. “And when I looked at it I knew instantly that it was solved.”

Upon seeing the proof, "I really kicked myself," Richards said. Over the decades, he and other experts had been attacking the GCI with increasingly sophisticated mathematical methods, certain that bold new ideas in convex geometry, probability theory or analysis would be needed to prove it. Some mathematicians, after years of toiling in vain, had come to suspect the inequality was actually false. In the end, though, Royen's proof was short and simple, filling just a few pages and using only classic techniques. Richards was shocked that he and everyone else had missed it. "But on the other hand I have to also tell you that when I saw it, it was with relief," he said. "I remember thinking to myself that I was glad to have seen it before I died." He laughed. "Really, I was so glad I saw it."

Rüdiger Nehmzow/Quanta Magazine

Richards notified a few colleagues and even helped Royen retype his paper in LaTeX to make it appear more professional. But other experts whom Richards and Royen contacted seemed dismissive of his dramatic claim. False proofs of the GCI had been floated repeatedly over the decades, including two that had appeared on arxiv.org since 2010. Boaz Klartag of the Weizmann Institute of Science and Tel Aviv University recalls receiving the batch of three purported proofs, including Royen's, in an email from a colleague in 2015. When he checked one of them and found a mistake, he set the others aside for lack of time. For this reason and others, Royen's achievement went unrecognized.

Proofs of obscure provenance are sometimes overlooked at first, but usually not for long: A major paper like Royen's would normally get submitted and published somewhere like the Annals of Statistics, experts said, and then everybody would hear about it. But Royen, not having a career to advance, chose to skip the slow and often demanding peer-review process typical of top journals. He opted instead for quick publication in the Far East Journal of Theoretical Statistics, a periodical based in Allahabad, India, that was largely unknown to experts and which, on its website, rather suspiciously listed Royen as an editor. (He had agreed to join the editorial board the year before.)

With this red flag emblazoned on it, the proof continued to be ignored. Finally, in December 2015, the Polish mathematician Rafał Latała and his student Dariusz Matlak put out a paper advertising Royen's proof, reorganizing it in a way some people found easier to follow. Word is now getting around. Tilmann Gneiting, a statistician at the Heidelberg Institute for Theoretical Studies, just 65 miles from Bingen, said he was shocked to learn in July 2016, two years after the fact, that the GCI had been proved. The statistician Alan Izenman, of Temple University in Philadelphia, still hadn't heard about the proof when asked for comment last month.

No one is quite sure how, in the 21st century, news of Royen's proof managed to travel so slowly. "It was clearly a lack of communication in an age where it's very easy to communicate," Klartag said.

"But anyway, at least we found it," he added, "and it's beautiful."

In its most famous form, formulated in 1972, the GCI links probability and geometry: It places a lower bound on a player's odds in a game of darts, including hypothetical dart games in higher dimensions.

Lucy Reading-Ikkanda/Quanta Magazine

Imagine two convex shapes, such as a rectangle and a circle, centered on a point that serves as the target. Darts thrown at the target will land in a bell curve, or Gaussian distribution, of positions around the center point. The Gaussian correlation inequality says that the probability that a dart will land inside both the rectangle and the circle is always as high as or higher than the individual probability of its landing inside the rectangle multiplied by the individual probability of its landing in the circle. In plainer terms, because the two shapes overlap, striking one increases your chances of also striking the other. The same inequality was thought to hold for any two convex symmetrical shapes with any number of dimensions centered on a point.
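The dart picture is easy to check numerically. The sketch below (the particular rectangle, circle and sample size are illustrative choices, not from the article) throws a million Gaussian "darts" at two shapes centered on the same point and compares the joint hit probability with the product of the individual probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Darts land in a standard 2-D Gaussian cloud around the target's center.
darts = rng.standard_normal((n, 2))
x, y = darts[:, 0], darts[:, 1]

# Two convex, symmetric shapes centered on the origin.
in_rect = (np.abs(x) < 1.0) & (np.abs(y) < 0.5)   # 2 x 1 rectangle
in_circle = x**2 + y**2 < 1.0                     # unit circle

p_rect = in_rect.mean()
p_circle = in_circle.mean()
p_both = (in_rect & in_circle).mean()

# GCI: P(hit rectangle AND circle) >= P(hit rectangle) * P(hit circle)
print(p_both, p_rect * p_circle)
```

With these shapes the joint probability comes out far above the product, as the inequality predicts; shrinking or rotating either shape changes the numbers but never flips the inequality.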

Special cases of the GCI have been proved (in 1977, for instance, Loren Pitt of the University of Virginia established it as true for two-dimensional convex shapes), but the general case eluded all mathematicians who tried to prove it. Pitt had been trying since 1973, when he first heard about the inequality over lunch with colleagues at a meeting in Albuquerque, New Mexico. "Being an arrogant young mathematician, I was shocked that grown men who were putting themselves off as respectable math and science people didn't know the answer to this," he said. He locked himself in his motel room and was sure he would prove or disprove the conjecture before coming out. "Fifty years or so later I still didn't know the answer," he said.

Despite hundreds of pages of calculations leading nowhere, Pitt and other mathematicians felt certain (and took his 2-D proof as evidence) that the convex geometry framing of the GCI would lead to the general proof. "I had developed a conceptual way of thinking about this that perhaps I was overly wedded to," Pitt said. "And what Royen did was kind of diametrically opposed to what I had in mind."

Royen's proof harked back to his roots in the pharmaceutical industry, and to the obscure origin of the Gaussian correlation inequality itself. Before it was a statement about convex symmetrical shapes, the GCI was conjectured in 1959 by the American statistician Olive Dunn as a formula for calculating simultaneous confidence intervals, or ranges that multiple variables are all estimated to fall in.

Suppose you want to estimate the weight and height ranges that 95 percent of a given population fall in, based on a sample of measurements. If you plot people's weights and heights on an x-y plot, the weights will form a Gaussian bell-curve distribution along the x-axis, and heights will form a bell curve along the y-axis. Together, the weights and heights follow a two-dimensional bell curve. You can then ask, what are the weight and height ranges (call them −w < x < w and −h < y < h) such that 95 percent of the population will fall inside the rectangle formed by these ranges?

If weight and height were independent, you could just calculate the individual odds of a given weight falling inside −w < x < w and a given height falling inside −h < y < h, then multiply them to get the odds that both conditions are satisfied. But weight and height are correlated. As with darts and overlapping shapes, if someone's weight lands in the normal range, that person is more likely to have a normal height. Dunn, generalizing an inequality posed three years earlier, conjectured the following: The probability that both Gaussian random variables will simultaneously fall inside the rectangular region is always greater than or equal to the product of the individual probabilities of each variable falling in its own specified range. (This can be generalized to any number of variables.) If the variables are independent, then the joint probability equals the product of the individual probabilities. But any correlation between the variables causes the joint probability to increase.
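A quick simulation makes Dunn's conjectured inequality concrete. The correlation value and the ranges below are arbitrary choices for illustration, with both variables standardized:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
rho = 0.6  # assumed correlation between "weight" and "height"

# Draw correlated, standardized weight/height pairs from a bivariate Gaussian.
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

in_x = np.abs(x) < 1.0   # weight inside its range
in_y = np.abs(y) < 1.0   # height inside its range

p_joint = (in_x & in_y).mean()          # both conditions at once
p_product = in_x.mean() * in_y.mean()   # product of the marginals
print(p_joint, p_product)  # the joint probability exceeds the product
```

Setting `rho = 0.0` makes the two numbers agree (independence), while any nonzero correlation pushes the joint probability above the product, exactly as the conjecture states.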

Royen found that he could generalize the GCI to apply not just to Gaussian distributions of random variables but to more general statistical spreads related to the squares of Gaussian distributions, called gamma distributions, which are used in certain statistical tests. "In mathematics, it occurs frequently that a seemingly difficult special problem can be solved by answering a more general question," he said.

Rüdiger Nehmzow/Quanta Magazine

Royen represented the amount of correlation between variables in his generalized GCI by a factor we might call C, and he defined a new function whose value depends on C. When C = 0 (corresponding to independent variables like weight and eye color), the function equals the product of the separate probabilities. When you crank up the correlation to the maximum, C = 1, the function equals the joint probability. To prove that the latter is bigger than the former and the GCI is true, Royen needed to show that his function always increases as C increases. And it does so if its derivative, or rate of change, with respect to C is always positive.
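In a two-variable toy version (all parameters invented for illustration, and far simpler than Royen's gamma-distribution setting), the interpolation can be mimicked by scaling the correlation by the factor C and watching the joint probability climb from the product of the marginals (C = 0) toward the full joint probability (C = 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
rho = 0.8  # full-strength correlation

# Common random numbers: the same underlying draws serve every C,
# so sampling noise largely cancels when probabilities are compared.
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

def joint_prob(c):
    """Estimate P(|X| < 1 and |Y| < 1) when corr(X, Y) = c * rho."""
    r = c * rho
    x = z1
    y = r * z1 + np.sqrt(1.0 - r * r) * z2  # Y correlated with X by r
    return ((np.abs(x) < 1.0) & (np.abs(y) < 1.0)).mean()

probs = [joint_prob(c) for c in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(probs)  # increases with C
```

The estimated probabilities rise monotonically in C, which is the numerical shadow of what Royen proved analytically: the derivative of his function with respect to C is always positive.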

His familiarity with gamma distributions sparked his bathroom-sink epiphany. He knew he could apply a classic trick to transform his function into a simpler function. Suddenly, he recognized that the derivative of this transformed function was equivalent to the transform of the derivative of the original function. He could easily show that the latter derivative was always positive, proving the GCI. "He had formulas that enabled him to pull off his magic," Pitt said. "And I didn't have the formulas."

Any graduate student in statistics could follow the arguments, experts say. Royen said he hopes the surprisingly simple proof might encourage young students to use their own creativity to find new mathematical theorems, since a very high theoretical level is not always required.

Some researchers, however, still want a geometric proof of the GCI, which would help explain strange new facts in convex geometry that are only de facto implied by Royen's analytic proof. In particular, Pitt said, the GCI defines an interesting relationship between vectors on the surfaces of overlapping convex shapes, which could blossom into a new subdomain of convex geometry. "At least now we know it's true," he said of the vector relationship. But "if someone could see their way through this geometry we'd understand a class of problems in a way that we just don't today."

Beyond the GCI's geometric implications, Richards said a variation on the inequality could help statisticians better predict the ranges in which variables like stock prices fluctuate over time. In probability theory, the GCI proof now permits exact calculations of rates that arise in "small-ball" probabilities, which are related to the random paths of particles moving in a fluid. Richards says he has conjectured a few inequalities that extend the GCI, and which he might now try to prove using Royen's approach.

Royen's main interest is in improving the practical computation of the formulas used in many statistical tests: for instance, for determining whether a drug causes fatigue based on measurements of several variables, such as patients' reaction time and body sway. He said that his extended GCI does indeed sharpen these tools of his old trade, and that some of his other recent work related to the GCI has offered further improvements. As for the proof's muted reception, Royen wasn't particularly disappointed or surprised. "I am used to being frequently ignored by scientists from [top-tier] German universities," he wrote in an email. "I am not so talented for networking and many contacts. I do not need these things for the quality of my life."

The feeling of deep joy and gratitude that comes from finding an important proof has been reward enough. "It is like a kind of grace," he said. "We can work for a long time on a problem and suddenly an angel, [which] stands here poetically for the mysteries of our neurons, brings a good idea."


The Beauty of Mathematics: It Can Never Lie to You

A few years back, a prospective doctoral student sought out Sylvia Serfaty with some existential questions about the apparent uselessness of pure math. Serfaty, then newly decorated with the prestigious Henri Poincaré Prize, won him over simply by being honest and nice. "She was very warm and understanding and human," said Thomas Leblé, now an instructor at the Courant Institute of Mathematical Sciences at New York University. "She made me feel that even if at times it might seem futile, at least it would be friendly. The intellectual and human adventure would be worth it." For Serfaty, mathematics is about building scientific and human connections. But as Leblé recalled, Serfaty also emphasized that a mathematician has to find satisfaction in "weaving one's own rug," alluding to the patient, solitary work that comes first.


Born and raised in Paris, Serfaty first became intrigued by mathematics in high school. Ultimately she gravitated toward physics problems, constructing mathematical tools to forecast what should happen in physical systems. For her doctoral research in the late 1990s, she focused on the Ginzburg-Landau equations, which describe superconductors and their vortices that turn like little whirlwinds. The problem she tackled was to determine when, where and how the vortices appear in the static (time-independent) ground state. She solved this problem with increasing detail over the course of more than a decade, together with Étienne Sandier of the University of Paris-East, with whom she co-authored the book Vortices in the Magnetic Ginzburg-Landau Model.

In 1998, Serfaty discovered an irresistibly puzzling problem about how these vortices evolve in time. She decided that this was the problem she really wanted to solve. Thinking about it initially, she got stuck and abandoned it, but now and then she circled back. For years, with collaborators, she built tools that she hoped might eventually provide pathways to the desired destination. In 2015, after almost 18 years, she finally hit upon the right point of view and arrived at the solution.

"First you start from a vision that something should be true," Serfaty said. "I think we have software, so to speak, in our brain that allows us to judge that moral quality, that truthful quality to a statement."

Stefan Falke for Quanta Magazine

And, she noted, "you cannot be cheated, you cannot be lied to. A thing is true or not true, and there is this notion of clarity on which you can base yourself."

In 2004, at age 28, she won the European Mathematical Society prize for her work analyzing the Ginzburg-Landau model; this was followed by the Poincaré Prize in 2012. Last September, the piano-playing, bicycle-riding mother of two returned as a full-time faculty member to the Courant Institute, where she had held various positions since 2001. By her count, she is one of five women among about 60 full-time faculty members in the math department, a ratio she figures is unlikely to balance itself out anytime soon.

Quanta Magazine talked with Serfaty in January at the Courant Institute. An edited and condensed version of the conversation follows.

When did you find mathematics?

In high school, there was one episode that crystallized it for me: We had assignments, little problems to solve at home, and one of them seemed very difficult. I had been thinking about it and thinking about it, and wandering around trying to find a solution. And in the end I came up with a solution that was not the one that was expected; it was more general than the problem was calling for, making it more abstract. So when the teacher gave the solutions, I proposed mine as an alternative, and I think everybody was surprised, including the teacher herself.

I was happy that I'd found a creative solution. I was a teenager, and a little bit idealistic. I wanted to have a creative impact, and research seemed like a beautiful profession. I knew I was not an artist. My dad is an architect and he's really an artist, in the full sense of the word. I always compared myself to that image: the guy who has talent, has a gift. That played a role in building my self-perception of what I could do and what I wanted to achieve.

So you don't think of yourself as having a gift; you weren't a prodigy.

No. We do a disservice to the profession by giving this image of little geniuses and prodigies. These Hollywood movies about scientists can be somewhat counterproductive, too. They are telling children that there are geniuses out there that do really cool stuff, and kids may think, "Oh, that's not me." Maybe 5 percent of the profession fits that stereotype, but 95 percent doesn't. You don't have to be among the 5 percent to do interesting math.

For me, it took a lot of faith and believing in my little dream. My parents told me, "You can do anything, you should go for it." My mother is a teacher and she always told me I was at the top of my cohort and that if I didn't succeed, who will? My first university math teacher played a big role and really believed in my potential, and then as I pursued my studies, my intuition was confirmed that I really liked math: I liked the beauty of it, and I liked the challenge.

So you have to be comfortable with frustration if you want to be a mathematician?

That's research. You enjoy solving a problem if you have difficulty solving it. The fun is in the struggle with a problem that resists. It's the same kind of pleasure as with hiking: You hike uphill and it's tough and you sweat, and at the end of the day the reward is the beautiful view. Solving a math problem is a bit like that, but you don't always know where the path is and how far you are from the top. You have to be able to accept frustration, failure, your own limitations. Of course you have to be good enough; that's a minimum requirement. But if you have enough ability, then you cultivate it and build on it, just as a musician plays scales and practices to get to a top level.

How do you tackle a problem?

One of the first pieces of advice I got as I was starting my Ph.D. was from Tristan Rivière (a previous student of my adviser, Fabrice Béthuel), who told me: "People think that research in math is about these big ideas, but no, you really have to start from simple, stupid computations: start again like a student and redo everything yourself." I found that this is so true. A lot of good research actually starts from very simple things, elementary facts, basic bricks, from which you can build a big cathedral. Progress in math comes from understanding the model case, the simplest instance in which you encounter the problem. And often it is an easy computation; it's just that no one had thought of looking at it this way.

Do you cultivate that perspective, or does it come naturally?

This is all I know how to do. I tell myself that there are always very bright people who have thought about these problems and made very beautiful and elaborate theories, and certainly I cannot always compete on that end. But let me try to rethink the problem almost from scratch with my own little basic understanding and knowledge and see where I go. Of course, I have built enough experience and intuition that I sort of pretend to be naive. In the end, I think a lot of mathematicians proceed this way, but maybe they don't want to admit it, because they don't want to appear simple-minded. There is a lot of ego in this profession, let's be honest.

Does the ego help or hinder mathematical ambition?

We do math research because we like the problems, and we enjoy finding solutions, but I think maybe half of it is because we want to impress others. Would you do math if you were on a desert island and there was no one to admire your beautiful proof? We prove theorems because there is an audience to communicate it to. A lot of the motivation is presenting the work at the next conference and seeing what colleagues think. And then people appreciate it and provide positive feedback, and this feeds the motivation. And then you may get prizes, and if so, maybe you get even more prizes because you already have prizes. And you get published in good journals, and you keep track of how many papers you published and how many citations you got on MathSciNet, and you inevitably get in the habit of sometimes comparing yourself to your friends. You are constantly judged by your peers.

This is a system that increases people's productivity. It works very well to push people to publish and to work, because they want to maintain their ranking. But it also puts a lot of ego into it. And at some point I think it's too much. We need to put more focus on the real scientific progress, rather than on the signs of wealth, so to speak. And I certainly think this aspect is not very female-friendly. There's also the nerd stereotype; I don't think of myself as a nerd. I don't identify with that culture. And I don't think that because I'm a mathematician I have to be a nerd.

Stefan Falke for Quanta Magazine

Would more women in the field help shift the balance?

I'm not super-optimistic, in terms of women in the field. I don't think it's a problem that is going to naturally resolve itself. The numbers over the last 20 years are not a great improvement, sometimes even decreasing.

The question is: Can you convince men that it would really be better for science and math if there were more women around? I'm not sure they are all convinced. Would it be better? Why? Would it make their life better, would it make the math better? I tend to think it would be better.

In what way?

It's good to have a diversity of frames of mind. Two different mathematicians think in two slightly different ways, and women do tend to think a little bit differently. Math is not about everybody staring at a problem and trying to solve it. We don't even know where the problems are. Some people decide they are going to explore over here, and some people explore over there. That's why you need people with different points of view, to think of different perspectives and find different roads.

In your own work over the past two decades, you've specialized in one area of mathematical physics, but this has led you in a variety of directions.

It's really beautiful to observe, as you progress in your mathematical maturity, how everything is somehow connected. There are so many things that are related, and you keep building connections in your intellectual landscape. With experience you develop a point of view that is pretty much unique to yourself; somebody else would come at it from a different angle. That's what's fruitful, and that's how you can solve problems that maybe somebody smarter than you wouldn't solve just because they don't have the necessary perspective.

And your approach has unexpectedly opened doors to other fields. How did that come about?

One important question I had from the beginning was to understand the patterns of the vortices. Physicists knew from experiments that the vortices form triangular lattices, called Abrikosov lattices, and so the question was to prove why they form these patterns. This we never completely answered, but we have made progress. A paper we published in 2012 rigorously connected the Ginzburg-Landau problem of vortices with a crystallization problem for the first time. And this problem, as it turns out, arises in other areas of math, such as number theory and statistical mechanics and random matrices.

What we proved was that the vortices in the superconductor behave like particles with what's called a Coulomb interaction: essentially, the vortices act like electric charges and repel each other. You can think of the particles as people who don't like each other but are forced to stay in the same room: where should they stand to minimize their repulsion to others?
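A toy sketch of that picture (not Serfaty's actual Ginzburg-Landau analysis; the particle count, step size and confinement below are invented for illustration): particles with a 2-D Coulomb, i.e. logarithmic, repulsion held in a quadratic "room" spread apart as plain gradient descent lowers their total energy.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40

# Random starting positions inside the "room".
pts = rng.standard_normal((n, 2))

def energy(p):
    # Quadratic confinement keeps the particles in the room; the 2-D
    # Coulomb (logarithmic) term makes every pair repel.
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return 0.5 * (p ** 2).sum() - np.log(d[iu]).sum()

def grad(p):
    diff = p[:, None, :] - p[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # no self-interaction
    repulsion = (diff / d2[:, :, None]).sum(axis=1)
    return p - repulsion

e0 = energy(pts)
for _ in range(4000):
    pts -= 0.0005 * grad(pts)  # plain gradient descent
e1 = energy(pts)
print(e0, e1)  # energy drops as the particles push apart
```

The final configuration is the discrete analogue of the "where should they stand" question; finding and describing such minimizers rigorously is the crystallization problem the article mentions.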

Was it difficult to cross over into a new area?

It was a challenge, because I had to learn the basics of a new subject area and nobody knew me in that field. And initially there was some skepticism about our results. But arriving as newcomers allowed us to develop a new point of view because we weren't burdened by any preconceived notions; ignorance is helpful in this instance.

Some mathematicians, they start with something, they know how to do it, and then they create variants, like derivative products: You make the film and then you sell the T-shirts, and then you sell the mugs. I think the way that you can distinguish good mathematicians is that they are constantly moving further and forward and advancing onto new ground.



Biologists Are Figuring Out How Cells Tell Left From Right

In 2009, after she was diagnosed with stage 3 breast cancer, Ann Ramsdell began to search the scientific literature to see if someone with her diagnosis could make a full recovery. Ramsdell, a developmental biologist at the University of South Carolina, soon found something strange: The odds of recovery differed for women who had cancer in the left breast versus the right. Even more surprisingly, she found research suggesting that women with asymmetric breast tissue are more likely to develop cancer.


Asymmetry is not readily apparent. Yet below the skin, asymmetric structures are common. Consider how our gut winds its way through the abdominal cavity, sprouting unpaired organs as it goes. Or how our heart, born from two identical structures fused together, twists itself into an asymmetrical pump that can simultaneously push oxygen-rich blood around the body and draw in a new swig from the lungs, all in a heartbeat. The body's natural asymmetry is crucially important to our well-being. But, as Ramsdell knew, it was all too often ignored.

In her early years as a scientist, Ramsdell never gave asymmetry much thought. But on the day of her dissertation defense, she put a borrowed slide into a projector (this in the days before PowerPoint). The slide was of a chick embryo at the stage where its heart begins to loop to one side. Afterward a colleague asked why she put the slide in backward. "It's an embarrassing story," she said, "but I had never even thought about the directionality of heart looping." The chick's developing heart could distinguish between left and right, same as ours. She went on to do her postdoctoral research on why the heart loops to one side.

Years later, after her recovery, Ramsdell decided to leave the heart behind and to start looking for asymmetry in the mammary glands of mammals. In marsupials like wallabies and kangaroos, she read, the left and the right glands produce a different kind of milk, geared toward offspring of different ages. But her initial studies of mice proved disappointing: their left and right mammary glands didn't seem to differ at all.

The wrybill uses its laterally curved bill to reach insect larvae under rounded riverbed stones. Steve Atwood

Then she zoomed in on the genes and proteins that are active in different cells of the breast. There she found strong differences. The left breast, which appears to be more prone to cancer, also tends to have a higher number of unspecialized cells, according to unpublished work that's undergoing peer review. Those allow the breast to repair damaged tissue, but since they have a higher capacity to divide, they can also be involved in tumor formation. Why the cells are more common on the left, Ramsdell has not yet figured out. "But we think it has to do with the embryonic environment the cells grow up in, which is quite different on both sides."

Ramsdell and a cadre of other developmental biologists are trying to unravel why organisms can tell their right from left. It's a complex process, but the key orchestrators of the handedness of life are beginning to come into clearer focus.

A Left Turn

In the 1990s, scientists studying the activity of different genes in the developing embryo discovered something surprising. In every vertebrate embryo examined so far, a gene called Nodal appears on the left side of the embryo. It is closely followed by its collaborator Lefty, a gene that suppresses Nodal activity on the embryo's right. "The Nodal-Lefty team appears to be the most important genetic pathway that guides asymmetry," said Cliff Tabin, an evolutionary biologist at Harvard University who played a central role in the initial research into Nodal and Lefty.

But what triggers the emergence of Nodal and Lefty inside the embryo? The developmental biologist Nobutaka Hirokawa came up with an explanation that is "so elegant we all want to believe it," Tabin said. Most vertebrate embryos start out as a tiny disk. On the bottom side of this disk, there's a little pit, the floor of which is covered in cilia: flickering cell extensions that, Hirokawa suggested, create a leftward current in the surrounding fluid. A 2002 study confirmed that a change in flow direction could change the expression of Nodal as well.

The twospot flounder lies on the seafloor on its right side, with both eyes on its left side. SEFSC Pascagoula Laboratory; Collection of Brandi Noble, NOAA/NMFS/SEFSC

Damaged cilia have long been associated with asymmetry-related disease. In Kartagener syndrome, for example, immobile cilia in the windpipe cause breathing difficulties. Intriguingly, the body asymmetry of people with the syndrome is often entirely inverted, forming an almost perfect mirror image of what it would otherwise be. In the early 2000s, researchers discovered that the syndrome was caused by defects in a number of proteins driving movement in cells, including those of the cilia. In addition, a 2015 Nature study identified two dozen mouse genes related to cilia that give rise to unusual asymmetries when defective.

Yet cilia cannot be the whole story. "Many animals, even some mammals, don't have a ciliated pit," said Michael Levin, a biologist at Tufts University who was the first author on some of the Nodal papers from Tabin's lab in the 1990s.

In addition, the motor proteins critical for normal asymmetry development don't only occur in the cilia, Levin said. They also work with the cellular skeleton, a network of sticks and strands that provides structure to the cell, to guide its movements and transport cellular components.

An increasing number of studies suggest that this may give rise to asymmetry within individual cells as well. Cells have a kind of handedness, said Leo Wan, a biomedical engineer at the Rensselaer Polytechnic Institute. When they hit an obstacle, some types of cells will turn left while others will turn right. Wan has created a test that consists of a plate with two concentric, circular ridges. "We place cells between those ridges, then watch them move around," he said. "When they hit one of the ridges, they turn, and their preferred direction is clearly visible."

The red crossbill uses its unique beak to access the seeds in conifer cones. Jason Crotty

Wan believes the cell's preference depends on the interplay between two elements of the cellular skeleton: actin and myosin. Actin is a protein that forms trails throughout the cell. Myosin, another protein, moves across these trails, often while dragging other cellular components along. Both proteins are well-known for their activity in muscle cells, where they are crucial for contraction. Kenji Matsuno, a cellular biologist at Osaka University, has discovered a series of what he calls "unconventional myosins" that appear crucial to asymmetrical development. Matsuno agrees that myosins are likely causing cell handedness.

Consider the fruit fly. It lacks both the ciliated pit as well as Nodal, yet it develops an asymmetric hindgut. Matsuno has demonstrated that the handedness of cells in the hindgut depends on myosin and that the handedness reflected by the cells' initial tilt is what guides the gut's development. "The cells' handedness does not just define how they move, but also how they hold on to each other," he explains. Together those wrestling cells create a hindgut that curves and turns exactly the way it's supposed to. A similar process was described in the roundworm C. elegans.

Nodal isn't necessary for the development of all asymmetry in vertebrates, either. In a study published in Nature Communications in 2013, Jeroen Bakkers, a biologist at the Hubrecht Institute in the Netherlands, described how the zebrafish heart can curve to the right in the absence of Nodal. In fact, he went on to show that it even does so when removed from the body and placed in a simple lab dish. That being said, he adds, in animals without Nodal the heart did not shift left as it should, nor did it turn correctly. Though some asymmetry originates within, the cells do need Nodal's help.

The European red slug has a large, dark respiratory pore on its right side. (Credit: Hans Hillewaert)

For Tabin, experiments like this show that while Nodal may not be the entire story, it is the most crucial factor in the development of asymmetry. "From the standpoint of evolution, it turns out, breaking symmetry wasn't that difficult," he said. There are multiple ways of doing it, and different organisms have done it in different ways. The key problem evolution had to solve was making asymmetry reliable and robust, he said. "Lefty and Nodal together are a way of making sure that asymmetry is robust."

Yet others believe that important links are waiting to be discovered. Research from Levin's lab suggests that communication among cells may be an under-explored factor in the development of asymmetry.

The cellular skeleton also directs the transport of specialized proteins to the cell surface, Levin said. Some of these allow cells to communicate by exchanging electrical charges. This electrical communication, his research suggests, may direct the movements of cells as well as how the cells express their genes. "If we block the [communication] channels, asymmetrical development always goes awry," he said. "And by manipulating this system, we've been able to guide development in surprising but predictable directions, creating six-legged frogs, four-headed worms or froglets with an eye for a gut, without changing their genomes at all."

The apparent ability of developing organisms to detect and correct their own shape fuels Levin's belief that self-repair might one day be an option for humans as well. "Under every rock, there is a creature that can repair its complex body all by itself," he points out. If we can figure out how this works, Levin said, it might revolutionize medicine. "Many people think I'm too optimistic, but I have the engineering view on this: Anything that's not forbidden by the laws of physics is possible."


Read more:

How Life (and Death) Spring From Disorder

Whats the difference between physics and biology? Take a golf ball and a cannonball and drop them off the Tower of Pisa. The laws of physics allow you to predict their trajectories pretty much as accurately as you could wish for.


Now do the same experiment again, but replace the cannonball with a pigeon.

Biological systems don't defy physical laws, of course, but neither do they seem to be predicted by them. On the contrary, they are goal-directed: survive and reproduce. We can say that they have a purpose, or what philosophers have traditionally called a teleology, that guides their behavior.

By the same token, physics now lets us predict, starting from the state of the universe a billionth of a second after the Big Bang, what it looks like today. But no one imagines that the appearance of the first primitive cells on Earth led predictably to the human race. Laws do not, it seems, dictate the course of evolution.

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology's only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.

Mayr claimed that these features make biology exceptionala law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.

Once we regard living things as agents performing a computation, collecting and storing information about an unpredictable environment, capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention, thought to be the defining characteristics of living systems, may then emerge naturally through the laws of thermodynamics and statistical mechanics.

This past November, physicists, mathematicians and computer scientists came together with evolutionary and molecular biologists to talk, and sometimes argue, about these ideas at a workshop at the Santa Fe Institute in New Mexico, the mecca for the science of complex systems. They asked: Just how special (or not) is biology?

It's hardly surprising that there was no consensus. But one message that emerged very clearly was that, if there's a kind of physics behind biological teleology and agency, it has something to do with the same concept that seems to have become installed at the heart of fundamental physics itself: information.

Disorder and Demons

The first attempt to bring information and intention into the laws of thermodynamics came in the middle of the 19th century, when statistical mechanics was being invented by the Scottish scientist James Clerk Maxwell. Maxwell showed how introducing these two ingredients seemed to make it possible to do things that thermodynamics proclaimed impossible.

Maxwell had already shown how the predictable and reliable mathematical relationships between the properties of a gas (pressure, volume and temperature) could be derived from the random and unknowable motions of countless molecules jiggling frantically with thermal energy. In other words, thermodynamics, the new science of heat flow, which united large-scale properties of matter like pressure and temperature, was the outcome of statistical mechanics on the microscopic scale of molecules and atoms.

According to thermodynamics, the capacity to extract useful work from the energy resources of the universe is always diminishing. Pockets of energy are declining, concentrations of heat are being smoothed away. In every physical process, some energy is inevitably dissipated as useless heat, lost among the random motions of molecules. This randomness is equated with the thermodynamic quantity called entropy, a measure of disorder, which is always increasing. That is the second law of thermodynamics. Eventually, the whole universe will be reduced to a uniform, boring jumble: a state of equilibrium, wherein entropy is maximized and nothing meaningful will ever happen again.

Are we really doomed to that dreary fate? Maxwell was reluctant to believe it, and in 1867 he set out to, as he put it, "pick a hole" in the second law. His aim was to start with a disordered box of randomly jiggling molecules, then separate the fast molecules from the slow ones, reducing entropy in the process.

Imagine some little creature (the physicist William Thomson later called it, rather to Maxwell's dismay, a demon) that can see each individual molecule in the box. The demon separates the box into two compartments, with a sliding door in the wall between them. Every time he sees a particularly energetic molecule approaching the door from the right-hand compartment, he opens it to let it through. And every time a slow, cold molecule approaches from the left, he lets that through, too. Eventually, he has a compartment of cold gas on the right and hot gas on the left: a heat reservoir that can be tapped to do work.
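The demon's sorting rule can be sketched in a few lines. This is a toy, one-pass idealization in which every molecule is assumed to approach the door exactly once; the speed distribution and the cutoff are arbitrary illustrative values, not a physical simulation:

```python
import random

random.seed(0)
# Each molecule: (compartment, speed), speeds spread over a range of thermal values.
molecules = [(random.choice("LR"), random.uniform(0.0, 2.0)) for _ in range(10_000)]
THRESHOLD = 1.0  # the demon's cutoff between "fast" and "slow"

sorted_box = []
for side, speed in molecules:
    if side == "R" and speed > THRESHOLD:
        side = "L"   # fast molecule approaching from the right is let into the left
    elif side == "L" and speed <= THRESHOLD:
        side = "R"   # slow molecule approaching from the left is let into the right
    sorted_box.append((side, speed))

def mean_speed(box, side):
    speeds = [v for s, v in box if s == side]
    return sum(speeds) / len(speeds)

print(f"left (hot):   {mean_speed(sorted_box, 'L'):.2f}")
print(f"right (cold): {mean_speed(sorted_box, 'R'):.2f}")
```

Starting from compartments with the same average speed, the demon ends up with a markedly hotter left side and colder right side, without ever pushing on a molecule.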

This is only possible for two reasons. First, the demon has more information than we do: It can see all of the molecules individually, rather than just statistical averages. And second, it has intention: a plan to separate the hot from the cold. By exploiting its knowledge with intent, it can defy the laws of thermodynamics.

At least, so it seemed. It took a hundred years to understand why Maxwell's demon can't in fact defeat the second law and avert the inexorable slide toward deathly, universal equilibrium. And the reason shows that there is a deep connection between thermodynamics and the processing of information, or in other words, computation. The German-American physicist Rolf Landauer showed that even if the demon can gather information and move the (frictionless) door at no energy cost, a penalty must eventually be paid. Because it can't have unlimited memory of every molecular motion, it must occasionally wipe its memory clean, forgetting what it has seen and starting again, before it can continue harvesting energy. This act of information erasure has an unavoidable price: It dissipates energy, and therefore increases entropy. All the gains against the second law made by the demon's nifty handiwork are canceled by "Landauer's limit": the finite cost of information erasure (or more generally, of converting information from one form to another).
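Landauer's limit is a one-line formula: erasing a bit dissipates at least k_B T ln 2 of heat, where k_B is Boltzmann's constant and T the temperature of the memory's surroundings. At room temperature that works out to a few zeptojoules per bit:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's limit: minimum heat dissipated when one bit of memory is erased.
e_landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {e_landauer:.2e} J per bit")  # ≈ 2.87e-21 J
```

A tiny number, yet it is a floor no computing device, demon or silicon, can get under.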

Living organisms seem rather like Maxwell's demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with intention. Even simple bacteria move with purpose toward sources of heat and nutrition. In his 1944 book What Is Life?, the physicist Erwin Schrödinger expressed this by saying that living organisms "feed on negative entropy."

They achieve it, Schrödinger said, by capturing and storing information. Some of that information is encoded in their genes and passed on from one generation to the next: a set of instructions for reaping negative entropy. Schrödinger didn't know where the information is kept or how it is encoded, but his intuition that it is written into what he called an "aperiodic crystal" inspired Francis Crick, himself trained as a physicist, and James Watson when in 1953 they figured out how genetic information can be encoded in the molecular structure of DNA.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism's ancestors, right back to the distant past, to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with their environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it's this information that helps the organism stay out of equilibrium, because, like Maxwell's demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer's resolution of the conundrum of Maxwell's demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.

The implication, he said, is that natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform. In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one's way through life, he said, has been largely overlooked in biology so far.

Inanimate Darwinism

So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium. Sure, it's a bit of a mouthful. But notice that it says nothing about genes and evolution, on which Mayr, like many biologists, assumed that biological intention and purpose depend.

How far can this picture then take us? Genes honed by natural selection are undoubtedly central to biology. But could it be that evolution by natural selection is itself just a particular case of a more general imperative toward function and apparent purpose that exists in the purely physical universe? It is starting to look that way.

Adaptation has long been seen as the hallmark of Darwinian evolution. But Jeremy England at the Massachusetts Institute of Technology has argued that adaptation to the environment can happen even in complex nonliving systems.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well equipped for survival. One difficulty with the Darwinian view is that there's no way of defining a well-adapted organism except in retrospect. The "fittest" are those that turned out to be better at survival and replication, but you can't predict what fitness entails. Whales and plankton are well adapted to marine life, but in ways that bear little obvious relation to one another.

England's definition of adaptation is closer to Schrödinger's, and indeed to Maxwell's: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over, because she's better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.

Complex systems tend to settle into these well-adapted states with surprising ease, said England: "Thermally fluctuating matter often gets spontaneously beaten into shapes that are good at absorbing work from the time-varying environment."

There is nothing in this process that involves the gradual accommodation to the surroundings through the Darwinian mechanisms of replication, mutation and inheritance of traits. There's no replication at all. "What is exciting about this is that it means that when we give a physical account of the origins of some of the adapted-looking structures we see, they don't necessarily have to have had parents in the usual biological sense," said England. You can explain evolutionary adaptation using thermodynamics, even in intriguing cases where there are no self-replicators and Darwinian logic breaks down, so long as the system in question is complex, versatile and sensitive enough to respond to fluctuations in its environment.

But neither is there any conflict between physical and Darwinian adaptation. In fact, the latter can be seen as a particular case of the former. If replication is present, then natural selection becomes the route by which systems acquire the ability to absorb work (Schrödinger's negative entropy) from the environment. Self-replication is, in fact, an especially good mechanism for stabilizing complex systems, and so it's no surprise that this is what biology uses. But in the nonliving world, where replication doesn't usually happen, the well-adapted dissipative structures tend to be ones that are highly organized, like sand ripples and dunes crystallizing from the random dance of windblown sand. Looked at this way, Darwinian evolution can be regarded as a specific instance of a more general physical principle governing nonequilibrium systems.

Prediction Machines

This picture of complex structures adapting to a fluctuating environment allows us also to deduce something about how these structures store information. In short, so long as such structureswhether living or notare compelled to use the available energy efficiently, they are likely to become prediction machines.

It's almost a defining characteristic of life that biological systems change their state in response to some driving signal from the environment. Something happens; you respond. Plants grow toward the light; they produce toxins in response to pathogens. These environmental signals are typically unpredictable, but living systems learn from experience, storing up information about their environment and using it to guide future behavior. (Genes, in this picture, just give you the basic, general-purpose essentials.)

Prediction isn't optional, though. According to the work of Susanne Still at the University of Hawaii, Gavin Crooks, formerly at the Lawrence Berkeley National Laboratory in California, and their colleagues, predicting the future seems to be essential for any energy-efficient system in a random, fluctuating environment.

There's a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn't bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. A thermodynamically optimal machine must balance memory against prediction by minimizing its "nostalgia" (the useless information about the past), said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information, that which is likely to be useful for future survival.
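Still's formalism is more general, but the memory-versus-prediction trade-off can be illustrated with a minimal stand-in: an environment modeled as a two-state Markov chain (the persistence value 0.9 below is an arbitrary assumption). A memory of the current state carries more predictive information about the next state than an equally large but staler memory; the difference is the "nostalgia" an efficient system would shed:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a coin with bias p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def predictive_bits(p_stay):
    # Two-state symmetric Markov environment, uniform stationary distribution:
    # I(memory; future) = H(future) - H(future | memory) = 1 - h(p_stay).
    return 1.0 - binary_entropy(p_stay)

p = 0.9                                        # assumed persistence of the environment
fresh = predictive_bits(p)                     # memory = the current state
stale = predictive_bits(p * p + (1 - p) ** 2)  # memory = the state two steps ago
print(f"fresh memory predicts {fresh:.3f} bits of the next state")
print(f"stale memory predicts {stale:.3f} bits of the next state")
```

Both memories cost one bit to store, but the stale one buys noticeably less foresight, which is exactly the kind of useless stored past a thermodynamically optimal machine would erase.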

You'd expect natural selection to favor organisms that use energy efficiently. But even individual biomolecular devices like the pumps and motors in our cells should, in some important way, learn from the past to anticipate the future. To acquire their remarkable efficiency, Still said, these devices must implicitly construct concise representations of the world they have encountered so far, enabling them to anticipate what's to come.

The Thermodynamics of Death

Even if some of these basic information-processing features of living systems are already prompted, in the absence of evolution or replication, by nonequilibrium thermodynamics, you might imagine that more complex traits (tool use, say, or social cooperation) must be supplied by evolution.

Well, don't count on it. These behaviors, commonly thought to be the exclusive domain of the highly advanced evolutionary niche that includes primates and birds, can be mimicked in a simple model consisting of a system of interacting particles. The trick is that the system is guided by a constraint: It acts in a way that maximizes the amount of entropy (in this case, defined in terms of the different possible paths the particles could take) it generates within a given timespan.

Entropy maximization has long been thought to be a trait of nonequilibrium systems. But the system in this model obeys a rule that lets it maximize entropy over a fixed time window that stretches into the future. In other words, it has foresight. In effect, the model looks at all the paths the particles could take and compels them to adopt the path that produces the greatest entropy. Crudely speaking, this tends to be the path that keeps open the largest number of options for how the particles might move subsequently.

You might say that the system of particles experiences a kind of urge to preserve freedom of future action, and that this urge guides its behavior at any moment. The researchers who developed the model (Alexander Wissner-Gross at Harvard University and Cameron Freer, a mathematician at the Massachusetts Institute of Technology) call this a "causal entropic force." In computer simulations of configurations of disk-shaped particles moving around in particular settings, this force creates outcomes that are eerily suggestive of intelligence.

In one case, a large disk was able to use a small disk to extract a second small disk from a narrow tube, a process that looked like tool use. Freeing the disk increased the entropy of the system. In another example, two disks in separate compartments synchronized their behavior to pull a larger disk down so that they could interact with it, giving the appearance of social cooperation.

Of course, these simple interacting agents get the benefit of a glimpse into the future. Life, as a general rule, does not. So how relevant is this for biology? That's not clear, although Wissner-Gross said that he is now working to establish a practical, biologically plausible mechanism for causal entropic forces. In the meantime, he thinks that the approach could have practical spinoffs, offering a shortcut to artificial intelligence. "I predict that a faster way to achieve it will be to discover such behavior first and then work backward from the physical principles and constraints, rather than working forward from particular calculation or prediction techniques," he said. In other words, first find a system that does what you want it to do and then figure out how it does it.

Aging, too, has conventionally been seen as a trait dictated by evolution. Organisms have a lifespan that creates opportunities to reproduce, the story goes, without inhibiting the survival prospects of offspring by the parents' sticking around too long and competing for resources. That seems surely to be part of the story, but Hildegard Meyer-Ortmanns, a physicist at Jacobs University in Bremen, Germany, thinks that ultimately aging is a physical process, not a biological one, governed by the thermodynamics of information.

It's certainly not simply a matter of things wearing out. "Most of the soft material we are made of is renewed before it has the chance to age," Meyer-Ortmanns said. But this renewal process isn't perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.

Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (the so-called Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can't survive much beyond age 100.

There's a corollary to this apparent urge for energy-efficient, organized, predictive systems to appear in a fluctuating nonequilibrium environment. We ourselves are such a system, as are all our ancestors back to the first primitive cell. And nonequilibrium thermodynamics seems to be telling us that this is just what matter does under such circumstances. In other words, the appearance of life on a planet like the early Earth, imbued with energy sources such as sunlight and volcanic activity that keep things churning out of equilibrium, starts to seem not an extremely unlikely event, as many scientists have assumed, but virtually inevitable. In 2006, Eric Smith and the late Harold Morowitz at the Santa Fe Institute argued that the thermodynamics of nonequilibrium systems makes the emergence of organized, complex systems much more likely on a prebiotic Earth far from equilibrium than it would be if the raw chemical ingredients were just sitting in a "warm little pond" (as Charles Darwin put it) stewing gently.

In the decade since that argument was first made, researchers have added detail and insight to the analysis. Those qualities that Ernst Mayr thought essential to biologymeaning and intentionmay emerge as a natural consequence of statistics and thermodynamics. And those general properties may in turn lead naturally to something like life.

At the same time, astronomers have shown us just how many worlds there are (by some estimates stretching into the billions) orbiting other stars in our galaxy. Many are far from equilibrium, and at least a few are Earth-like. And the same rules are surely playing out there, too.


Read more:

How Did Life Begin? Dividing Droplets Could Hold the Answer

A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth's primordial soup.

Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it a "big achievement" that suggests "the general phenomenology of life formation is a lot easier than one might think."


The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed protocells, and how did they come alive? Proponents of the membrane-first hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life's humble beginnings, proposed that the mystery protocells might have been liquid droplets: naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin's long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of chemically active droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This active droplet behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

"If chemically active droplets can grow to a set size and divide of their own accord, then it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup," said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

The findings, reported in Nature Physics last month, paint a possible picture of life's start by explaining "how cells made daughters," said Zwicker, who is now a postdoctoral researcher at Harvard University. "This is, of course, key if you want to think about evolution."

Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is "significantly simpler than other mechanisms of protocell division that have been considered," calling it "a very promising direction."

However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is "a far cry," he noted, from the complicated, multistep process by which modern cells divide.

Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it's plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago (tiny liquid aggregates of protein and RNA in cells of the worm C. elegans), explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, "the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place," he said.

"This Nature Physics paper takes that to the next level, by revealing the features that droplets would have needed to play a role as protocells," Brangwynne added.

Droplets in Dresden

The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as P granules in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn't take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin's 1924 protocell theory. In a 2012 essay about Oparin's life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about "may still be alive and well, safe within our cells, like flies in life's evolving amber."

Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life, a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin's ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn't know either.

In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as out-of-equilibrium systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there's an energy source, molecules flow in and out of an active droplet. "In the context of early Earth, sunlight would be the driving force," Jülicher said.

Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker's simulations grew to tens or hundreds of microns across, depending on their properties: the scale of cells.
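Zwicker's published model involves coupled reaction-diffusion equations, but the size-selection argument can be caricatured in a single variable: for a spherical droplet, influx of insoluble material scales with surface area (∝ R²) while efflux scales with volume (∝ R³), so growth stalls at a fixed radius. A minimal Python sketch of that balance, with invented rate constants k_in and k_out (an illustration of the idea, not the actual equations from the paper):

```python
def steady_radius(k_in, k_out):
    """Radius where surface influx (k_in * 4*pi*R^2) balances
    volume efflux (k_out * (4/3)*pi*R^3): R = 3*k_in/k_out."""
    return 3.0 * k_in / k_out


def evolve_radius(r0, k_in, k_out, dt=1e-3, steps=200_000):
    """Euler-integrate dR/dt = k_in - (k_out/3)*R, the radius form
    of dV/dt = k_in*(surface area) - k_out*(volume) for a sphere."""
    r = r0
    for _ in range(steps):
        r += dt * (k_in - k_out * r / 3.0)
    return r
```

Starting from either a tiny or an oversized droplet, the radius relaxes to the same stable value, which is the size-selection effect described above.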

Lucy Reading-Ikkanda/Quanta Magazine

The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet's growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker's equations, "he immediately jumped on it and said, 'That looks very much like division,'" Zwicker said. "And then this whole protocell idea emerged quickly."

Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin's vision. "If you just think about droplets like Oparin did, then it's not clear how evolution could act on these droplets," Zwicker said. "For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex."

Globule Ancestor

Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

Tang's lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, which will be made in collaboration with Hyman's lab, is to try to observe centrosomes or other biological droplets dividing, and to determine if they utilize the mechanism identified in the paper by Zwicker and colleagues. "That would be a big deal," said Giomi, the Leiden biophysicist.

When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn't convinced of the effect's significance. "There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide," he said.

Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, "I can go along with that," noting that he would define protocells as the first droplets that had membranes.

The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

The luckiest part of the whole process, in Jülicher's opinion, was not that droplets turned into cells, but that the first droplet (our globule ancestor) formed to begin with. Droplets require a lot of chemical material to spontaneously arise or nucleate, and it's unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

"It's a very rare event. You have to wait a long time for it to happen," he said. "And once it happens, then the next things happen more easily, and more systematically."

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read more:

How to Build Beautiful 3-D Fractals Out of the Simplest Equations

If you came across an animal in the wild and wanted to learn more about it, there are a few things you might do: You might watch what it eats, poke it to see how it reacts, and even dissect it if you got the chance.


Mathematicians are not so different from naturalists. Rather than studying organisms, they study equations and shapes using their own techniques. They twist and stretch mathematical objects, translate them into new mathematical languages, and apply them to new problems. As they find new ways to look at familiar things, the possibilities for insight multiply.

That's the promise of a new idea from two mathematicians: Laura DeMarco, a professor at Northwestern University, and Kathryn Lindsey, a postdoctoral fellow at the University of Chicago. They begin with a plain old polynomial equation, the kind grudgingly familiar to any high school math student: f(x) = x² − 1. Instead of graphing it or finding its roots, they take the unprecedented step of transforming it into a 3-D object.

"With polynomials, everything is defined in the two-dimensional plane," Lindsey said. "There isn't a natural place a third dimension would come into it until you start thinking about these shapes Laura and I are building."

The 3-D shapes that they build look strange, with broad plains, subtle bends and a zigzag seam that hints at how the objects were formed. DeMarco and Lindsey introduce the shapes in a forthcoming paper in the Arnold Mathematical Journal, a new publication from the Institute for Mathematical Sciences at Stony Brook University. The paper presents what little is known about the objects, such as how they're constructed and the measurements of their curvature. DeMarco and Lindsey also explain what they believe is a promising new method of inquiry: Using the shapes built from polynomial equations, they hope to come to understand more about the underlying equations, which is what mathematicians really care about.

Breaking Out of Two Dimensions

In mathematics, several motivating factors can spur new research. One is the quest to solve an open problem, such as the Riemann hypothesis. Another is the desire to build mathematical tools that can be used to do something else. A third, the one behind DeMarco and Lindsey's work, is the equivalent of finding an unidentified species in the wild: One just wants to understand what it is. "These are fascinating and beautiful things that arise very naturally in our subject and should be understood!" DeMarco said by email, referring to the shapes.

Laura DeMarco, a professor at Northwestern University. Courtesy of Laura DeMarco

"It's sort of been in the air for a couple of decades, but they're the first people to try to do something with it," said Curtis McMullen, a mathematician at Harvard University who won the Fields Medal, math's highest honor, in 1998. McMullen and DeMarco started talking about these shapes in the early 2000s, while she was doing graduate work with him at Harvard. DeMarco then went off to do pioneering work applying techniques from dynamical systems to questions in number theory, for which she will receive the Satter Prize (awarded to a leading female researcher) from the American Mathematical Society on January 5.

Meanwhile, in 2010 William Thurston, the late Cornell University mathematician and Fields Medal winner, heard about the shapes from McMullen. Thurston suspected that it might be possible to take flat shapes computed from polynomials and bend them to create 3-D objects. To explore this idea, he and Lindsey, who was then a graduate student at Cornell, constructed the 3-D objects from construction paper, tape and a precision cutting device that Thurston had on hand from an earlier project. The result wouldn't have been out of place at an elementary school arts and crafts fair, and Lindsey admits she was "kind of mystified by the whole thing."

"I never understood why we were doing this, what the point was and what was going on in his mind that made him think this was really important," said Lindsey. "Then unfortunately when he died, I couldn't ask him anymore. There was this brilliant guy who suggested something and said he thought it was an important, neat thing, so it's natural to wonder, 'What is it? What's going on here?'"

In 2014 DeMarco and Lindsey decided to see if they could unwind the mathematical significance of the shapes.

A Fractal Link to Entropy

To get a 3-D shape from an ordinary polynomial takes a little doing. The first step is to run the polynomial dynamically, that is, to iterate it by feeding each output back into the polynomial as the next input. One of two things will happen: Either the values will grow infinitely in size, or they'll settle into a stable, bounded pattern. To keep track of which starting values lead to which of those two outcomes, mathematicians construct the Julia set of a polynomial. The Julia set is the boundary between starting values that go off to infinity and values that remain bounded. This boundary line, which differs for every polynomial, can be plotted on the complex plane, where it assumes all manner of highly intricate, swirling, symmetric fractal designs.
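The iterate-and-check procedure is straightforward to sketch. Below is a minimal escape-time test in Python for the article's running example, f(x) = x² − 1; the iteration cap and the escape radius of 2 are the standard conventions for quadratic polynomials, not anything specific to DeMarco and Lindsey's construction:

```python
def escapes(z0, c=-1 + 0j, max_iter=100, bound=2.0):
    """Iterate f(z) = z*z + c starting from z0.

    Returns True if the orbit leaves the disk of radius `bound`
    (for quadratics, such orbits provably run off to infinity),
    and False if it stays bounded for max_iter steps."""
    z = z0
    for _ in range(max_iter):
        if abs(z) > bound:
            return True
        z = z * z + c
    return False


# Starting values that stay bounded form the filled Julia set;
# its boundary is the Julia set itself.
inside = [z for z in (0j, 1 + 0j, 0.2 + 0.3j) if not escapes(z)]
```

For c = −1, the origin sits on the 2-cycle 0 → −1 → 0, so it never escapes, while any starting value of large modulus blows up immediately.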

Lucy Reading-Ikkanda/Quanta Magazine

If you shade the region bounded by the Julia set, you get the filled Julia set. If you use scissors and cut out the filled Julia set, you get the first piece of the surface of the eventual 3-D shape. To get the second, DeMarco and Lindsey wrote an algorithm. That algorithm analyzes features of the original polynomial, like its degree (the highest number that appears as an exponent) and its coefficients, and outputs another fractal shape that DeMarco and Lindsey call the planar cap.

"The Julia set is the base, like the southern hemisphere, and the cap is like the top half," DeMarco said. "If you glue them together you get a shape that's polyhedral."

The algorithm was Thurston's idea. When he suggested it to Lindsey in 2010, she wrote a rough version of the program. She and DeMarco improved on the algorithm in their work together and proved it "does what we think it does," Lindsey said. That is, for every filled Julia set, the algorithm generates the correct complementary piece.

The filled Julia set and the planar cap are the raw material for constructing a 3-D shape, but by themselves they don't give a sense of what the completed shape will look like. This creates a challenge. When presented with the six faces of a cube laid flat, one could intuitively know how to fold them to make the correct 3-D shape. But with a less familiar two-dimensional surface, you'd be hard-pressed to anticipate the shape of the resulting 3-D object.

"There's no general mathematical theory that tells you what the shape will be if you start with different types of polygons," Lindsey said.

Mathematicians have precise ways of defining what makes a shape a shape. One is to know its curvature. Any 3-D object without holes has a total curvature of exactly 4π; it's a fixed value in the same way any circular object has exactly 360 degrees of angle. The shape, or geometry, of a 3-D object is completely determined by the way that fixed amount of curvature is distributed, combined with information about distances between points. In a sphere, the curvature is distributed evenly over the entire surface; in a cube, it's concentrated in equal amounts at the eight evenly spaced vertices.
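The fixed total is 4π, by Descartes' angle-defect theorem: the curvature concentrated at a vertex is 2π minus the sum of the face angles meeting there, and over any convex polyhedron those defects always add up to 4π. A quick sanity check in Python (the cube and tetrahedron data are just the standard textbook examples):

```python
import math

def total_defect(vertex_angle_sums):
    """Descartes' theorem: summing the angle defect (2*pi minus
    the face angles meeting at a vertex) over every vertex of a
    convex polyhedron always gives exactly 4*pi."""
    return sum(2 * math.pi - s for s in vertex_angle_sums)


# Cube: three 90-degree face angles meet at each of 8 vertices.
cube = [3 * (math.pi / 2)] * 8
# Regular tetrahedron: three 60-degree angles at each of 4 vertices.
tetrahedron = [3 * (math.pi / 3)] * 4
```

Both come out to 4π, even though the cube spreads its curvature over eight corners while the tetrahedron concentrates more of it at each of four: same total, different distribution.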

A unique attribute of Julia sets allows DeMarco and Lindsey to know the curvature of the shapes they're building. All Julia sets have what's known as a measure of maximal entropy, or MME. The MME is a complicated concept, but there is an intuitive (if slightly incomplete) way to think about it. First, picture a two-dimensional filled Julia set on the plane. Then picture a point on the same plane but very far outside the Julia set's boundary (infinitely far, in fact). From that distant location the point is going to take a random walk across two-dimensional space, meandering until it strikes the Julia set. Wherever it first strikes the Julia set is where it comes to rest.

The MME is a way of quantifying the fact that the meandering point is more likely to strike certain parts of the Julia set than others. For example, the meandering point is more likely to strike a spike in the Julia set that juts out into the plane than it is to intersect with a crevice tucked into a region of the set. The more likely the meandering point is to hit a point on the Julia set, the higher the MME is at that point.
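This intuition can be made concrete with a toy lattice version of the random walk. To be clear, this is an illustration of hitting probabilities in the spirit of the meandering-point picture, not the actual MME computation, and the target shape, start point and parameters below are invented. Walkers released from above should hit an exposed spike far more often than the cell shielded directly beneath it:

```python
import random

def first_hit_counts(target, start=(0, 8), n_walkers=500,
                     max_steps=50_000, box=10, seed=1):
    """Release random walkers from `start` and tally which cell of
    `target` each one strikes first. Walks are clipped to a finite
    box so every walker eventually hits something (a crude stand-in
    for launching the walk 'from infinity')."""
    rng = random.Random(seed)
    counts = {cell: 0 for cell in target}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_walkers):
        x, y = start
        for _ in range(max_steps):
            dx, dy = rng.choice(moves)
            x = max(-box, min(box, x + dx))
            y = max(-box, min(box, y + dy))
            if (x, y) in target:
                counts[(x, y)] += 1
                break
    return counts


# A flat segment with one "spike" cell protruding above its center;
# the cell at (0, 0) is hidden directly beneath the spike at (0, 1).
shape = {(-2, 0), (-1, 0), (0, 0), (1, 0), (2, 0), (0, 1)}
hits = first_hit_counts(shape)
```

In the tallies, the protruding spike collects far more first hits than the cell tucked beneath it, which is exactly the juts-out-versus-crevice asymmetry the MME quantifies.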

In their paper, DeMarco and Lindsey demonstrated that the 3-D objects they build from Julia sets have a curvature distribution thats exactly proportional to the MME. That is, if theres a 25 percent chance the meandering point will hit a particular place on the Julia set first, then 25 percent of the curvature should also be concentrated at that point when the Julia set is joined with the planar cap and folded into a 3-D shape.

"If it was really easy for the meandering point to hit some area on our Julia set, we'd want to have a lot of curvature at the corresponding point on the 3-D object," Lindsey said. "And if it was harder to hit some area on our Julia set, we'd want the corresponding area in the 3-D object to be kind of flat."

This is useful information, but it doesn't get you as far as you'd think. If given a two-dimensional polygon and told exactly how its curvature should be distributed, there's still no mathematical way to identify exactly where you need to fold the polygon to end up with the right 3-D shape. Because of this, there's no way to completely anticipate what that 3-D shape will look like.

"We know how sharp and pointy the shape has to be, in an abstract, theoretical sense, and we know how far apart the crinkly regions are, again in an abstract, theoretical sense, but we have no idea how to visualize it in three dimensions," DeMarco explained in an email.

She and Lindsey have evidence of the existence of a 3-D shape, and evidence of some of that shape's properties, but no ability yet to see the shape. They are in a position similar to that of astronomers who detect an unexplained stellar wobble that hints at the existence of an exoplanet: The astronomers know there has to be something else out there and they can estimate its mass. Yet the object itself remains just out of view.

A Folding Strategy

Thus far, DeMarco and Lindsey have established basic details of the 3-D shape: They know that one 3-D object exists for every polynomial (by way of its Julia set), and they know the object has a curvature exactly given by the measure of maximal entropy. Everything else has yet to be figured out.

In particular, they'd like to develop a mathematical understanding of the "bending laminations," or lines along which a flat surface can be folded to create a 3-D object. The question occurred early on to Thurston, too, who wrote to McMullen in 2010, "I wonder how hard it is to compute or characterize the pair of bending laminations, for the inside and the outside, and what they might tell us about the geometry of the Julia set."

Kathryn Lindsey, a mathematician at the University of Chicago. Courtesy of Kathryn Lindsey

In this, DeMarco and Lindsey's work is heavily influenced by the mid-20th-century mathematician Aleksandr Aleksandrov. Aleksandrov established that there is only one way of folding a given polygon to get a convex 3-D object. He lamented that it seemed impossible to mathematically calculate the correct folding lines. Today, the best strategy is often to make a best guess about where to fold the polygon, and then to get out scissors and tape to see if the estimate is right.

"Kathryn and I spent hours cutting out examples and gluing them ourselves," DeMarco said.

DeMarco and Lindsey are currently trying to describe the folding lines on their particular class of 3-D objects, and they think they have a promising strategy. "Our working conjecture is that the folding lines, the bending laminations, can be completely described in terms of certain dynamical properties," DeMarco said. Put another way, they hope that by iterating the underlying polynomial in the right way, they'll be able to identify the set of points along which the folding line occurs.

From there, possibilities for exploration are numerous. If you know the folding lines associated to the polynomial f(x) = x² − 1, you might then ask what happens to the folding lines if you change the coefficients and consider f(x) = x² − 1.1. Do the folding lines of the two polynomials differ a little, a lot or not at all?

"Certain polynomials might have similar bending laminations, and that would tell us all these polynomials have something in common, even if on the surface they don't look like they have anything in common," Lindsey said.

It's a bit early to think about all of this, however. DeMarco and Lindsey have found a systematic way to think about polynomials in 3-D terms, but whether that perspective will answer important questions about those polynomials is unclear.

"I would even characterize it as being sort of playful at this stage," McMullen said, adding, "In a way that's how some of the best mathematical research proceeds: You don't know what something is going to be good for, but it seems to be a feature of the mathematical landscape."


Read more:

Deep Within a Mountain, Physicists Race to Unearth Dark Matter

In a lab buried under the Apennine Mountains of Italy, Elena Aprile, a professor of physics at Columbia University, is racing to unearth what would be one of the biggest discoveries in physics.

She has not yet succeeded, even after more than a decade of work. Then again, nobody else has, either.


Aprile leads the XENON dark matter experiment, one of several competing efforts to detect a particle responsible for the astrophysical peculiarities that are collectively attributed to dark matter. These include stars that rotate around the cores of galaxies as if pulled by invisible mass, excessive warping of space around large galaxy clusters, and the leopard-print pattern of hot and cold spots in the early universe.

For decades, the most popular explanation for such phenomena was that dark matter is made of as-yet undiscovered weakly interacting massive particles, known as WIMPs. These WIMPs would only rarely leave an imprint on the more familiar everyday matter.

That paradigm has recently been under fire. The Large Hadron Collider located at the CERN laboratory near Geneva has not yet found anything to support the existence of WIMPs. Other particles, less studied, could also do the trick. Dark matters astrophysical effects might even be caused by modifications of gravity, with no need for the missing stuff at all.

The most stringent WIMP searches have been done using Aprile's strategy: Pour plenty of liquid xenon (a noble element like helium or neon, but heavier) into a vat. Shield it from cosmic rays, which would inundate the detector with spurious signals. Then wait for a passing WIMP to bang into a xenon atom's nucleus. Once it does, capture the tiny flash of light that should result.

Underground, I guess, there is no such major thing holding you from operating your detector. But there are still, in the back of your mind, thoughts about the seismic resilience of what you designed and what you built.

In a 2011 interview with The New York Times about women at the top of their scientific fields, you described the life of a scientist as "tough, competitive and constantly exposed." You suggested that if one of your daughters aspired to be a scientist, you would want her to be "made of titanium." What did you mean by that?

Maybe I shouldn't demand this of every woman in science or physics. It's true that it might not be fair to ask that everyone is made of titanium. But we must face it: In building or running this new experiment, there is going to be a lot of pressure sometimes. It's on every student, every postdoc, every one of us: Try to go fast and get the results, and work day and night if you want to get there. You can go on medical leave or disability, but the WIMP is not waiting for you. Somebody else is going to get it, right? This is what I mean when I say you have to be strong.

Going after something like this, it's not a 9-to-5 job. I wouldn't discourage anyone at the beginning from trying. But then once you start, you cannot just pretend that this is just a normal job. This is not a normal job. It's not a job. It's a quest.

Aprile in her lab at Columbia's Nevis Laboratories. Ben Sklar for Quanta Magazine

In another interview, with the Italian newspaper La Repubblica, you discussed having a brilliant but demanding mentor in Carlo Rubbia, who won the Nobel Prize for Physics in 1984. What was that relationship like?

It made me of titanium, probably. You have to imagine this 23-year-old young woman from Italy ending up at CERN as a summer student in the group of this guy. Even today, I would still be scared if I were that person. Carlo exudes confidence. I was just intimidated.

He would keep pushing you beyond the state that is even possible: It's all about the science; it's all about the goal. How the hell you get there, I don't care: If you're not sleeping, if you're not eating, if you don't have time to sleep with your husband for a month, who cares? You have a baby to feed? Find some way. Since I survived that period I knew that I was made a bit of titanium, let's put it that way. I did learn to contain my tears. This is a person you don't want to show weakness to.

Now, 30 years after going off to start your own lab, how does the experience of having worked with him inform the scientist you are today, the leader of XENON?

For a long time, he was still involved in his liquid-argon effort. He would still tell me, "What are you doing with xenon? You have to turn to argon." It has taken me many years to get over this Rubbia fear, for many reasons, probably, even if I don't admit it. But now I feel very strong. I can face him and say: "Hey, your liquid-argon detector isn't working. Mine is working."

I decided I want to be a more practical person. Most guys are naive. All these guys are naive. A lot of things he did and does are exceptional, yes, but building a successful experiment is not something you do alone. This is a team effort, and you must be able to work well with your team. Alone, I wouldn't get anywhere. Everybody counts. It doesn't matter that we build a beautiful machine: I don't believe in machines. We are going to get this damn thing out of it. We're going to get the most out of the thing that we built with our brains, with the brains of our students and postdocs who really look at this data. We want to respect each one of them.

