The ‘imminent mini ice age’ myth is back, and it’s still wrong | Dana Nuccitelli

Dana Nuccitelli: We can't accurately predict solar activity, and a quiet solar cycle would have a small impact on Earth's climate anyway

Roughly every two years we're treated to headlines repeating the myth that Earth is headed for an imminent ‘mini ice age’. It happened in 2013, 2015, and again just recently at the tail end of 2017.

This time around, the myth appears to have been sparked by a Sky News interview with Northumbria University mathematics professor Valentina Zharkova. The story was quickly echoed by the Daily Mail, International Business Times, Sputnik News, Metro, Tru News, and others. Zharkova was also behind the mini ice age stories in 2015, based on her research predicting that the sun will soon enter a quiet phase.

The most important takeaway is that the scientific research is clear: were one to occur, a grand solar minimum would temporarily reduce global temperatures by less than 0.3°C, while humans are already causing 0.2°C of warming per decade.

The global mean temperature difference is shown for the period 1900 to 2100 for the IPCC A2 emissions scenario. The red line shows predicted temperature change for the current level of solar activity, the blue line shows predicted temperature change for solar activity at the much lower level of the Maunder Minimum, and the black line shows observed temperatures through 2010. Illustration: Adapted from Feulner & Rahmstorf (2010) in Geophysical Research Letters.

So the sun could offset at most 15 years' worth of human-caused global warming, and once its quiet phase ended, the sun would help accelerate global warming once again.
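The arithmetic behind that 15-year figure is easy to verify; a minimal sketch, using the article's rounded numbers:

```python
# Numbers from the article: at most ~0.3°C of cooling from a grand solar
# minimum, versus ~0.2°C of human-caused warming per decade.
solar_min_cooling_c = 0.3
warming_per_decade_c = 0.2

years_offset = solar_min_cooling_c / warming_per_decade_c * 10
print(f"a grand solar minimum offsets about {years_offset:.0f} years of warming")
```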

The ‘mini ice age’ misnomer

The myth ultimately stems from a period climate scientists have coined the ‘Little Ice Age’ (LIA). This was a modestly cool period running from about the year 1300 to 1850. It was particularly cold in the UK, where the River Thames sometimes froze over and frost fairs were held.

A team led by University of Reading physicist and solar expert Mike Lockwood wrote a paper reviewing the science behind frost fairs, sunspots, and the LIA. It included the figure below showing northern hemisphere temperatures along with sunspot number and the level of volcanic particles in the atmosphere over the past millennium:

Sunspot number, northern hemisphere temperatures, and volcanic aerosol optical depth (AOD) around the time of the Little Ice Age. Illustration: Lockwood et al. (2017), News & Reviews in Astronomy & Geophysics

During full-blown ice ages, temperatures have generally been 4–8°C colder than in modern times. As this figure shows, during the LIA temperatures were at most only about 0.5°C cooler than in the early 20th century. Thus, Lockwood calls the ‘Little Ice Age’ a total misnomer. As the authors put it:

Compared to the changes in the proper ice ages, the so-called Little Ice Age (LIA) is a very short-lived and puny climate and social perturbation.

For comparison, temperatures have risen by a full 1°C over the past 120 years, and 0.7°C over just the past 40 years.

The minimal influence of solar minima on the climate

The Maunder Minimum was a period of quiet solar activity between about 1645 and 1715. It's often referred to interchangeably with the Little Ice Age, but the latter lasted centuries longer. In fact, three separate solar minima occurred during the LIA, which also included periods of relatively higher solar activity. Other factors like volcanic eruptions and human activities also contributed to the cool temperatures. Indeed, a 2017 paper led by the University of Reading's Mathew Owens concluded:

Climate model simulations suggest multiple factors, particularly volcanic activity, were crucial for causing the cooler temperatures in the northern hemisphere during the LIA. A reduction in total solar irradiance likely contributed to the LIA at a level comparable to changing land use [by humans].

Simulated northern hemisphere temperature changes resulting from individual climate factors, as compared to the observed changes in the top panel. The bottom panel shows a simulation with no changes to climatological factors, to illustrate the level of natural variability in the climate. Illustration: Owens et al. (2017), Journal of Space Weather and Space Climate

Several studies have investigated the potential climate impact of a future grand solar minimum. In every case, they have concluded that such a quiet solar period would cause less than 0.3°C of cooling, which, as previously noted, would temporarily offset no more than a decade and a half's worth of human-caused global warming. These model-based estimates are consistent with the amount of cooling that occurred during the solar minima in the LIA.

Is another grand solar minimum imminent?

Although it would have a relatively small impact on the climate, it's still an interesting question whether we're headed for another quiet solar period. Zharkova thinks so. Her team created a model that tries to predict solar activity, and it suggests another solar minimum will occur from 2020 to 2055. However, other solar scientists have criticized the model as too simple, built on just 35 years of data, and unable to accurately reproduce past solar activity.

Ilya Usoskin, head of the Oulu Cosmic Ray Station and vice-director of the ReSoLVE Center of Excellence in Research, published a critique of Zharkova's solar model making those points. Most importantly, the model fails to reproduce past known solar activity because Zharkova's team treats the sun as a simple, predictable system, like a pendulum. In reality, the sun's behavior is more random and unpredictable (in scientific terms, stochastic):

For example, a perfect pendulum: if you saw a few cycles of the pendulum, you can predict its behavior. However, solar activity is known to be a non-stationary process, which principally cannot be predicted (the prediction horizon for solar activity is known to be 10-15 years). Deterministic prediction cannot be made because of the essential stochastic component.

Just imagine a very turbulent flow of water in a river rapid, and you throw a small wooden stick into water and trace it. Then you do it second time and third time … each time the stick will end up in very different positions after the same time period. Its movement is unpredictable because of the turbulent stochastic component. This is exactly the situation with solar activity.
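Usoskin's pendulum-versus-river analogy can be illustrated with a toy simulation. This is not a solar model; the cycle length, noise level, and seeds below are arbitrary choices. The point is that a deterministic oscillator is fully reproducible, while adding a stochastic drift makes two identically prepared runs diverge.

```python
import math
import random

def deterministic(t):
    """A 'pendulum': the same input always gives the same output."""
    return math.cos(2 * math.pi * t / 11)  # an 11-year cycle, for flavor

def stochastic_run(steps, sigma=0.3, seed=None):
    """A 'stick in a turbulent river': a random walk on top of the cycle."""
    rng = random.Random(seed)
    drift, path = 0.0, []
    for t in range(steps):
        drift += rng.gauss(0.0, sigma)  # unpredictable kicks accumulate
        path.append(deterministic(t) + drift)
    return path

# Two identically prepared runs, two different futures:
run_a = stochastic_run(50, seed=1)
run_b = stochastic_run(50, seed=2)
print(f"two identical starts end {abs(run_a[-1] - run_b[-1]):.2f} apart")
```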

Lockwood agrees that we don't yet have a proven predictive theory of solar behavior. He has published research examining the range of possible solar evolutions based on past periods when the Sun was in a similar state to today's, but as he puts it, that is "the best that I think we can do at the present time!"

Solar physicist Paul Charbonneau of the University of Montreal also concurred with Usoskin. He told me that while scientists are working to simulate solar activity, including with simplified models like Zharkova's,

on the standards of contemporary dynamo models theirs is extremely simple in fact borderlining simplistic … To extrapolate such a model outside its calibration window, you need an extra, very strong hypothesis: that the physical systems underlying the magnetic field generation retain their coherence (Phase, amplitude, etc.). As my colleague Ilya Usoskin has already explained, this is very unlikely to be the case in the case of the solar activity cycle.

Why won't this myth die?

Zharkova believes her solar model is correct, but at best it can only try to predict when the next quiet solar period will occur. Its influence on Earth's climate is outside her expertise, and the peer-reviewed research is clear that the impact would be minimal.

Zharkova disagrees. When I contacted her, she told me that she believes a grand solar minimum would have a much bigger cooling effect. However, she also referenced long-debunked myths about global warming on Mars and Jupiter, and made a comment about the ‘preachers of global warming’. She's clearly passionate about her research, and has the credibility that comes with publishing peer-reviewed studies on solar activity. Perhaps these factors motivate journalists to write these frequent ‘mini ice age’ stories.

But Zharkova's climate science beliefs are irrelevant. While she has created a model predicting an imminent period of quiet solar activity, other scientists have identified serious flaws in that model, and in any case, research has shown that another solar minimum would have only a small and temporary impact on Earth's climate.

Read more:

Mathematicians Second-Guess Centuries-Old Fluid Equations

The Navier-Stokes equations capture in a few succinct terms one of the most ubiquitous features of the physical world: the flow of fluids. The equations, which date to the 1820s, are today used to model everything from ocean currents to turbulence in the wake of an airplane to the flow of blood in the heart.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

While physicists consider the equations to be as reliable as a hammer, mathematicians eye them warily. To a mathematician, it means little that the equations appear to work. They want proof that the equations are unfailing: that no matter the fluid, and no matter how far into the future you forecast its flow, the mathematics of the equations will still hold. Such a guarantee has proved elusive. The first person (or team) to prove that the Navier-Stokes equations will always work—or to provide an example where they don’t—stands to win one of seven Millennium Prize Problems endowed by the Clay Mathematics Institute, along with the associated $1 million reward.

Mathematicians have developed many ways of trying to solve the problem. New work posted online in September raises serious questions about whether one of the main approaches pursued over the years will succeed. The paper, by Tristan Buckmaster and Vlad Vicol of Princeton University, is the first result to find that under certain assumptions, the Navier-Stokes equations provide inconsistent descriptions of the physical world.

“We’re figuring out some of the inherent issues with these equations and why it’s quite possible [that] people have to rethink them,” said Buckmaster.

Buckmaster and Vicol’s work shows that when you allow solutions to the Navier-Stokes equations to be very rough (like a sketch rather than a photograph), the equations start to output nonsense: They say that the same fluid, from the same starting conditions, could end up in two (or more) very different states. It could flow one way or a completely different way. If that were the case, then the equations don’t reliably reflect the physical world they were designed to describe.

Blowing Up the Equations

To see how the equations can break down, first imagine the flow of an ocean current. Within it there may be a multitude of crosscurrents, with some parts moving in one direction at one speed and other areas moving in other directions at other speeds. These crosscurrents interact with one another in a continually evolving interplay of friction and water pressure that determines how the fluid flows.

Mathematicians model that interplay using a map that tells you the direction and magnitude of the current at every position in the fluid. This map, which is called a vector field, is a snapshot of the internal dynamics of a fluid. The Navier-Stokes equations take that snapshot and play it forward, telling you exactly what the vector field will look like at every subsequent moment in time.
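As a cartoon of "taking the snapshot and playing it forward", here is a sketch that time-steps the 1-D inviscid Burgers equation, a drastically simplified fluid-like equation with no pressure or viscosity, using a first-order upwind scheme. The grid size, time step, and initial velocity bump are arbitrary illustrative choices, not anything from the article.

```python
import math

def step(u, dx, dt):
    """Advance the velocity field one time step (upwind; assumes u >= 0).

    u[i - 1] with i == 0 wraps around to u[-1], giving periodic boundaries.
    """
    return [u[i] - dt / dx * u[i] * (u[i] - u[i - 1]) for i in range(len(u))]

# Initial snapshot: a smooth bump of rightward-moving fluid.
n = 50
dx, dt = 1.0 / n, 0.005
u = [0.5 + 0.4 * math.sin(2 * math.pi * i * dx) for i in range(n)]

for _ in range(40):
    u = step(u, dx, dt)  # each call produces the field at the next moment

print(f"velocity range after 40 steps: {min(u):.3f} to {max(u):.3f}")
```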

The equations work. They describe fluid flows as reliably as Newton’s equations predict the future positions of the planets; physicists employ them all the time, and they’ve consistently matched experimental results. Mathematicians, however, want more than anecdotal confirmation—they want proof that the equations are inviolate, that no matter what vector field you start with, and no matter how far into the future you play it, the equations always give you a unique new vector field.

This is the subject of the Millennium Prize problem, which asks whether the Navier-Stokes equations have solutions (where solutions are in essence a vector field) for all starting points for all moments in time. These solutions have to provide the exact direction and magnitude of the current at every point in the fluid. Solutions that provide information at such infinitely fine resolution are called “smooth” solutions. With a smooth solution, every point in the field has an associated vector that allows you to travel “smoothly” over the field without ever getting stuck at a point that has no vector—a point from which you don’t know where to move next.

Smooth solutions are a complete representation of the physical world, but mathematically speaking, they may not always exist. Mathematicians who work on equations like Navier-Stokes worry about this kind of scenario: You’re running the Navier-Stokes equations and observing how a vector field changes. After some finite amount of time, the equations tell you a particle in the fluid is moving infinitely fast. That would be a problem. The equations involve measuring changes in properties like pressure, friction, and velocity in the fluid — in the jargon, they take “derivatives” of these quantities — but you can’t take the derivative of an infinite value any more than you can divide by zero. So if the equations produce an infinite value, you can say they’ve broken down, or “blown up.” They can no longer describe subsequent states of your fluid.
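A standard toy example of finite-time blowup, far simpler than Navier-Stokes, is the ODE dy/dt = y², whose exact solution from y(0) = 1 is y(t) = 1/(1 − t), which becomes infinite at t = 1. A naive numerical integration shows the value exploding as the blowup time approaches:

```python
def euler_y_squared(y0, t_end, dt=1e-4):
    """Forward-Euler integration of dy/dt = y**2 from y(0) = y0."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * y * y  # the growth rate feeds on the value itself
        t += dt
    return y

print(euler_y_squared(1.0, 0.5))   # near the exact value 1/(1 - 0.5) = 2
print(euler_y_squared(1.0, 0.99))  # approaching t = 1, the value explodes
```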

Lucy Reading-Ikkanda/Quanta Magazine

Blowup is also a strong hint that your equations are missing something about the physical world they’re supposed to describe. “Maybe the equation is not capturing all the effects of the real fluid because in a real fluid we don’t expect” particles to ever start moving infinitely fast, said Buckmaster.

Solving the Millennium Prize problem involves either showing that blowup never happens for the Navier-Stokes equations or identifying the circumstances under which it does. One strategy mathematicians have pursued to do that is to first relax just how descriptive they require solutions to the equations to be.

From Weak to Smooth

When mathematicians study equations like Navier-Stokes, they sometimes start by broadening their definition of what counts as a solution. Smooth solutions require maximal information — in the case of Navier-Stokes, they require that you have a vector at every point in the vector field associated with the fluid. But what if you slackened your requirements and said that you only needed to be able to compute a vector for some points or only needed to be able to approximate vectors? These kinds of solutions are called “weak” solutions. They allow mathematicians to start feeling out the behavior of an equation without having to do all the work of finding smooth solutions (which may be impossible to do in practice).

Tristan Buckmaster, a mathematician at Princeton University, says of the Navier-Stokes equations “it’s possible that people will have to rethink them.”
Princeton University

“From a certain point of view, weak solutions are even easier to describe than actual solutions because you have to know much less,” said Camillo De Lellis, coauthor with László Székelyhidi of several important papers that laid the groundwork for Buckmaster and Vicol’s work.

Weak solutions come in gradations of weakness. If you think of a smooth solution as a mathematical image of a fluid down to infinitely fine resolution, weak solutions are like the 32-bit, or 16-bit, or 8-bit version of that picture (depending on how weak you allow them to be).

In 1934 the French mathematician Jean Leray defined an important class of weak solutions. Rather than working with exact vectors, “Leray solutions” take the average value of vectors in small neighborhoods of the vector field. Leray proved that it’s always possible to solve the Navier-Stokes equations when you allow your solutions to take this particular form. In other words, Leray solutions never blow up.
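The averaging idea can be sketched in a few lines. This is only a 1-D moving average standing in for the mollification used in the real mathematics, but it shows how replacing exact values with neighborhood averages tames rough features:

```python
def local_average(field, radius):
    """Replace each sample by the mean over its neighborhood."""
    out = []
    for i in range(len(field)):
        lo, hi = max(0, i - radius), min(len(field), i + radius + 1)
        window = field[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A 'rough' field: flat except for one sharp spike.
rough = [0.0] * 10 + [100.0] + [0.0] * 10
smooth = local_average(rough, radius=3)
print(max(rough), round(max(smooth), 2))  # 100.0 vs roughly 14.29
```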

Leray’s achievement established a new approach to the Navier-Stokes problem: Start with Leray solutions, which you know always exist, and see if you can convert them into smooth solutions, which you want to prove always exist. It’s a process akin to starting with a crude picture and seeing if you can gradually dial up the resolution to get a perfect image of something real.

“One possible strategy is to show these weak Leray solutions are smooth, and if you show they’re smooth, you’ve solved the original Millennium Prize problem,” said Buckmaster.

Vlad Vicol, a mathematician at Princeton, is half of a team that uncovered problems in an approach to validating the Navier-Stokes equations.
Courtesy of S. Vicol

There’s one more catch. Solutions to the Navier-Stokes equations correspond to real physical events, and physical events happen in just one way. Given that, you’d like your equations to have only one set of unique solutions. If the equations give you multiple possible solutions, they’ve failed.

Because of this, mathematicians will be able to use Leray solutions to solve the Millennium Prize problem only if Leray solutions are unique. Nonunique Leray solutions would mean that, according to the rules of Navier-Stokes, the exact same fluid from the exact same starting conditions could end up in two distinct physical states, which makes no physical sense and implies that the equations aren’t really describing what they’re supposed to describe.

Buckmaster and Vicol’s new result is the first to suggest that, for certain definitions of weak solutions, that might be the case.

Many Worlds

In their new paper, Buckmaster and Vicol consider solutions that are even weaker than Leray solutions—solutions that involve the same averaging principle as Leray solutions but also relax one additional requirement (known as the “energy inequality”). They use a method called “convex integration,” which has its origins in work in geometry by the mathematician John Nash and was imported more recently into the study of fluids by De Lellis and Székelyhidi.

Using this approach, Buckmaster and Vicol prove that these very weak solutions to the Navier-Stokes equations are nonunique. They demonstrate, for example, that if you start with a completely calm fluid, like a glass of water sitting still by your bedside, two scenarios are possible. The first scenario is the obvious one: The water starts still and remains still forever. The second is fantastical but mathematically permissible: The water starts still, erupts in the middle of the night, then returns to stillness.

“This proves nonuniqueness because from zero initial data you can construct at least two objects,” said Vicol.

Buckmaster and Vicol prove the existence of many nonunique weak solutions (not just the two described above) to the Navier-Stokes equations. The significance of this remains to be seen. At a certain point, weak solutions might become so weak that they stop really bearing on the smoother solutions they’re meant to imitate. If that’s the case, then Buckmaster and Vicol’s result might not lead far.

“Their result is certainly a warning, but you could argue it’s a warning for the weakest notion of weak solutions. There are many layers [of stronger solutions] on which you could still hope for much better behavior” in the Navier-Stokes equations, said De Lellis.

Buckmaster and Vicol are also thinking in terms of layers, and they have their sights set on Leray solutions—proving that those, too, allow for a multitrack physics in which the same fluid from the same position can take on more than one future form.

“Tristan and I think Leray solutions are not unique. We don’t have that yet, but our work is laying the foundation for how you’d attack the problem,” said Vicol.


Read more:

Earliest Black Hole Gives Rare Glimpse of Ancient Universe

Astronomers have at least two gnawing questions about the first billion years of the universe, an era steeped in literal fog and figurative mystery. They want to know what burned the fog away: stars, supermassive black holes, or both in tandem? And how did those behemoth black holes grow so big in so little time?


Now the discovery of a supermassive black hole smack in the middle of this period is helping astronomers resolve both questions. “It’s a dream come true that all of these data are coming along,” said Avi Loeb, the chair of the astronomy department at Harvard University.

The black hole, announced Wednesday in the journal Nature, is the most distant ever found. It dates back to 690 million years after the Big Bang. Analysis of this object reveals that reionization, the process that defogged the universe like a hair dryer on a steamy bathroom mirror, was about half complete at that time. The researchers also show that the black hole already weighed a hard-to-explain 780 million times the mass of the sun.

A team led by Eduardo Bañados, an astronomer at the Carnegie Institution for Science in Pasadena, found the new black hole by searching through old data for objects with the right color to be ultradistant quasars—the visible signatures of supermassive black holes swallowing gas. The team went through a preliminary list of candidates, observing each in turn with a powerful telescope at Las Campanas Observatory in Chile. On March 9, Bañados observed a faint dot in the southern sky for just 10 minutes. A glance at the raw, unprocessed data confirmed it was a quasar—not a nearer object masquerading as one—and that it was perhaps the oldest ever found. “That night I couldn’t even sleep,” he said.

Eduardo Bañados at the Las Campanas Observatory in Chile, where the new quasar was discovered.
Courtesy of Eduardo Bañados

The new black hole’s mass, calculated after more observations, adds to an existing problem. Black holes grow when cosmic matter falls into them. But this process generates light and heat. At some point, the radiation released by material as it falls into the black hole carries out so much momentum that it blocks new gas from falling in and disrupts the flow. This tug-of-war creates an effective speed limit for black hole growth called the Eddington rate. If this black hole began as a star-size object and grew as fast as theoretically possible, it couldn’t have reached its estimated mass in time.

Other quasars share this kind of precocious heaviness, too. The second-farthest one known, reported on in 2011, tipped the scales at an estimated 2 billion solar masses after 770 million years of cosmic time.

These objects are too young to be so massive. “They’re rare, but they’re very much there, and we need to figure out how they form,” said Priyamvada Natarajan, an astrophysicist at Yale University who was not part of the research team. Theorists have spent years learning how to bulk up a black hole in computer models, she said. Recent work suggests that these black holes could have gone through episodic growth spurts during which they devoured gas well over the Eddington rate.

Bañados and colleagues explored another possibility: If you start at the new black hole’s current mass and rewind the tape, sucking away matter at the Eddington rate until you approach the Big Bang, you see it must have initially formed as an object heavier than 1,000 times the mass of the sun. In this approach, collapsing clouds in the early universe gave birth to overgrown baby black holes that weighed thousands or tens of thousands of solar masses. Yet this scenario requires exceptional conditions that would have allowed gas clouds to condense all together into a single object instead of splintering into many stars, as is typically the case.
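The growth argument can be checked on the back of an envelope. The sketch below assumes an e-folding ("Salpeter") time of roughly 45 million years for Eddington-limited accretion at a standard 10% radiative efficiency, and a seed that forms 100 million years after the Big Bang; both numbers are illustrative assumptions, not values from the paper.

```python
import math

SALPETER_MYR = 45.0  # assumed e-folding time for Eddington-limited growth

def eddington_growth(seed_mass_msun, time_myr):
    """Mass (solar masses) after Eddington-limited growth for time_myr."""
    return seed_mass_msun * math.exp(time_myr / SALPETER_MYR)

# Forward: a 100-solar-mass seed forming ~100 Myr after the Big Bang has
# ~590 Myr to grow before the quasar is observed at 690 Myr.
final = eddington_growth(100, 590)
print(f"final mass ~ {final:.1e} solar masses")  # well short of 7.8e8

# Rewind: the seed needed to reach 7.8e8 solar masses in that time.
required_seed = 7.8e8 / math.exp(590 / SALPETER_MYR)
print(f"required seed ~ {required_seed:.0f} solar masses")
```

With these assumed numbers the required seed comes out above 1,000 solar masses, consistent with the heavy-seed scenario the article describes.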

Cosmic Dark Ages

Even earlier in the early universe, before any stars or black holes existed, the chaotic scramble of naked protons and electrons came together to make hydrogen atoms. These neutral atoms then absorbed the bright ultraviolet light coming from the first stars. After hundreds of millions of years, young stars or quasars emitted enough light to strip the electrons back off these atoms, dissipating the cosmic fog like mist at dawn.

Lucy Reading-Ikkanda/Quanta Magazine

Astronomers have known that reionization was largely complete by around a billion years after the Big Bang. At that time, only traces of neutral hydrogen remained. But the gas around the newly discovered quasar is about half neutral, half ionized, which indicates that, at least in this part of the universe, reionization was only half finished. “This is super interesting, to really map the epoch of reionization,” said Volker Bromm, an astrophysicist at the University of Texas.

When the light sources that powered reionization first switched on, they must have carved out the opaque cosmos like Swiss cheese. But what these sources were, when it happened, and how patchy or homogeneous the process was are all debated. The new quasar shows that reionization took place relatively late. That scenario squares with what the known population of early galaxies and their stars could have done, without requiring astronomers to hunt for even earlier sources to accomplish it quicker, said study coauthor Bram Venemans of the Max Planck Institute for Astronomy in Heidelberg.

More data points may be on the way. For radio astronomers, who are gearing up to search for emissions from the neutral hydrogen itself, this discovery shows that they are looking in the right time period. “The good news is that there will be neutral hydrogen for them to see,” said Loeb. “We were not sure about that.”

The team also hopes to identify more quasars that date back to the same time period but in different parts of the early universe. Bañados believes that there are between 20 and 100 such very distant, very bright objects across the entire sky. The current discovery comes from his team’s searches in the southern sky; next year, they plan to begin searching in the northern sky as well.

“Let’s hope that pans out,” said Bromm. For years, he said, the baton has been handed off between different classes of objects that seem to give the best glimpses at early cosmic time, with recent attention often going to faraway galaxies or fleeting gamma-ray bursts. “People had almost given up on quasars,” he said.


Read more:

At the Breakthrough Prizes, Silicon Valley Puts Scientists in the Spotlight

Every year in December, a makeshift hangar at the NASA Ames Research Center pops up for one night, transforming the austere airfield into a glitzy, paparazzi’d, black velvet-roped Nerd Prom. At the Breakthrough Prizes—where on December 3, a total of $22 million was handed out to pioneers in math, physics, and the life sciences—researchers traded lab coats and latex gloves for floor-length gowns and bow-tied tuxedos. Outside it was all barbed wire and cold steel, but on the red carpet, stars and scientists alike sweated under the bright white lights and flash of cameras.

“My lab is going to be totally shocked to see me like this,” said Joanne Chory, a plant geneticist at the Salk Institute in San Diego and the only female awardee at Sunday’s prize ceremony, as she sparkled in a pink sequined shift with matching metal glasses frames. The winners, all 12 of them, had been under strict instructions not to tell any of their colleagues before hopping on planes and flying in for the event. But as the clock struck 4:30pm Pacific Time, and news began to get out, the emails started flooding in. David Spergel, a theoretical physicist at Princeton, was one of five members of the universe-cataloguing WMAP team to win the prize in physics. “There are five of us here being recognized, but 20 more on our team who just found out, they’re absolutely thrilled.”

The Nobels may be more prestigious than the Breakthroughs, but they come with a lot less money (about $2 million less, per prize). Alfred Nobel, whose fortune in the dynamite industry financed the namesake prize, hoped it might atone for his explosive contributions to science. But that isn’t the only thing that has embroiled the award in controversy from the very start. His will instructed that each prize could be awarded to only one person, only for discoveries made the preceding year, and oh, yeah, no mathematicians. While the committee tasked with carrying out his dying wishes has relaxed some of the rules over the years, the underlying framework still upholds the absurd notion that scientific advancement arrives on the back of lone geniuses.

The Breakthrough Prize was supposed to fix all that—with a spirit of inclusivity, optimism, and shiny Silicon Valley cash. Much of that prize money comes from Yuri Milner, the Russian billionaire tech investor who set up and financed the first award before convincing execs at Facebook, Google, 23andMe, and Alibaba to chip in more. (Since 2012, they’ve together awarded more than 70 $3 million prizes to research standouts.) But when the Paradise Papers were made public in early November, they revealed that behind Milner’s investments in Facebook and Twitter were hundreds of millions of dollars traced back to the Kremlin.

The Valley’s other oligarchs—Mark Zuckerberg, Sergey Brin, Sundar Pichai, Jack Dorsey—have also come under fire for their platforms’ complicity in spreading Russian misinformation during the 2016 presidential campaign. In October, the tech titans took a bipartisan beating on Capitol Hill, where members of Congress excoriated Facebook, Twitter, and Google for enabling Russian attempts to divide the American electorate and sow doubt in the democratic process.

The ensuing tech backlash is hitting hardest in Washington, where talk of regulation and anti-trust lawsuits has ticked up in recent months. In the nation’s capital, the corporate leviathans once seen as beacons of new American enterprise are increasingly portrayed as sinister centers of power, too big to be accountable. These revelations and transformations can’t help but change the perception of the wealth backing the various Breakthroughs. Perhaps anticipating this line of questioning, the event’s tech royalty were noticeably silent this year. Only Dick Costolo, previously of Twitter, braved the media corral, saying only that he was happy to be at an event that “puts scientists front and center.” Brin declined questions, as did Milner, who barely cracked a close-mouthed smile for the cameras. Mark Zuckerberg and Priscilla Chan were no-shows, though Zuck did send in a grey-hoodied video greeting that played during the award ceremony. Cori Bargmann was present, the lone in-person representative from the couple’s philanthropic organization, CZI. This marked a striking contrast to last year, said one of the male reporters in the pool, who had found Milner an entertaining interview in 2016. “If I had known it was going to be like this, I don’t think I would have come,” he said.

With Silicon Valley’s luminaries sticking to the sidelines, it was perhaps Gavin Newsom, California’s Lieutenant Governor, who captured the moment best: “We are celebrating tonight everything that Trump’s Washington is not: facts, science, innovation, entrepreneurialism,” he said. “It’s important that we show here in California that we are committed to investing in that.”

And at least for the winning scientists, the award has not yet been tainted by the backlash or the current political climate. Chory says she didn’t think twice about accepting. She’s planning to give most of the money to her kids, so that they can pay back student loan debts and buy houses. But at least a sizeable chunk will go toward turning her research into a global reality. Despite her daily battle with Parkinson’s, Chory has spent the last three decades in the lab genetically engineering crop plants like chickpeas and lentils that can pull 20 times the average amount of carbon dioxide from the atmosphere and store it as a cork-like polymer deep underground.

What’s next is scaling up to the planetary level. She’s calculated that converting just 5 percent of the world’s cropland to her plants could get rid of half of global CO2 emissions. But financing field trials and seed production and distribution and farmer outreach is beyond the scope of most basic research funding mechanisms. Which is why she’s hoping the prize money will give her initiative a jump-start to bring in other grants and investors. “I’ll do my best to milk it as best I can,” says Chory, who figures the total cost for launching the project hovers around what Milner paid for his $100 million mansion, located just up the hill from the Ames red carpet. She says she appreciates being recognized, and a reason to go shopping with her family. But while she was happy to attend the evening with her kids, she’s focused on doing something to make the world they’ll inherit a less dangerous place. “I’m trying to do something now for humankind, not just to please my brain or follow a scientific curiosity. I don’t want to leave a crappy planet as my legacy.”

Bargmann, who was on this year’s selection committee for the life sciences, said the prize was, to her, as much about the future as about moments in the past that changed science forever. “We’re honoring people tonight who totally changed a field; it was one way before they came along, and something totally different afterward.”

For the chromosome theory of human genetic inheritance, i.e. how you got the genes you got, that was Kim Nasmyth, a biochemist from Oxford who figured out how chromosomes separate during mitosis. He thought about brushing off his old wool tuxedo for the event, but ultimately opted for something newer, and less warm. In his lapel he wore a gold pin with a white cross on a red shield—a gift from the city of Vienna, where he used to work. “It’s the only piece of jewelry I own,” he said. “I thought I might as well wear it.”

While he’s thrilled to receive the award, and pay some of the money forward to a foundation that will support the next generation of scientists, he says that recognition should never be the goal of a good researcher. “Ultimately, when you get out of bed in the morning, you just want to know, to understand,” he says. “I think what drives discoveries are the mysteries that can’t be explained.”

Here’s a complete list of this year’s Breakthrough Prize Winners

Life Sciences
(Each of the five Life Science winners will receive a $3 million prize.)

  • Joanne Chory, a molecular plant biologist at the Salk Institute for Biological Studies and Howard Hughes Medical Institute, for deciphering how plants optimize their growth, development, and cellular structure to transform sunlight into energy.
  • Don W. Cleveland, a neurobiologist at the Ludwig Institute for Cancer Research at the University of California, San Diego, for elucidating the molecular mechanisms behind a type of inherited Lou Gehrig's disease, including the role of glia in neurodegeneration.
  • Kim Nasmyth, a molecular biologist at the University of Oxford for figuring out how chromosomes separate during cell division, the most dramatic event in the life of a cell.
  • Kazutoshi Mori, a structural biologist at Kyoto University and Peter Walter, a biochemist at the University of California, San Francisco, were each recognized for their separate discoveries of a cellular quality-control system that detects disease-causing unfolded proteins and directs cells to take corrective measures.

Fundamental Physics
(The five winners will share a single $3 million prize with the entire WMAP science mission team.)

Charles L. Bennett, an astrophysicist at Johns Hopkins; Gary Hinshaw, an astrophysicist at the University of British Columbia; Norman Jarosik, a physicist at Princeton; Lyman Page Jr., a physicist at Princeton; and David N. Spergel, an astrophysicist at Princeton, for their work building detailed maps of the early universe that redefined the evolution of the cosmos and the fluctuations that seeded the formation of galaxies.

Mathematics
(The two winners will equally share a $3 million prize.)

Christopher Hacon, a mathematician at the University of Utah, and James McKernan, a mathematician at the University of California, San Diego, for their transformational contributions to birational algebraic geometry, especially to the minimal model program in all dimensions.

Read more:

Researchers share $22m Breakthrough prize as science gets rock star treatment

Glitzy ceremony honours work including that on mapping post-big bang primordial light, cell biology, plant science and neurodegenerative diseases

The most glitzy event on the scientific calendar took place on Sunday night when the Breakthrough Foundation gave away $22m (£16.3m) in prizes to dozens of physicists, biologists and mathematicians at a ceremony in Silicon Valley.

The winners this year include five researchers who won $3m (£2.2m) each for their work on cell biology, plant science and neurodegenerative diseases, two mathematicians, and a team of 27 physicists who mapped the primordial light that warmed the universe moments after the big bang 13.8 billion years ago.

Now in their sixth year, the Breakthrough prizes are backed by Yuri Milner, a Silicon Valley tech investor, Mark Zuckerberg of Facebook and his wife Priscilla Chan, Anne Wojcicki from the DNA testing company 23andMe, and Google’s Sergey Brin. Launched by Milner in 2012, the awards aim to make rock stars of scientists and raise their profile in the public consciousness.

The annual ceremony at Nasa’s Ames Research Center in California provides a rare opportunity for some of the world’s leading minds to rub shoulders with celebrities, who this year included Morgan Freeman as host, fellow actors Kerry Washington and Mila Kunis, and Miss USA 2017 Kára McCullough. When Joe Polchinski at the University of California in Santa Barbara shared the physics prize last year, he conceded his nieces and nephews would know more about the A-list attendees than he would.

Oxford University geneticist Kim Nasmyth won for his work on chromosomes but said he had not worked out what to do with the windfall. “It’s a wonderful bonus, but not something you expect,” he said. “It’s a huge amount of money; I haven’t had time to think it through.” On being recognised for what amounts to his life’s work, he added: “You have to do science because you want to know, not because you want to get recognition. If you do what it takes to please other people, you’ll lose your moral compass.” Nasmyth has won lucrative awards before and channelled some of his winnings into Gregor Mendel’s former monastery in Brno.

Another life sciences prizewinner, Joanne Chory at the Salk Institute in San Diego, was honoured for three decades of painstaking research into the genetic programs that flip into action when plants find themselves plunged into shade. Her work revealed that plants can sense when a nearby competitor is about to steal their light, sparking a growth spurt in response. The plants detect threatening neighbours by sensing a surge in the particular wavelengths of red light that are given off by vegetation.

Chory now has ambitious plans to breed plants that can suck vast quantities of carbon dioxide out of the atmosphere in a bid to combat climate change. She believes that crops could be selected to absorb 20 times more of the greenhouse gas than they do today, and convert it into suberin, a waxy material found in roots and bark that breaks down incredibly slowly in soil. “If we can do this on 5% of the landmass people are growing crops on, we can take out 50% of global human emissions,” she said.

Three other life sciences prizes went to Kazutoshi Mori at Kyoto University and Peter Walter for their work on quality control mechanisms that keep cells healthy, and to Don Cleveland at the University of California, San Diego, for his research on motor neurone disease.

The $3m Breakthrough prize in mathematics was shared by two British-born mathematicians, Christopher Hacon at the University of Utah and James McKernan at the University of California in San Diego. The pair made major contributions to a field of mathematics known as birational algebraic geometry, which sets the rules for projecting abstract objects with more than 1,000 dimensions onto lower-dimensional surfaces. “It gets very technical, very quickly,” said McKernan.

Speaking before the ceremony, Hacon was feeling a little unnerved. “It’s really not a mathematician kind of thing, but I’ll probably survive,” he said. “I’ve got a tux ready, but I’m not keen on wearing it.” Asked what he might do with his share of the winnings, Hacon was nothing if not realistic. “I’ll start by paying taxes,” he said. “And I have six kids, so the rest will evaporate.”

Chuck Bennett, an astrophysicist at Johns Hopkins University in Baltimore, led a Nasa mission known as the Wilkinson Microwave Anisotropy Probe (WMAP) to map the faint afterglow of the big bang’s radiation that now permeates the universe. The achievement, now more than a decade old, won the 27-strong science team the $3m Breakthrough prize in fundamental physics. “When we made our first maps of the sky, I thought: these are beautiful,” Bennett told the Guardian. “It is still absolutely amazing to me. We can look directly back in time.”

Bennett believes that the prizes may help raise the profile of science at a time when it is sorely needed. “The point is not to make rock stars of us, but of the science itself,” he said. “I don’t think people realise how big a role science plays in their lives. In everything you do, from the moment you wake up to the moment you go to sleep, there’s something about what you’re doing that involves scientific advances. I don’t think people think about that at all.”

Read more:

A Hidden Supercluster Could Solve the Mystery of the Milky Way

Glance at the night sky from a clear vantage point, and the thick band of the Milky Way will slash across the sky. But the stars and dust that paint our galaxy’s disk are an unwelcome sight to astronomers who study all the galaxies that lie beyond our own. It’s like a thick stripe of fog across a windshield, a blur that renders our knowledge of the greater universe incomplete. Astronomers call it the Zone of Avoidance.

Quanta Magazine


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Renée Kraan-Korteweg has spent her career trying to uncover what lies beyond the zone. She first caught a whiff of something spectacular in the background when, in the 1980s, she found hints of a potential cluster of objects on old photographic survey plates. Over the next few decades, the hints of a large-scale structure kept coming.

Late last year, Kraan-Korteweg and colleagues announced that they had discovered an enormous cosmic structure: a “supercluster” of thousands upon thousands of galaxies. The collection spans 300 million light years, stretching both above and below the galactic plane like an ogre hiding behind a lamppost. The astronomers call it the Vela Supercluster, for its approximate position around the constellation Vela.

Renée Kraan-Korteweg, an astronomer at the University of Cape Town, has spent decades trying to peer through the Zone of Avoidance.
University of Cape Town

Milky Way Movers

The Milky Way, just like every galaxy in the cosmos, moves. While everything in the universe is constantly moving because the universe itself is expanding, since the 1970s astronomers have known of an additional motion, called peculiar velocity. This is a different sort of flow that we seem to be caught in. The Local Group of galaxies—a collection that includes the Milky Way, Andromeda and a few dozen smaller galactic companions—moves at about 600 kilometers per second with respect to the leftover radiation from the Big Bang.

Over the past few decades, astronomers have tallied up all the things that could be pulling and pushing on the Local Group — nearby galaxy clusters, superclusters, walls of clusters and cosmic voids that exert a non-negligible gravitational pull on our own neighborhood.

The biggest tugboat is the Shapley Supercluster, a behemoth of 50 million billion solar masses that resides about 500 million light years away from Earth (and not too far away in the sky from the Vela Supercluster). It accounts for between a quarter and half of the Local Group’s peculiar velocity.

The Milky Way as seen by the Gaia satellite shows the dark clouds of dust that obscure the view of galaxies in the universe beyond.

The remaining motion can’t be accounted for by structures astronomers have already found. So astronomers keep looking farther out into the universe, tallying increasingly distant objects that contribute to the net gravitational pull on the Milky Way. Gravitational pull decreases with increasing distance, but the effect is partly offset by the increasing size of these structures. “As the maps have gone outward,” said Mike Hudson, a cosmologist at the University of Waterloo in Canada, “people continue to identify bigger and bigger things at the edge of the survey. We’re looking out farther, but there’s always a bigger mountain just out of sight.” So far astronomers have only been able to account for about 450 to 500 kilometers per second of the Local Group’s motion.

Astronomers still haven’t fully scoured the Zone of Avoidance to those same depths, however. And the Vela Supercluster discovery shows that something big can be out there, just out of reach.

In February 2014, Kraan-Korteweg and Michelle Cluver, an astronomer at the University of Western Cape in South Africa, set out to map the Vela Supercluster over a six-night observing run at the Anglo-Australian Telescope in Australia. Kraan-Korteweg, of the University of Cape Town, knew where the gas and dust in the Zone of Avoidance was thickest; she targeted individual spots where they had the best chance of seeing through the zone. The goal was to create a “skeleton,” as she calls it, of the structure. Cluver, who had prior experience with the instrument, would read off the distances to individual galaxies.

That project allowed them to conclude that the Vela Supercluster is real, and that it extends 20 by 25 degrees across the sky. But they still don’t understand what’s going on in the core of the supercluster. “We see walls crossing the Zone of Avoidance, but where they cross, we don’t have data at the moment because of the dust,” Kraan-Korteweg said. How are those walls interacting? Have they started to merge? Is there a denser core, hidden by the Milky Way’s glow?

And most important, what is the Vela Supercluster’s mass? After all, it is mass that governs the pull of gravity, the buildup of structure.

How to See Through the Haze

While the Zone’s dust and stars block out light in optical and infrared wavelengths, radio waves can pierce through the region. With that in mind, Kraan-Korteweg has a plan to use a type of cosmic radio beacon to map out everything behind the thickest parts of the Zone of Avoidance.

The plan hinges on hydrogen, the simplest and most abundant gas in the universe. Atomic hydrogen is made of a single proton and an electron. Both the proton and the electron have a quantum property called spin, which can be thought of as a little arrow attached to each particle. In hydrogen, these spins can line up parallel to each other, with both pointing in the same direction, or antiparallel, pointing in opposite directions. Occasionally a spin will flip—a parallel atom will switch to antiparallel. When this happens, the atom will release a photon of light with a particular wavelength.
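That spin-flip photon sits at a very specific radio frequency, about 1420.4 MHz, which is what hydrogen surveys actually tune to. A quick sketch of the arithmetic (constants rounded; the photon-energy figure is derived here purely for illustration):

```python
# Wavelength and photon energy of the hydrogen spin-flip (hyperfine) transition.
C = 2.998e8    # speed of light, m/s
H = 6.626e-34  # Planck constant, J*s

f_hyperfine = 1.4204e9  # transition frequency, Hz (~1420.4 MHz)
wavelength_cm = C / f_hyperfine * 100            # lambda = c / f
energy_ev = H * f_hyperfine / 1.602e-19          # E = h*f, converted to eV

print(f"wavelength: {wavelength_cm:.1f} cm")  # ~21.1 cm: the famous "21 cm line"
print(f"energy: {energy_ev:.2e} eV")          # ~5.9e-6 eV, deep in the radio band
```

The tiny photon energy is why the transition is so rare for any single atom, and why it takes a galaxy's worth of gas to produce a detectable signal.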

One of the 64 antenna dishes that will make up the MeerKAT telescope in South Africa.
SKA South Africa

The likelihood of one hydrogen atom’s emitting this radio wave is low, but gather a lot of neutral hydrogen gas together, and the chance of detecting it increases. Luckily for Kraan-Korteweg and her colleagues, many of Vela’s member galaxies have a lot of this gas.

During that 2014 observing session, she and Cluver saw indications that many of their identified galaxies host young stars. “And if you have young stars, it means they recently formed, it means there’s gas,” Kraan-Korteweg said, because gas is the raw material that makes stars.

The Milky Way has some of this hydrogen, too—another foreground haze to interfere with observations. But the expansion of the universe can be used to identify hydrogen coming from the Vela structure. As the universe expands, it pulls away galaxies that lie outside our Local Group and shifts the radio light toward the red end of the spectrum. “Those emission lines separate, so you can pick them out,” said Thomas Jarrett, an astronomer at the University of Cape Town and part of the Vela Supercluster discovery team.
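The separation Jarrett describes can be sketched with the simple non-relativistic Doppler formula; the 18,000 km/s recession velocity below is an illustrative figure for a galaxy at roughly Vela's distance, not a value quoted in the article:

```python
# How redshift separates a distant galaxy's 21 cm emission from the Milky Way's
# own hydrogen. The recession velocity is an illustrative assumption.
C_KM_S = 2.998e5    # speed of light, km/s
REST_MHZ = 1420.4   # rest frequency of the 21 cm line, MHz

def observed_frequency(v_km_s):
    """Non-relativistic Doppler shift: f_obs = f_rest / (1 + v/c)."""
    return REST_MHZ / (1 + v_km_s / C_KM_S)

local = observed_frequency(0)       # Milky Way gas: essentially at rest
distant = observed_frequency(18_000)  # a galaxy receding at ~18,000 km/s

print(f"local HI: {local:.1f} MHz, distant HI: {distant:.1f} MHz")
```

The two signals land tens of megahertz apart, so a radio survey can mask out the foreground line and keep only the background one.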

While Kraan-Korteweg’s work over her career has dug up some 5,000 galaxies in the Vela Supercluster, she is confident that a sensitive enough radio survey of this neutral hydrogen gas will triple that number and reveal structures that lie behind the densest part of the Milky Way’s disk.

That’s where the MeerKAT radio telescope enters the picture. Located near the small desert town of Carnarvon, South Africa, the instrument will be more sensitive than any radio telescope on Earth. Its 64th and final antenna dish was installed in October, although some dishes still need to be linked together and tested. A half array of 32 dishes should be operating by the end of this year, with the full array following early next year.

Kraan-Korteweg has been pushing over the past year for observing time in this half-array stage, but if she isn’t awarded her requested 200 hours, she’s hoping for 50 hours on the full array. Both options provide the same sensitivity, which she and her colleagues need to detect the radio signals of neutral hydrogen in thousands of individual galaxies hundreds of millions of light years away. Armed with that data, they’ll be able to map what the full structure actually looks like.

Cosmic Basins

Hélène Courtois, an astronomer at the University of Lyon, is taking a different approach to mapping Vela. She makes maps of the universe that she compares to watersheds, or basins. In certain areas of the sky, galaxies migrate toward a common point, just as all the rain in a watershed flows into a single lake or stream. She and her colleagues look for the boundaries, the tipping points of where matter flows toward one basin or another.

Hélène Courtois, an astronomer at the University of Lyon, maps cosmic structure by examining the flow of galaxies.
Eric Leroux, Université Claude Bernard Lyon 1.

A few years ago, Courtois and colleagues used this method to attempt to define our local large-scale structure, which they call Laniakea. The emphasis on defining is important, Courtois explains, because while we have definitions of galaxies and galaxy clusters, there’s no commonly agreed-upon definition for larger-scale structures in the universe such as superclusters and walls.

Part of the problem is that there just aren’t enough superclusters to arrive at a statistically rigorous definition. We can list the ones we know about, but as aggregate structures filled with thousands of galaxies, superclusters show an unknown amount of variation.

Now Courtois and colleagues are turning their attention farther out. “Vela is the most intriguing,” Courtois said. “I want to try to measure the basin of attraction, the boundary, the frontier of Vela.” She is using her own data to find the flows that move toward Vela, and from that she can infer how much mass is pulling on those flows. By comparing those flow lines to Kraan-Korteweg’s map showing where the galaxies physically cluster together, they can try to address how dense of a supercluster Vela is and how far it extends. “The two methods are totally complementary,” Courtois added.

The two astronomers are now collaborating on a map of Vela. When it’s complete, the astronomers hope that they can use it to nail down Vela’s mass, and thus the puzzle of the remaining piece of the Local Group’s motion—“that discrepancy that has been haunting us for 25 years,” Kraan-Korteweg said. And even if the supercluster isn’t responsible for that remaining motion, collecting signals through the Zone of Avoidance from whatever is back there will help resolve our place in the universe.


Read more:

METI’s First Message Is a Music Lesson for Aliens

Tromsø, Norway is usually a destination for northern lights lovers—tourists and scientists alike. But on October 16, the small city north of the Arctic Circle took on a new cosmic role. A radio telescope in the city, a hotspot for aurora investigators, became the origin point of a transmission aimed at the exoplanet GJ 273b, a potentially habitable world just over 12 light years from Earth. Or really, at any GJ 273b inhabitants who might be listening.

The meat of the transmission was carefully crafted by METI International—the alien hunters so serious about Messaging Extraterrestrial Intelligence that they splintered off from the SETI Institute to get more ambitious. It's the first information-rich message they've sent since forming in 2015. And with this bit of code, constructed in partnership with the Spanish Sónar music festival and the Institute of Space Studies of Catalonia, they want to teach aliens about music—one radio wave pulse at a time.

Sending a message to ET isn't easy. On the terrestrial level, the team first had to get access to a radio telescope, which can be tricky: The instruments are in high demand, and time slots tend to go to projects that are likely to produce, well, results. So METI sent someone who had experience asking for time and resources no one really wanted to give them: their exoplanet hunter Ignasi Ribas. “Twenty years ago, when we didn't know if exoplanets were out there, exoplanet hunters were in the same position we are," says METI president Douglas Vakoch. Fortunately, the director of Tromsø's EISCAT 930 MHz transmitter was intrigued, and gave the group access to the facility for a three-day transmission.

Extraterrestrially, the researchers had to choose one target from nearly limitless potential worlds. METI has two main criteria: the planet has to be (relatively) close to Earth, and it has to be a place where aliens could plausibly evolve and thrive. Which usually means it has to be in the Goldilocks Zone—not so cold that any water would be locked away in a glacier, and not so hot that it would just evaporate. “You want a star that has some staying power,” says Vakoch. “You need time for life to cook up.” Smaller stars have longer lives, and GJ 273b orbits a red dwarf named Luyten's Star.

But perhaps harder than all that was deciding what they wanted to say once they'd phoned ET. The METI team wanted to tackle altruism, but the radio telescope METI partnered with was interested in sending a message that would convey Earth culture. So METI joined forces with the Spanish Sónar music festival to create a music-based message, and it was up to METI to make sure any extraterrestrials on the receiving end understood what they were hearing—if they can hear, that is.

Music actually turns out to be a decent universal language. Aliens don’t have to see in order to perceive it. And even if extraterrestrials can't appreciate a good tune, Vakoch is hopeful that they might enjoy the mathematical relationships between the notes. They encoded their entire message in a binary system of two alternating frequencies (an industry standard among alien hunters), pulsing the frequencies 125 times per second. Interpreting those beeps and boops (which act a lot like ones and zeros for computers) is ultimately up to the aliens. "It's like creating a puzzle," says Mike Matessa, a cognitive scientist and friend of Vakoch's who helped develop METI's message. "We tried to make it as easy as possible, but it’s really challenging when you can’t refer to anything in your culture, only science."
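The two-frequency scheme described above can be sketched in a few lines. The specific frequencies here are placeholder values, not the actual EISCAT transmission frequencies; only the 125-pulses-per-second rate comes from the text:

```python
# Sketch of METI-style binary signaling: each bit becomes a short pulse at one
# of two radio frequencies. Frequencies below are hypothetical placeholders.
FREQ_ZERO_HZ = 929_000_000   # assumed "0" tone
FREQ_ONE_HZ = 930_000_000    # assumed "1" tone
PULSES_PER_SECOND = 125      # rate described in the article

def encode(bits: str):
    """Map a bit string to a list of (frequency, duration) pulses."""
    duration = 1 / PULSES_PER_SECOND  # 8 ms per pulse
    return [(FREQ_ONE_HZ if b == "1" else FREQ_ZERO_HZ, duration) for b in bits]

pulses = encode("101101")
print(len(pulses), pulses[0])  # 6 pulses, the first at the "1" frequency
```

The receiving end sees only which of the two tones arrived and when, exactly the "ones and zeros" the article compares to computer bits; all the meaning has to be built up from the pattern itself.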

So they started with math. The basic 1 + 1 = 2 kind that they assume is a constant across the galaxy. From there, they build up all the mathematics you need to understand music from the foundation: You need to understand the relationships between triangles to understand the sine function to understand sine waves to understand electromagnetic waves to understand sound waves to understand music. "We try to ground things in reality whenever we can," Matessa says. "You can’t hold up an apple and say, 'apple.' But you can send a tone and say that it had a certain frequency and duration."

One of the message's novel features is a clock. "We introduce notions of time like a second by having a pulse and saying, 'This is one,'" says Vakoch. "Then we pulse for two seconds and say, 'This is two,' and the pulses vary because the two is twice as long as the one." Then they expanded on those basics just as they did with the mathematical concepts underpinning musical theory. As the aliens work their way through the message, converting pulses to a numbering system, doing basic arithmetic, the clock will tell them how many seconds have passed, hopefully confirming that that initial pulse—the second—was a measurement of time. Which will also help them figure out that Earthlings will be listening for a response from them in 25 years. It's everything human scientists would want to receive in an alien transmission. "It's designed for the SETI scientists of other worlds," Vakoch says.

Their next message, scheduled for April 2018, will be more musically complex. Not only will they send musical samples from the Sónar artist community, they'll actually be able to use the EISCAT 930 MHz transmitter to send simple melodies composed of a number of frequencies. They’ll transform a device meant to study the Northern Lights into an interstellar musical instrument.

And there's still so much more METI would like to tell the universe about us. "How can you talk about altruism or kindness or other human qualities, and how do you build that up from math?" Matessa says. "It's advanced storytelling using languages that any civilization can understand." METI wants to compose as many messages in as many different sensory modalities as possible, in the hopes that one of them could start a meaningful conversation between worlds.

Read more:

So You Want to Geoengineer the Planet? Beware the Hurricanes

Every country on Earth, save for (cough) one, has banded together to cut emissions and stop the runaway heating of our only home. That’s nearly 200 countries working to keep the global average temperature from climbing 2 degrees Celsius above pre-Industrial Revolution levels.

Phenomenal. But what if cooperation and emissions reduction aren’t enough? Projections show that even if all those countries hit their Paris Agreement emissions pledges, the world will still get too warm too fast, plunging us into climate chaos. So, if we can’t stop what we’ve set in motion, what if we could just cool the planet off by making it more reflective—more like a disco ball than a baseball?

Actually, we could. It’s called solar geoengineering. Scientists could release materials into the stratosphere that reflect sunlight back into space, kind of like slapping giant sunglasses on Earth. You could theoretically do this with giant space mirrors, but that would require a mountain of R&D and money and materials. More likely, scientists might be able to steal a strategy from Earth itself. When volcanoes erupt, they spew sulfur high in the sky, where the gas turns into an aerosol that blocks sunlight. If scientists added sulfur to the stratosphere manually, that could reflect light away from Earth and help humanity reach its climate goals.

It's not that simple, though: The massive Tambora eruption of 1815 cooled the Earth so much that Europe suffered the “year without summer,” leading to extreme food shortages. And in a study published today in the journal Nature, researchers examine a bunch of other ways a blast of sulfur could do more harm than good.

Specifically, the group looked at how sulfur seeding could impact storms in the North Atlantic. They built models showing what would happen if they were to inject sulfur dioxide into the lower stratosphere above either the Northern or Southern Hemisphere, at a rate of 5 million metric tons per year. Sulfur dioxide gas (SO2) is not itself reflective, but up there it reacts with water, picking up oxygen molecules to become sulfate aerosol (SO4)—now that's reflective. Block out some of the sun, and you block out some of the solar energy.

Now, the Earth's hemispheres aren't just divided by a thick line on your globe; they're actually well-divided by what is essentially a giant updraft. That tends to keep materials like, say, sulfate aerosol, stuck in a given hemisphere. “It goes up and it goes more to the one side where you injected it,” says Simone Tilmes, who studies geoengineering at the National Center for Atmospheric Research and was not involved in the study.

This wall of wind gives you some measure of control. If you were to inject SO2 into the Northern Hemisphere, the models show, you would reduce storm activity in the North Atlantic—probably because the injection would put the tropical jet stream on a collision course with the Atlantic hurricane main development region. Wind shear like that weakens storms as they grow. But inject gas into the Southern Hemisphere and the stream shifts north, increasing storms.

Which all jibes with historical data. In 1912, the Katmai eruption in Alaska spewed 30 cubic kilometers of ash and debris into the atmosphere. What followed was the historical record’s only year without hurricanes.

The potentially good news is that models like these make solar geoengineering a bit more predictable than a volcano eruption. The bad news is not everyone would win. Solar geoengineering in the north would cut precipitation in the semi-arid Sahel in north-central Africa.

What we’re looking at, then, isn’t just a strategy with environmental implications, but humanitarian ones as well. Think about current conflicts over water supplies, especially in the developing world. Now scale that up into conflict over the weather itself. It’s not hard to imagine one part of the world deciding to geoengineer for more water and another part of the world suffering for it. “I therefore think that solar geoengineering is currently too risky to be utilized due to the enormous political friction that it may cause,” says lead author Anthony Jones of the University of Exeter.

What researchers need is way more science, more models, more data, way more of whatever you can get to understand these processes. And they’ll need international guidelines for a technology that could nourish some regions and devastate others—individual nations can’t just make unilateral climate decisions that have global repercussions. “There's a lot we don't know and a lot of differences in models,” says Tilmes. “The answer is we really have to look at it more.”

Really, it’s hard to imagine a conundrum of bigger scale. For now, we'll just have to do what we can with baseball Earth. But perhaps one day we’ll be forced to start building a disco ball, one little mirror at a time.


What does a sexist Google engineer teach us about women in science? | John Abraham

John Abraham: The Google engineer's infamous sexist manifesto is contradicted by the brilliance of women in science.


That's the short answer, but it deserves some commentary. In early August, a young Google computer engineer made lots of news in the US when he penned a manifesto that many described as sexist and which led to his firing. The memo was written as a backlash against efforts to improve diversity in the workplace. However, the arguments articulated by the manifesto were rightly described as offensive by Google executives.

The explosive part in the memo involved comments about how biological differences explain the paucity of women in technology and leadership fields. While there are certainly both physical and mental differences between men and women, the comments about both genders are, in my opinion, misguided and offensive.

This article is not going to focus much on the content of this so-called manifesto. It also won't focus on the author of the document, except to ask what experience, training, or education equips such a young engineer to make such broad-brush generalizations. I mean, has he, for instance, managed scores of male and female engineers and been able to assess their quality of work and intellectual capacity? I doubt it. Has he studied this in any detail or published on the topic? I doubt it.

I found this manifesto so ironic because I give a lot of thought to differences between male and female scientists. I am not an expert in the area, certainly not in evolutionary biology. But I am a Full Professor with many years of instructing both undergraduate and graduate students in engineering. I am often struck by how small the female population is in my discipline (perhaps 20%), yet it is higher in other technical fields (biology, mathematics, chemistry, etc.). I am also impressed by how well female students do in technical courses and degree programs. I note a statistically significant performance gap between male and female students in courses; females consistently outperform their male peers.

I also have had the fortune to be a consultant for many different engineering companies from industries such as biomedical, aerospace, manufacturing, clean energy and other fields. In my work, I notice that women team members easily hold their own with male co-workers. I also believe (but I have no evidence) that women think differently than men.

In my anecdotal experience, women are able to consider problems from a wider range of perspectives. This perspective has real value to design teams; it even leads companies to pay more for female engineers (yes, our female engineering graduates tend to earn more than their male counterparts). Diverse teams make for effective teams, and that includes gender diversity. So, in my 15 or so years as a professor, and in my perhaps 50 consulting positions, I have lived an experience very different from the one this young Google engineer articulated.

With all that said, I thought this event provided an excellent opportunity to showcase some female scientists who are either world-known or becoming world-known in the field of climate science. So, here are some short bios of brilliant women climate scientists.

Dr. Magdalena Balmaseda

Magdalena A. Balmaseda has been working at ECMWF since 1995. She currently leads the Earth System Predictability Section in the Research Department.

Dr. Magdalena Balmaseda. Photograph: ECMWF

Dr. Balmaseda has developed her career by helping us understand weather and climate, and she has contributed to building bridges between the climate and weather sciences. Her expertise in ocean modelling in general, and in El Niño in particular, greatly contributed to ECMWF's first steps in seasonal forecasting back in 1995. Now seasonal forecasts are one of the pillars of the EU Copernicus Climate Change Service (C3S), and the ocean is included in all ECMWF probabilistic forecasting systems, contributing to the provision of forecasts of atmospheric conditions from days to months and seasons ahead.

Equally important have been her contributions to understanding the role of the ocean in a warming climate. The apparent slowing of the global rise in surface temperature in the first decade of the 21st century (the so-called "hiatus") had puzzled the scientific community. In 2013, Dr. Balmaseda, together with other colleagues, demonstrated that a fair amount of the energy trapped in the Earth system had actually been absorbed by deep ocean waters. This outcome was only possible thanks to a combination of information from ocean models, atmospheric winds, and ocean observations, using combination techniques similar to those employed for weather forecasting.

Dr. Karina von Schuckmann

Karina von Schuckmann is an oceanographer working in France at Mercator Ocean. She leads the ocean climate monitoring activities of the Copernicus Marine Environment Monitoring Service, which includes the development of a regular Ocean State Report with more than 100 authors. She is also a lead author of the upcoming Intergovernmental Panel on Climate Change special report on the ocean and cryosphere.

Dr. Karina von Schuckmann.

Her research is focused on the ocean's role in the Earth's energy budget. This means she studies how much heat is stored in the ocean and how that heat flows through its waters. Her studies particularly highlight the unique importance of measuring the global ocean, as its heat storage is the most fundamental metric defining the status of global climate change and expectations for continued global warming. On this topic she is also playing a leading role in international scientific collaborations under the framework of the World Climate Research Program.

Dr. von Schuckmann's résumé reads like a seasoned superstar's: she has worked at some of the best research labs in France, the USA, and Germany. I was surprised to find she received her PhD only recently (in 2006), and I want to know how she became a leader in the field so quickly. I guess talent will do that. Her dissertation topic was ocean climate variability in the tropical Atlantic Ocean.

Dr. Jessica Conroy

Dr. Jessica Conroy is a faculty member at the University of Illinois Urbana-Champaign, where she holds a dual appointment in the departments of Geology and Plant Biology. Another young, up-and-coming research scientist, she has been at the forefront of connecting modern climate observations and climate model outputs with long-past climate measurements (paleoclimate data). Her work has helped improve our understanding of past Earth climate.

Dr. Jessica Conroy. Photograph: Jessica Moerman

In addition, she has developed long paleoclimate records from regions that are very sensitive to climate change, such as remote islands across the tropical Pacific and the Tibetan Plateau. She goes where few scientists have gone to make measurements that even fewer can.

Part of her work relies upon lake sediment samples and on the use of stable isotopes (oxygen and hydrogen) to give clues about what past climate was like. Not only does this give information about past temperatures; perhaps more importantly, these data also tell us what the water cycle was like in the past. She was recently selected as a National Academy of Sciences Kavli Fellow.

Dr. Sarah Myhre

Dr. Myhre is skilled in climate science as well as climate communication. Her area of research is paleoceanography (the study of past climate and biology through the oceans). Her research requires her team to gather sediment cores from the seafloor, to analyze the chemical compositions and the shells of creatures contained within such cores, and to observe deep-sea ecosystems using remotely operated submersibles. Her publications have appeared in some of the most prestigious scientific journals.

She may become even better known, however, for her work not only in communicating climate science to the general public but also in training other scientists to be communicators. We scientists are often good at talking amongst ourselves, but we are less skilled at explaining why our research is important and how society can use it to make informed decisions. This is where Dr. Myhre shines. She is a board member of the organization 500 Women Scientists and the Center for Women and Democracy, and is an uncompromising advocate for women's leadership in science and society.

Dr. Myhre and her son at the North Cascades Institute.

Dr. Rita Colwell

World-renowned expert in infectious diseases and health, Dr. Colwell has degrees in bacteriology, genetics, and oceanography; her pioneering use of computational tools and DNA sequencing helped lay the foundation for the bioinformatics revolution. This unique background has allowed her to make connections across these disciplines and enhance our understanding of water availability, disease, safe drinking water, and the effects of climate change on waterborne pathogens.

Dr. Colwell has won numerous national and international awards, such as the 2006 National Medal of Science, the 2010 Stockholm Water Prize, and the 2017 International Prize for Biology, and has been elected to multiple halls of fame. She served as director of the NSF and has filled numerous advisory roles throughout her career. As with some of the other women discussed here, Dr. Colwell prioritizes scientific communication and engagement with the public, particularly children, and expanding the participation of minorities and women in the STEM fields.

Dr. Cynthia Rosenzweig

Nasa Goddard Institute for Space Studies is fortunate to employ Dr. Cynthia Rosenzweig. Her technical focus is on climate change impacts: how society will be affected by a changing climate. In addition to her technical research, she has been a tireless service worker in the field, serving as a Coordinating Lead Author on the Fourth IPCC report as well as in numerous editing and directorship roles with various climate and adaptation organizations. She also has an appointment at the Center for Climate Systems Research at Columbia University.

Perhaps her most active current area of research is climate and crop productivity: she wants to know how agricultural outputs will change as the climate warms and water availability shifts.

Dr. Jane Lubchenco

Dr. Lubchenco is well known as a world leader in the field of environmental science. After decades of innovative research at the intersection of climate change and the ocean, President Obama tapped her to lead the National Oceanic and Atmospheric Administration, the federal agency that keeps the nation's climate records, produces much of the federal government's climate science, and leads the U.S. National Climate Assessment. She has focused squarely on the intersection between climate change and human well-being, and on the opportunities to mitigate and adapt to climate change through smarter coastal and oceanic policies and practices.

With a technical record extending back more than four decades, it is challenging to find anyone with a stronger pedigree. She has used her prestige to raise the importance of scientific communication amongst her colleagues. In the past, communicating science to the larger public was an afterthought; Dr. Lubchenco made it a critical measure of one's career. Most recently, she has issued a new call to arms for scientists to become more engaged with society: to counter the post-truth mentality, enable citizens and leaders to have confidence in evidence and science, and work together to address climate change and other urgent environmental challenges.

None of these short biographies do the scientists justice, but hopefully they give a sense of how some of our top female scientists are contributing to our understanding of the world in which we live. I know they are making a sexist engineer formerly employed at Google look a bit silly.