Quora can be an awesome resource, a place where rational, intelligent people come together to have in-depth discussions about important and stimulating topics. However, like anywhere else on the web, it has its fair share of pretentious and arrogant contributors who can’t seem to get over themselves, like this “budding mathematician,” for example.

The poor woman, obviously doing the superior science, found herself stuck with a boyfriend studying something as lowly as psychology, the science of the human mind and behavior. “I’m planning to be a mathematician and I can’t take his interest seriously,” she wailed. “It’s a joke compared to mine. We have chemistry but his profession/interest in that pop junk is annoying. I prefer intellectual discussions not junk talk.”

Now, whether this was a genuine post or just a wind-up angling for a reaction is open to question, but a reaction it got, in the form of a glorious put-down. Using pinpoint logic to defend the complexity and importance of psychology relative to mathematics, the response eloquently catalogs the OP’s shortcomings, while suggesting that her psychologist boyfriend might at least find her a useful subject for study.

She will no doubt learn, with time, to respect others’ choices and passions in life, even if they are different from her own. But for now, such a superior attitude is probably not going to stand this student in good stead for the duration of her studies, or her love life for that matter. Scroll down to read how it unfolded for yourself, and to see how others reacted to the post. What do you think? Let us know in the comments!

A “budding mathematician” took to Quora to share her embarrassment at her boyfriend’s choice of study, psychology

Rock-Paper-Scissors works great for deciding who has to take out the garbage. But have you ever noticed what happens when, instead of playing best of three, you just let the game continue round after round? At first, you play a pattern that gives you the upper hand, but then your opponent quickly catches on and turns things in her favor. As strategies evolve, a point is reached where neither side seems to be able to improve any further. Why does that happen?


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

In 1950, the mathematician John Nash proved that in any kind of game with a finite number of players and a finite number of options—like Rock-Paper-Scissors—a mix of strategies always exists where no single player can do any better by changing their own strategy alone. The theory behind such stable strategy profiles, which came to be known as “Nash equilibria,” revolutionized the field of game theory, altering the course of economics and changing the way everything from political treaties to network traffic is studied and analyzed. And it earned Nash the Nobel Prize in economics in 1994.

So, what does a Nash equilibrium look like in Rock-Paper-Scissors? Let’s model the situation with you (Player A) and your opponent (Player B) playing the game over and over. Each round, the winner earns a point, the loser loses a point, and ties count as zero.

Now, suppose Player B adopts the (silly) strategy of choosing Paper every turn. After a few rounds of winning, losing, and tying, you are likely to notice the pattern and adopt a winning counterstrategy by choosing Scissors every turn. Let’s call this strategy profile (Scissors, Paper). If every round unfolds as Scissors vs. Paper, you’ll slice your way to a perfect record.

But Player B soon sees the folly in this strategy profile. Observing your reliance on Scissors, she switches to the strategy of always choosing Rock. This strategy profile (Scissors, Rock) starts winning for Player B. But of course, you now switch to Paper. During these stretches of the game, Players A and B are employing what are known as “pure” strategies—a single strategy that is chosen and repeatedly executed.

Clearly, no equilibrium will be achieved here: For any pure strategy, like “always choose Rock,” a counterstrategy can be adopted, like “always choose Paper,” which will force another change in strategy. You and your opponent will forever be chasing each other around the circle of strategies.

But you can also try a “mixed” strategy. Let’s assume that, instead of choosing only one strategy to play, you can randomly choose one of the pure strategies each round. Instead of “always play Rock,” a mixed strategy could be to “play Rock half the time and Scissors the other half.” Nash proved that, when such mixed strategies are allowed, every game like this must have at least one equilibrium point. Let’s find it.

So, what’s a sensible mixed strategy for Rock-Paper-Scissors? A reasonable intuition would be “choose Rock, Paper or Scissors with equal probability,” denoted as (1/3, 1/3, 1/3). This means Rock, Paper and Scissors are each chosen with probability 1/3. Is this a good strategy?

Well, suppose your opponent’s strategy is “always choose Rock,” a pure strategy that can be represented as (1, 0, 0). How will the game play out under the strategy profile (1/3, 1/3, 1/3) for A and (1, 0, 0) for B?

To get a better picture of our game, we’ll construct a table that shows the probability of each of the nine possible outcomes in every round: Rock for A, Rock for B; Rock for A, Paper for B; and so on. In the chart below, the top row indicates Player B’s choice, and the leftmost column indicates Player A’s choice.

Each entry in the table shows the probability that the given pair of choices is made in any given round. This is simply the product of the probabilities that each player makes their respective choice. For example, the probability of Player A choosing Paper is 1/3, and the probability of Player B choosing Rock is 1, so the probability of (Paper for A, Rock for B) is 1/3 × 1 = 1/3. But the probability of (Paper for A, Scissors for B) is 1/3 × 0 = 0, since there is a zero probability of Player B choosing Scissors.
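The table described above can be reproduced with a short sketch. This is a minimal illustration, not part of the original article, assuming the (Rock, Paper, Scissors) ordering used throughout: each cell of the joint table is just the product of the two players’ individual probabilities.

```python
from fractions import Fraction

MOVES = ["Rock", "Paper", "Scissors"]

# Mixed strategies as probability vectors over (Rock, Paper, Scissors).
a = [Fraction(1, 3)] * 3                     # Player A: uniform mix
b = [Fraction(1), Fraction(0), Fraction(0)]  # Player B: always Rock

# The probability of each (A's move, B's move) pair is the product
# of the two players' individual probabilities.
table = {(ma, mb): pa * pb
         for ma, pa in zip(MOVES, a)
         for mb, pb in zip(MOVES, b)}

print(table[("Paper", "Rock")])      # 1/3
print(table[("Paper", "Scissors")])  # 0
```

Because the nine cells cover every possible outcome of a round, their probabilities sum to 1, a quick sanity check on any such table.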

So how does Player A fare in this strategy profile? Player A will win one-third of the time (Paper, Rock), lose one-third of the time (Scissors, Rock) and tie one-third of the time (Rock, Rock). We can compute the number of points that Player A will earn, on average, each round by summing the product of each outcome with its respective probability:

(+1)(1/3) + (−1)(1/3) + (0)(1/3) = 0

This says that, on average, Player A will earn 0 points per round. You will win, lose and tie with equal likelihood. On average, the number of wins and losses will even out, and both players are essentially headed for a draw.
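The same average can be computed mechanically. The sketch below, an illustration assuming the +1/−1/0 scoring described above, encodes Player A’s payoffs as a 3×3 matrix and takes the expectation over the joint probabilities:

```python
from fractions import Fraction

# Player A's payoff matrix: rows = A's move, columns = B's move,
# both ordered (Rock, Paper, Scissors). +1 = win, -1 = loss, 0 = tie.
PAYOFF = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def expected_payoff(a, b):
    """Player A's average points per round under mixed strategies a and b."""
    return sum(a[i] * b[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

uniform = [Fraction(1, 3)] * 3
print(expected_payoff(uniform, [1, 0, 0]))  # 0
```

The function is just the weighted sum described in the text: each of the nine outcomes contributes its payoff times its probability.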

But as we’ve already discussed, you can do better by changing your strategy, assuming your opponent doesn’t change theirs. If you switch to the strategy (0, 1, 0) (“choose Paper every time”), the probability chart will look like this:

Each time you play, your Paper will wrap your opponent’s Rock, and you’ll earn one point every round.

So, this pair of strategies—(1/3, 1/3, 1/3) for A and (1, 0, 0) for B—is not a Nash equilibrium: You, as Player A, can improve your results by changing your strategy.

As we’ve seen, pure strategies don’t seem to lead to equilibrium. But what if your opponent tries a mixed strategy, like (1/2, 1/4, 1/4)? This is the strategy “Rock half the time; Paper and Scissors each one quarter of the time.” Here’s the associated probability chart:

Now, here’s the “payoff” chart, from Player A’s perspective; this is the number of points Player A receives for each outcome.

We put the two charts together, using multiplication, to compute how many points, on average, Player A will earn each round.

On average, Player A is again earning 0 points per round. Like before, this strategy profile, (1/3, 1/3, 1/3) for A and (1/2, 1/4, 1/4) for B, ends up in a draw.

But also like before, you as Player A can improve your results by switching strategies: Against Player B’s (1/2, 1/4, 1/4), Player A should play (1/4, 1/2, 1/4). This has the following probability chart:

That is, under this strategy profile—(1/4, 1/2, 1/4) for A and (1/2, 1/4, 1/4) for B—Player A nets 1/16 of a point per round on average. After 100 rounds, Player A will be up 6.25 points on average. There’s a big incentive for Player A to switch strategies. So, the strategy profile of (1/3, 1/3, 1/3) for A and (1/2, 1/4, 1/4) for B is not a Nash equilibrium, either.
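That 1/16 figure can be verified with the same expected-payoff calculation used earlier. A minimal sketch, again assuming the +1/−1/0 scoring and (Rock, Paper, Scissors) ordering:

```python
from fractions import Fraction

# Player A's payoffs, (Rock, Paper, Scissors) order for rows and columns.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def expected_payoff(a, b):
    """Player A's average points per round under mixed strategies a and b."""
    return sum(a[i] * b[j] * PAYOFF[i][j] for i in range(3) for j in range(3))

F = Fraction
a = [F(1, 4), F(1, 2), F(1, 4)]  # Player A's counterstrategy
b = [F(1, 2), F(1, 4), F(1, 4)]  # Player B's mix
per_round = expected_payoff(a, b)
print(per_round)                # 1/16
print(float(per_round) * 100)   # 6.25 points expected over 100 rounds
```

Note that (1/4, 1/2, 1/4) is merely an improvement, not the best possible reply; against (1/2, 1/4, 1/4), playing pure Paper would earn even more per round on average.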

But now let’s consider the pair of strategies (1/3, 1/3, 1/3) for A and (1/3, 1/3, 1/3) for B. Here’s the corresponding probability chart:

Symmetry makes quick work of the net result calculation:

Again, you and your opponent are playing to draw. But the difference here is that no player has an incentive to change strategies! If Player B were to switch to any imbalanced strategy where one option—say, Rock—were played more than the others, Player A would simply alter their strategy to play Paper more frequently. This would ultimately yield a positive net result for Player A each round. This is precisely what happened when Player A adopted the strategy (1/4, 1/2, 1/4) against Player B’s (1/2, 1/4, 1/4) strategy above.

Of course, if Player A switched from (1/3, 1/3, 1/3) to an imbalanced strategy, Player B could take advantage in a similar manner. So, neither player can improve their results solely by changing their own individual strategy. The game has reached a Nash equilibrium.
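The equilibrium claim can be checked directly. Since expected payoff is linear in a player’s own strategy, it suffices to check that no pure deviation helps; a minimal sketch under the same scoring assumptions:

```python
from fractions import Fraction

# Player A's payoffs, (Rock, Paper, Scissors) order for rows and columns.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def expected_payoff(a, b):
    """Player A's average points per round under mixed strategies a and b."""
    return sum(a[i] * b[j] * PAYOFF[i][j] for i in range(3) for j in range(3))

uniform = [Fraction(1, 3)] * 3
pures = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Against a uniform opponent, every pure deviation for A still earns 0,
# so no unilateral change of strategy helps A ...
print(all(expected_payoff(p, uniform) == 0 for p in pures))  # True
# ... and since B's payoff is the negative of A's, the same holds for B.
print(all(expected_payoff(uniform, p) == 0 for p in pures))  # True
```

Any mixed deviation is a weighted average of the pure ones, so it also earns exactly 0; that is why checking the three pure strategies is enough.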

The fact that all such games have such equilibria, as Nash proved, is important for several reasons. One of those reasons is that many real-life situations can be modeled as games. Whenever a group of individuals is caught in the tension between personal gain and collective satisfaction—like in a negotiation, or a competition for shared resources—you’ll find strategies being employed and payoffs being evaluated. The ubiquitous nature of this mathematical model is part of the reason Nash’s work has been so impactful.

Another reason is that a Nash equilibrium is, in some sense, a positive outcome for all players. When reached, no individual can do better by changing their own strategy. There might exist better collective outcomes that could be reached if all players acted in perfect cooperation, but if all you can control is yourself, ending up at a Nash equilibrium is the best you can reasonably hope to do.

And so, we might hope that “games” like economic incentive packages, tax codes, treaty parameters and network designs will end in Nash equilibria, where individuals, acting in their own interest, all end up with something to be happy about, and systems are stable. But when playing these games, is it reasonable to assume that players will naturally arrive at a Nash equilibrium?

It’s tempting to think so. In our Rock-Paper-Scissors game, we might have guessed right away that neither player could do better than playing completely randomly. But that’s in part because all player preferences are known to all other players: Everyone knows how much everyone else wins and loses for each outcome. But what if preferences were secret and more complex?

Imagine a new game in which Player B scores three points when she defeats Scissors, and one point for any other victory. This would alter the mixed strategy: Player B would play Rock more often, hoping for the triple payoff when Player A chooses Scissors. And while the difference in points wouldn’t directly affect Player A’s payoffs, the resulting change in Player B’s strategy would trigger a new counterstrategy for A.

And if every one of Player B’s payoffs was different, and secret, it would take some time for Player A to figure out what Player B’s strategy was. Many rounds would pass before Player A could get a sense of, say, how often Player B was choosing Rock, in order to figure out how often to choose Paper.

Now imagine there are 100 people playing Rock-Paper-Scissors, each with a different set of secret payoffs, each depending on how many of their 99 opponents they defeat using Rock, Paper or Scissors. How long would it take to calculate just the right frequency of Rock, Paper or Scissors you should play in order to reach an equilibrium point? Probably a long time. Maybe longer than the game will go on. Maybe even longer than the lifetime of the universe!

At the very least, it’s not obvious that even perfectly rational and reflective players, playing good strategies and acting in their own best interests, will end up at equilibrium in this game. This idea lies at the heart of a paper posted online in 2016 that proves there is no uniform approach that, in all games, would lead players to even an approximate Nash equilibrium. This is not to say that perfect players never tend toward equilibrium in games—they often do. It just means that there’s no reason to believe that just because a game is being played by perfect players, equilibrium will be achieved.

When we design a transportation network, we might hope that the players in the game, travelers each seeking the fastest way home, will collectively achieve an equilibrium where nothing is gained by taking a different route. We might hope that the invisible hand of John Nash will guide them so that their competing and cooperating interests—to take the shortest possible route yet avoid creating traffic jams—produce an equilibrium.

But our increasingly complex game of Rock-Paper-Scissors shows why such hopes may be misplaced. The invisible hand may guide some games, but others may resist its hold, trapping players in a never-ending competition for gains forever just out of reach.

Exercises

Suppose Player B plays the mixed strategy (1/2, 1/2, 0). What mixed strategy should A play to maximize wins in the long run?

Suppose Player B plays the mixed strategy (1/6, 2/6, 3/6). What mixed strategy should A play to maximize wins in the long run?


How might the dynamics of the game change if both players are awarded a point for a tie?
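For the first two exercises, a useful fact is that expected payoff is linear in your own strategy, so some best response is always a pure strategy. The sketch below (an illustration under the article’s +1/−1/0 scoring, demonstrated on the (1/2, 1/4, 1/4) strategy analyzed earlier rather than on the exercise inputs) scores each pure reply against a given mix:

```python
from fractions import Fraction

# Player A's payoffs, (Rock, Paper, Scissors) order for rows and columns.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
MOVES = ["Rock", "Paper", "Scissors"]

def pure_payoffs(b):
    """A's expected points per round for each pure strategy against mix b."""
    return [sum(b[j] * PAYOFF[i][j] for j in range(3)) for i in range(3)]

F = Fraction
# Check against the strategy from the article: B plays (1/2, 1/4, 1/4).
vals = pure_payoffs([F(1, 2), F(1, 4), F(1, 4)])
for move, v in zip(MOVES, vals):
    print(move, v)
# Paper does best here, earning 1/4 of a point per round.
```

Running `pure_payoffs` on the exercise strategies reveals the best pure replies; any mix of tied best replies does equally well.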



In the arid, sun-soaked northwest corner of Australia, along the Tropic of Capricorn, the oldest face of Earth is exposed to the sky. Drive through the northern outback for a while, south of Port Hedland on the coast, and you will come upon hills softened by time. They are part of a region called the Pilbara Craton, which formed about 3.5 billion years ago, when Earth was in its youth.


Look closer. From a seam in one of these hills, a jumble of ancient, orange-Creamsicle rock spills forth: a deposit called the Apex Chert. Within this rock, viewable only through a microscope, there are tiny tubes. Some look like petroglyphs depicting a tornado; others resemble flattened worms. They are among the most controversial rock samples ever collected on this planet, and they might represent some of the oldest forms of life ever found.

Last month, researchers lobbed another salvo in the decades-long debate about the nature of these forms. They are indeed fossil life, and they date to 3.465 billion years ago, according to John Valley, a geochemist at the University of Wisconsin. If Valley and his team are right, the fossils imply that life diversified remarkably early in the planet’s tumultuous youth.

The fossils add to a wave of discoveries that point to a new story of ancient Earth. In the past year, separate teams of researchers have dug up, pulverized and laser-blasted pieces of rock that may contain life dating to 3.7, 3.95 and maybe even 4.28 billion years ago. All of these microfossils—or the chemical evidence associated with them—are hotly debated. But they all cast doubt on the traditional tale.

As that story goes, in the half-billion years after it formed, Earth was hellish and hot. The infant world would have been rent by volcanism and bombarded by other planetary crumbs, making for an environment so horrible, and so inhospitable to life, that the geologic era is named the Hadean, for the Greek underworld. Not until a particularly violent asteroid barrage ended some 3.8 billion years ago could life have evolved.

But this story is increasingly under fire. Many geologists now think Earth may have been tepid and watery from the outset. The oldest rocks in the record suggest parts of the planet’s crust had cooled and solidified by 4.4 billion years ago. Oxygen isotopes in those ancient rocks suggest the planet had water as far back as 4.3 billion years ago. And instead of an epochal, final bombardment, meteorite strikes might have slowly tapered off as the solar system settled into its current configuration.

“Things were actually looking a lot more like the modern world, in some respects, early on. There was water, potentially some stable crust. It’s not completely out of the question that there would have been a habitable world and life of some kind,” said Elizabeth Bell, a geochemist at the University of California, Los Angeles.

Taken together, the latest evidence from the ancient Earth and from the moon is painting a picture of a very different Hadean Earth: a stoutly solid, temperate, meteorite-clear and watery world, an Eden from the very beginning.

Ancient Clues

About 4.54 billion years ago, Earth was forming out of dust and rocks left over from the sun’s birth. Smaller solar leftovers continually pelted baby Earth, heating it up and endowing it with radioactive materials, which further warmed it from within. Oceans of magma covered Earth’s surface. Back then, Earth was not so much a rocky planet as an incandescent ball of lava.

Not long after Earth coalesced, a wayward planet whacked into it with incredible force, possibly vaporizing Earth anew and forming the moon. The meteorite strikes continued, some excavating craters 1,000 kilometers across. In the standard paradigm of the Hadean eon, these strikes culminated in an assault dubbed the Late Heavy Bombardment, also known as the lunar cataclysm, in which asteroids migrated into the inner solar system and pounded the rocky planets. Throughout this early era, ending about 3.8 billion years ago, Earth was molten and couldn’t support a crust of solid rock, let alone life.

But starting around a decade ago, this story started to change, thanks largely to tiny crystals called zircons. The gems, which are often about the size of the period at the end of this sentence, told of a cooler, wetter and maybe livable world as far back as 4.3 billion years ago. In recent years, fossils in ancient rock bolstered the zircons’ story of calmer climes. The tornadic microfossils of the Pilbara Craton are the latest example.

Today, the oldest evidence for possible life—which many scientists doubt or outright reject—is at least 3.77 billion years old and may be a stunningly ancient 4.28 billion years old.

In March 2017, Dominic Papineau, a geochemist at University College London, and his student Matthew Dodd described tubelike fossils in an outcrop in Quebec that dates to the basement of Earth’s history. The formation, called the Nuvvuagittuq (noo-voo-wog-it-tuck) Greenstone Belt, is a fragment of Earth’s primitive ocean floor. The fossils, about half the width of a human hair and just half a millimeter long, were buried within. They are made from an iron oxide called hematite and may be fossilized cities built by microbial communities up to 4.28 billion years ago, Dodd said.

“They would have formed these gelatinous, rusty-red-colored mats on the rocks around the vents,” he said. Similar structures exist in today’s oceans, where communities of microbes and bloody-looking tube worms blossom around sunless, black-smoking chimneys.

Dodd found the tubes near graphite and with carbonate “rosettes,” tiny carbon rings that contain organic materials. The rosettes can form through various nonbiological processes, but Dodd also found a mineral called apatite, which he said is diagnostic of biological activity. The researchers also analyzed the variants, or isotopes, of carbon within the graphite. Generally, living things like to use the more lightweight isotopes, so an abundance of carbon 12 over carbon 13 can be used to infer past biological activity. The graphite near the rosettes also suggested the presence of life. Taken together, the tubes and their surrounding chemistry suggest they are remnants of a microbial community that lived near a deep-ocean hydrothermal vent, Dodd said.

Geologists debate the exact age of the rock belt where they were found, but they agree it includes one of the oldest, if not the oldest, iron formations on Earth. This suggests the fossils are that old, too—much older than anything found previously and much older than many scientists had thought possible.

Then in September 2017, researchers in Japan published an examination of graphite flakes from a 3.95-billion-year-old sedimentary rock called the Saglek Block in Labrador, Canada. Yuji Sano and Tsuyoshi Komiya of the University of Tokyo argued their graphite’s carbon-isotope ratio indicates it, too, was made by life. But the graphite flakes were not accompanied by any feature that looked like a fossil; what’s more, the history of the surrounding rock is murky, suggesting the carbon may be younger than it appears.

Farther to the east, in southwestern Greenland, another team had also found evidence of ancient life. In August 2016, Allen Nutman of the University of Wollongong in Australia and colleagues reported finding stromatolites, fossil remains of microbes, from 3.7 billion years ago.

Many geologists have been skeptical of each claim. Nutman’s fossils, for example, come from the Isua belt in southern Greenland, home to the oldest known sedimentary rocks on Earth. But the Isua belt is tough to interpret. Just as nonbiological processes can form Dodd’s carbon rosettes, basic chemistry can form plenty of layered structures without any help from life, suggesting they may not be stromatolites but lifeless pretenders.

In addition, both the Nuvvuagittuq Greenstone Belt and the Isua belt have been heated and squished over billions of years, a process that melts and recrystallizes the rocks, morphing them from their original sedimentary state.

“I don’t think any of those other studies are wrong, but I don’t think any of them are proof,” said Valley, the Wisconsin researcher. “All we can say is [Nutman’s rocks] look like stromatolites, and that’s very enticing.”

Regarding his work with the Pilbara Craton fossils, however, Valley is much less circumspect.

Signs of Life

The tornadic microfossils lay in the Pilbara Craton for 3.465 billion years before being separated from their natal rock, packed up in a box and shipped to California. Paleobiologist William Schopf of UCLA published his discovery of the strange squiggles in 1993 and identified 11 distinct microbial taxa in the samples. Critics said the forms could have been made by nonbiological processes, and geologists have argued back and forth in the years since. Last year, Schopf sent a sample to Valley, who is an expert with a super-sensitive instrument for measuring isotope ratios called a secondary ion mass spectrometer.

Valley’s team found that some of the apparent fossils had the same carbon-isotope ratio as modern photosynthetic bacteria. Three other types of fossils had the same ratios as methane-eating or methane-producing microbes. Moreover, the isotope ratios correlate to specific species that had already been identified by Schopf. The locations where these isotope ratios were measured corresponded to the shapes of the microfossils themselves, Valley said, adding they are the oldest samples that look like fossils both physically and chemically.

While they are not the oldest samples in the record—supposing you accept the provenance of the rocks described by Dodd, Komiya and Nutman—Schopf’s and Valley’s cyclonic miniatures do have an important distinction: They are diverse. The presence of so many different carbon isotope ratios suggests the rock represents a complex community of primitive organisms. The life-forms must have had time to evolve into endless iterations. This means they must have originated even earlier than 3.465 billion years ago. And that means our oldest ancestors are very, very old indeed.

Watery World

Fossils were not the first sign that early Earth might have been Edenic rather than hellish. The rocks themselves started providing that evidence as far back as 2001. That year, Valley found zircons that suggested the planet had a crust as far back as 4.4 billion years ago.

Zircons are crystalline minerals containing silicon, oxygen, zirconium and sometimes other elements. They form inside magma, and like some better-known carbon crystals, zircons are forever—they can outlast the rocks they form in and withstand eons of unspeakable pressure, erosion and deformation. As a result, they are the only minerals left over from the Hadean, making them invaluable time capsules.

Valley chipped some out of Western Australia’s Jack Hills and found oxygen isotopes that suggested the crystal formed from material that was altered by liquid water. This suggested part of Earth’s crust had cooled, solidified and harbored water at least 400 million years earlier than the earliest known sedimentary rocks. If there was liquid water, there were likely entire oceans, Valley said. Other zircons showed the same thing.

“The Hadean was not hell-like. That’s what we learned from the zircons. Sure, there were volcanoes, but they were probably surrounded by oceans. There would have been at least some dry land,” he said.

Zircons suggest there may even have been life.

In research published in 2015, Bell and her coauthors presented evidence for graphite embedded within a tiny, 4.1-billion-year-old zircon crystal from the same Jack Hills. The graphite’s blend of carbon isotopes hints at biological origins, although the finding is—once again—hotly debated.

“Are there other explanations than life? Yeah, there are,” Bell said. “But this is what I would consider the most secure evidence for some sort of fossil or biogenic structure.”

If the signals in the ancient rocks are true, they are telling us that life was everywhere, always. In almost every place scientists look, they are finding evidence of life and its chemistry, whether it is in the form of fossils themselves or the remnants of life’s long-ago stirrings. Far from fussy and delicate, life may have taken hold in the worst conditions imaginable.

“Life was managing to do interesting things at the same time Earth was dealing with the worst impacts it’s ever had,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado.

Or maybe not. Maybe Earth was just fine. Maybe those impacts weren’t quite as rapid-fire as everyone thought.

Evidence for a Beating

We know Earth, and everything else, was bombarded by asteroids in the past. The moon, Mars, Venus and Mercury all bear witness to this primordial pummeling. The question is when, and for how long.

Based largely on Apollo samples toted home by moonwalking astronauts, scientists came to believe that in the Earth’s Hadean age, there were at least two distinct epochs of solar system billiards. The first was the inevitable side effect of planet making: It took some time for the planets to sweep up the biggest asteroids and for Jupiter to gather the rest into the main asteroid belt.

The second came later. It began sometime between 500 and 700 million years after the solar system was born and finally tapered off around 3.8 billion years ago. That one is called the Late Heavy Bombardment, or the lunar cataclysm.

As with most things in geochemistry, evidence for a world-rending blitz, an event on the hugest scales imaginable, is derived from the very, very small. Isotopes of potassium and argon in Apollo samples suggested bits of the moon suddenly melted some 500 million years after it formed. This was taken as evidence that it was blasted within an inch of its life.

Zircons also provide tentative physical evidence of a late-era hellscape. Some zircons do contain “shocked” minerals, evidence for extreme heat and pressure that can be indicative of something horrendous. Many are younger than 3 billion years, but Bell found one zircon suggesting rapid, extreme heating around 3.9 billion years ago—a possible signature of the Late Heavy Bombardment. “All we know is there is a group of recrystallized zircons at this time period. Given the coincidence with the Late Heavy Bombardment, it was too hard not to say that maybe this is connected,” she said. “But to really establish that, we will need to look at zircon records at other localities around the planet.”

In 2016 Patrick Boehnke, now at the University of Chicago, took another look at those original Apollo samples, which for decades have been the main evidence in favor of the Late Heavy Bombardment. He and UCLA’s Mark Harrison reanalyzed the argon isotopes and concluded that the Apollo rocks may have been walloped many times since they crystallized from the natal moon, which could make the rocks seem younger than they really are.

“Even if you solve the analytical problems,” said Boehnke, “then you still have the problem that the Apollo samples are all right next to each other.” There’s a chance that astronauts from the six Apollo missions sampled rocks from a single asteroid strike whose ejecta spread throughout the Earth-facing side of our satellite.

In addition, moon-orbiting probes like the Gravity Recovery and Interior Laboratory (GRAIL) spacecraft and the Lunar Reconnaissance Orbiter have found around 100 previously unknown craters, along with evidence of a spike in impacts as early as 4.3 billion years ago.

“This interesting confluence of orbital data and sample data, and all different kinds of sample data—lunar impact glass, Luna samples, Apollo samples, lunar meteorites—they are all coming together and pointing to something that is not a cataclysmic spike at 3.9 billion years ago,” said Nicolle Zellner, a planetary scientist at Albion College in Michigan.

Bottke, who studies asteroids and solar system dynamics, is one of several researchers coming up with modified explanations. He now favors a slow uptick in bombardment, followed by a gradual decline. Others think there was no late bombardment, and instead the craters on the moon and other rocky bodies are remnants from the first type of billiards, the natural process of planet building.

“We have a tiny sliver of data, and we’re trying to do something with it,” he said. “You try to build a story, and sometimes you are just chasing ghosts.”

Life Takes Hold

While that debate plays out, scientists will be weighing much bigger questions than early solar-system dynamics.

If some of the new evidence truly represents impressions of primeval life, then our ancestors may be much older than we thought. Life might have arisen the moment the planet was amenable to it—the moment it cooled enough to hold liquid water.

“I was taught when I was young that it would take billions and billions of years for life to form. But I have not been able to find any basis for those sorts of statements,” said Valley. “I think it’s quite possible that life emerged within a few million years of when conditions became habitable. From the point of view of a microbe, a million years is a really long time, yet that’s a blink of an eye in geologic time.”

“There is no reason life could not have emerged at 4.3 billion years ago,” he added. “There is no reason.”

If there was no mass sterilization at 3.9 billion years ago, or if a few massive asteroid strikes confined the destruction to a single hemisphere, then Earth’s oldest ancestors may have been here from the haziest days of the planet’s own birth. And that, in turn, makes the notion of life elsewhere in the cosmos seem less implausible. Life might be able to withstand horrendous conditions much more readily than we thought. It might not need much time at all to take hold. It might arise early and often and may pepper the universe yet. Its endless forms, from tubemaking microbes to hunkering slime, may be too small or simple to communicate the way life does on Earth—but they would be no less real and no less alive.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


Not going to class isn’t typically something good to boast about. But perhaps the late Vladimir Voevodsky is the exception to the rule.

Voevodsky is credited with founding new fields of mathematics, such as motivic homotopy theory, and with creating a computer tool to help check mathematical proofs, as the New York Times explored in an obituary this week. The latter was a feat that other mathematicians didn’t dare approach, but Voevodsky’s effort has overwhelmingly benefited the field (and everyone, really) by allowing mathematicians to fact-check their work.

He died at age 51 on Sept. 30, at his home in Princeton, New Jersey, of unknown causes. He leaves behind his former wife, Nadia Shalaby, and their two daughters.

“His contributions are so fundamental that it’s impossible to imagine how things were thought of before him,” Chris Kapulkin, a former colleague at the University of Western Ontario, told the Times.

Among Voevodsky’s achievements was changing the meaning of the equal sign. In 2002, he won the Fields Medal for discovering the existence of a “mathematical wormhole” that allowed theoretical tools in one field of mathematics to be used in another field.

He wasn’t a top student in the traditional, rule-abiding sense. According to the Times, Voevodsky was kicked out of high school three times. He was also kicked out of Moscow University after failing academically. He later attended Harvard. Despite neglecting to attend lectures, he graduated in 1992.

He worked through it all, and all present and future mathematicians have him to thank.

Shocked by the untimely death of Vladimir Voevodsky, a great mathematician and a wonderful human. R.I.P. dear friend https://t.co/wtvbjxtNZ9

If mathematics isn't your strong suit, this equation that went viral in Japan may just trip you up. According to the YouTube channel MindYourDecisions, a study found that only 60 percent of individuals in their 20s could get the right answer. This is significantly lower than the 90 percent success rate in the 1980s.

To learn which common mistake people are making, check out the video below.

The latest news from the none-of-your-thoughts-are-original department comes from mathematics blogger Alex Bellos, who set out to determine the world's favorite number.

Bellos apparently doesn't have a favorite number himself, but as Nautilus writes, he began asking people about their favorite numbers a few years ago, and after he set up the Favourite Number website, more than 44,000 people voted for the numeral they liked best and explained why.

Here's what Bellos discovered.

The third-most popular number is 8 because, as some of Bellos's respondents wrote, "In Japan, eight is a lucky number," because the Japanese character for eight "means an opening to the future," because of "its symmetrical and round shape," and because "it has always given me a sense of friendliness and warmth (unlike, for example, 9 which looks bossy or 6 which appears to me a bit submissive)."

No. 2 on the list is the No. 3, because "It's curly, but not pretentious curly like eight," and because, in Chinese, "3 means alive."

But the world's favorite number is No. 7. As for why, Bellos explains that himself: it's basically because of our desire to be outliers when it comes to arithmetical patterns.

As for why numbers ending in 0 or 5 are unpopular, Bellos said it's because we use those numbers as approximations more than, say, 7 or 9.

"When we say 100, we don't usually mean exactly 100, we mean around 100," Bellos told Nautilus. "So 100 seems incredibly vague. Why would you have something as your favorite that is so vague?" It seems that we like our numbers to be somewhat unique, which may be why prime numbers are popular. They aren't divisible by any smaller numbers (aside from 1).

Bellos's research was revealed in 2014, but his conclusion has been bouncing all over the internet for the past few days. Because your favorite number is probably a topic that is endlessly fascinating.

Interestingly, the number 13 isn't as unpopular as you might think. It ranks as the sixth-most popular favorite number (but decidedly lower among people who have been hacked to pieces by Jason Voorhees).

As he was brushing his teeth on the morning of July 17, 2014, Thomas Royen, a little-known retired German statistician, suddenly lit upon the proof of a famous conjecture at the intersection of geometry, probability theory, and statistics that had eluded top experts for decades.


Known as the Gaussian correlation inequality (GCI), the conjecture originated in the 1950s, was posed in its most elegant form in 1972 and has held mathematicians in its thrall ever since. "I know of people who worked on it for 40 years," said Donald Richards, a statistician at Pennsylvania State University. "I myself worked on it for 30 years."

Royen hadn't given the Gaussian correlation inequality much thought before the raw idea for how to prove it came to him over the bathroom sink. Formerly an employee of a pharmaceutical company, he had moved on to a small technical university in Bingen, Germany, in 1985 in order to have more time to improve the statistical formulas that he and other industry statisticians used to make sense of drug-trial data. In July 2014, still at work on his formulas as a 67-year-old retiree, Royen found that the GCI could be extended into a statement about statistical distributions he had long specialized in. On the morning of the 17th, he saw how to calculate a key derivative for this extended GCI that unlocked the proof. "The evening of this day, my first draft of the proof was written," he said.

Not knowing LaTeX, the typesetting system of choice in mathematics, he typed up his calculations in Microsoft Word, and the following month he posted his paper to the academic preprint site arxiv.org. He also sent it to Richards, who had briefly circulated his own failed attempt at a proof of the GCI a year and a half earlier. "I got this article by email from him," Richards said. "And when I looked at it I knew instantly that it was solved."

"Upon seeing the proof, I really kicked myself," Richards said. Over the decades, he and other experts had been attacking the GCI with increasingly sophisticated mathematical methods, certain that bold new ideas in convex geometry, probability theory or analysis would be needed to prove it. Some mathematicians, after years of toiling in vain, had come to suspect the inequality was actually false. In the end, though, Royen's proof was short and simple, filling just a few pages and using only classic techniques. Richards was shocked that he and everyone else had missed it. "But on the other hand I have to also tell you that when I saw it, it was with relief," he said. "I remember thinking to myself that I was glad to have seen it before I died." He laughed. "Really, I was so glad I saw it."

Richards notified a few colleagues and even helped Royen retype his paper in LaTeX to make it appear more professional. But other experts whom Richards and Royen contacted seemed dismissive of his dramatic claim. False proofs of the GCI had been floated repeatedly over the decades, including two that had appeared on arxiv.org since 2010. Boaz Klartag of the Weizmann Institute of Science and Tel Aviv University recalls receiving the batch of three purported proofs, including Royen's, in an email from a colleague in 2015. When he checked one of them and found a mistake, he set the others aside for lack of time. For this reason and others, Royen's achievement went unrecognized.

Proofs of obscure provenance are sometimes overlooked at first, but usually not for long: A major paper like Royen's would normally get submitted and published somewhere like the Annals of Statistics, experts said, and then everybody would hear about it. But Royen, not having a career to advance, chose to skip the slow and often demanding peer-review process typical of top journals. He opted instead for quick publication in the Far East Journal of Theoretical Statistics, a periodical based in Allahabad, India, that was largely unknown to experts and which, on its website, rather suspiciously listed Royen as an editor. (He had agreed to join the editorial board the year before.)

With this red flag emblazoned on it, the proof continued to be ignored. Finally, in December 2015, the Polish mathematician Rafał Latała and his student Dariusz Matlak put out a paper advertising Royen's proof, reorganizing it in a way some people found easier to follow. Word is now getting around. Tilmann Gneiting, a statistician at the Heidelberg Institute for Theoretical Studies, just 65 miles from Bingen, said he was shocked to learn in July 2016, two years after the fact, that the GCI had been proved. The statistician Alan Izenman, of Temple University in Philadelphia, still hadn't heard about the proof when asked for comment last month.

No one is quite sure how, in the 21st century, news of Royen's proof managed to travel so slowly. "It was clearly a lack of communication in an age where it's very easy to communicate," Klartag said.

"But anyway, at least we found it," he added, "and it's beautiful."

In its most famous form, formulated in 1972, the GCI links probability and geometry: It places a lower bound on a player's odds in a game of darts, including hypothetical dart games in higher dimensions.

Imagine two convex shapes, such as a rectangle and a circle, centered on a point that serves as the target. Darts thrown at the target will land in a bell curve or Gaussian distribution of positions around the center point. The Gaussian correlation inequality says that the probability that a dart will land inside both the rectangle and the circle is always as high as or higher than the individual probability of its landing inside the rectangle multiplied by the individual probability of its landing in the circle. In plainer terms, because the two shapes overlap, striking one increases your chances of also striking the other. The same inequality was thought to hold for any two convex symmetrical shapes with any number of dimensions centered on a point.
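In symbols, the standard formulation of the inequality (the notation here is the usual one in the literature, not quoted from this article) says that for a centered Gaussian measure on n-dimensional space and two convex sets symmetric about the origin:

```latex
% Gaussian correlation inequality, standard formulation:
% \mu is a centered Gaussian measure on \mathbb{R}^n,
% K and L are convex sets symmetric about the origin.
\[
  \mu(K \cap L) \;\ge\; \mu(K)\,\mu(L)
\]
```

The dart game is the case n = 2, with K the rectangle and L the circle.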

Special cases of the GCI have been proved (in 1977, for instance, Loren Pitt of the University of Virginia established it as true for two-dimensional convex shapes), but the general case eluded all mathematicians who tried to prove it. Pitt had been trying since 1973, when he first heard about the inequality over lunch with colleagues at a meeting in Albuquerque, New Mexico. "Being an arrogant young mathematician, I was shocked that grown men who were putting themselves off as respectable math and science people didn't know the answer to this," he said. He locked himself in his motel room and was sure he would prove or disprove the conjecture before coming out. "Fifty years or so later I still didn't know the answer," he said.

Despite hundreds of pages of calculations leading nowhere, Pitt and other mathematicians felt certain (and took his 2-D proof as evidence) that the convex geometry framing of the GCI would lead to the general proof. "I had developed a conceptual way of thinking about this that perhaps I was overly wedded to," Pitt said. "And what Royen did was kind of diametrically opposed to what I had in mind."

Royen's proof harkened back to his roots in the pharmaceutical industry, and to the obscure origin of the Gaussian correlation inequality itself. Before it was a statement about convex symmetrical shapes, the GCI was conjectured in 1959 by the American statistician Olive Dunn as a formula for calculating simultaneous confidence intervals, or ranges that multiple variables are all estimated to fall in.

Suppose you want to estimate the weight and height ranges that 95 percent of a given population fall in, based on a sample of measurements. If you plot people's weights and heights on an x-y plot, the weights will form a Gaussian bell-curve distribution along the x-axis, and heights will form a bell curve along the y-axis. Together, the weights and heights follow a two-dimensional bell curve. You can then ask, what are the weight and height ranges (call them −w < x < w and −h < y < h) such that 95 percent of the population will fall inside the rectangle formed by these ranges?

If weight and height were independent, you could just calculate the individual odds of a given weight falling inside −w < x < w and a given height falling inside −h < y < h, then multiply them to get the odds that both conditions are satisfied. But weight and height are correlated. As with darts and overlapping shapes, if someone's weight lands in the normal range, that person is more likely to have a normal height. Dunn, generalizing an inequality posed three years earlier, conjectured the following: The probability that both Gaussian random variables will simultaneously fall inside the rectangular region is always greater than or equal to the product of the individual probabilities of each variable falling in its own specified range. (This can be generalized to any number of variables.) If the variables are independent, then the joint probability equals the product of the individual probabilities. But any correlation between the variables causes the joint probability to increase.
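Dunn's two-variable claim is easy to check numerically. The sketch below is not from the article; the function name, parameter values, and correlation construction are illustrative choices. It simulates a pair of standard Gaussians with a chosen correlation and compares the joint probability of both landing in their ranges against the product of the marginal probabilities:

```python
import math
import random

def simulate(rho, w=1.0, h=1.0, n=100_000, seed=42):
    """Monte Carlo estimate of P(|X| < w and |Y| < h) versus
    P(|X| < w) * P(|Y| < h), where X and Y are standard Gaussians
    with correlation rho."""
    rng = random.Random(seed)
    in_x = in_y = in_both = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        # Y = rho*X + sqrt(1 - rho^2)*Z has correlation rho with X
        y = rho * x + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        if abs(x) < w:
            in_x += 1
        if abs(y) < h:
            in_y += 1
            if abs(x) < w:
                in_both += 1
    joint = in_both / n
    product = (in_x / n) * (in_y / n)
    return joint, product

# With correlated variables, the joint probability beats the product.
joint, product = simulate(rho=0.8)
print(joint > product)  # True
```

With rho set to 0 the two estimates coincide up to sampling noise, matching the independent case; any positive correlation pushes the joint probability above the product, exactly as the conjecture asserts.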

Royen found that he could generalize the GCI to apply not just to Gaussian distributions of random variables but to more general statistical spreads related to the squares of Gaussian distributions, called gamma distributions, which are used in certain statistical tests. "In mathematics, it occurs frequently that a seemingly difficult special problem can be solved by answering a more general question," he said.

Royen represented the amount of correlation between variables in his generalized GCI by a factor we might call C, and he defined a new function whose value depends on C. When C = 0 (corresponding to independent variables like weight and eye color), the function equals the product of the separate probabilities. When you crank up the correlation to the maximum, C = 1, the function equals the joint probability. To prove that the latter is bigger than the former and the GCI is true, Royen needed to show that his function always increases as C increases. And it does so if its derivative, or rate of change, with respect to C is always positive.

His familiarity with gamma distributions sparked his bathroom-sink epiphany. He knew he could apply a classic trick to transform his function into a simpler function. Suddenly, he recognized that the derivative of this transformed function was equivalent to the transform of the derivative of the original function. He could easily show that the latter derivative was always positive, proving the GCI. "He had formulas that enabled him to pull off his magic," Pitt said. "And I didn't have the formulas."

Any graduate student in statistics could follow the arguments, experts say. Royen said he hopes the surprisingly simple proof might encourage young students to use their own creativity to find new mathematical theorems, since a very high theoretical level is not always required.

Some researchers, however, still want a geometric proof of the GCI, which would help explain strange new facts in convex geometry that are only de facto implied by Royen's analytic proof. In particular, Pitt said, the GCI defines an interesting relationship between vectors on the surfaces of overlapping convex shapes, which could blossom into a new subdomain of convex geometry. "At least now we know it's true," he said of the vector relationship. "But if someone could see their way through this geometry we'd understand a class of problems in a way that we just don't today."

Beyond the GCI's geometric implications, Richards said a variation on the inequality could help statisticians better predict the ranges in which variables like stock prices fluctuate over time. In probability theory, the GCI proof now permits exact calculations of rates that arise in "small-ball" probabilities, which are related to the random paths of particles moving in a fluid. Richards says he has conjectured a few inequalities that extend the GCI, and which he might now try to prove using Royen's approach.

Royen's main interest is in improving the practical computation of the formulas used in many statistical tests: for instance, for determining whether a drug causes fatigue based on measurements of several variables, such as patients' reaction time and body sway. He said that his extended GCI does indeed sharpen these tools of his old trade, and that some of his other recent work related to the GCI has offered further improvements. As for the proof's muted reception, Royen wasn't particularly disappointed or surprised. "I am used to being frequently ignored by scientists from [top-tier] German universities," he wrote in an email. "I am not so talented for networking and many contacts. I do not need these things for the quality of my life."

The feeling of deep joy and gratitude that comes from finding an important proof has been reward enough. "It is like a kind of grace," he said. "We can work for a long time on a problem and suddenly an angel, [which] stands here poetically for the mysteries of our neurons, brings a good idea."



A new study found that young U.S. girls are less likely than boys to believe their own gender is the most brilliant.

While all 5-year-olds tended to believe that members of their own gender were geniuses, by age 6 that preference had diminished for girls, a difference the researchers attributed to the influence of gender stereotypes.

“We found it surprising, and also very heartbreaking, that even kids at such a young age have learned these stereotypes,” said Lin Bian, the study’s co-author and a doctoral candidate at the University of Illinois at Urbana-Champaign.

“It’s possible that in the long run, the stereotypes will push young women away from the jobs that are perceived as requiring brilliance, like being a scientist or an engineer,” she told Mashable.

A growing field

The study, published Thursday in the journal Science, builds on a growing body of research that suggests gender stereotypes can shape children’s interest and career ambitions at a young age.

A global study by the Organization for Economic Cooperation and Development found that girls “lack self-confidence” in their ability to solve math and science problems and thus score worse than they would otherwise, which discourages them from pursuing science, engineering, technology and mathematics (STEM) fields.

A 2016 study suggested a “masculine culture” in computer science and engineering makes girls feel like they don’t belong.

Thursday’s research looks not at specific skills but at the broader concept of high-level intellectual abilities. In short, can girls be geniuses, too?

Sapna Cheryan, a psychology professor at the University of Washington who was not involved in the study, said the results were “super important” because they’re among the first to show us how young children, not adults or high-schoolers, respond to gender stereotypes.

But she said the findings are just as revealing for young boys as for girls.

“It’s not that girls are underestimating their own gender; it’s that boys are overestimating themselves,” she told Mashable. Cheryan was the lead author of last year’s masculine culture study.

“What we want as a society is for people to say boys and girls are equal,” she added.

Stereotyping starts early

Andrei Cimpian, a co-author of Thursday’s study, said his earlier research with adults showed that the fields people associate with requiring a high level of smarts also tend to be overwhelmingly represented by men.

“Across the board, the more that people in a field believe you need to be brilliant, the fewer women you see in the field,” Cimpian, an associate professor of psychology at New York University, told Mashable.

This same idea burrows itself into our brains as children, the study suggests.

Researchers worked with 400 children ages 5, 6 and 7 in a series of four experiments for the new study. (Not every child participated in every experiment for the study.)

In the first experiment, the psychologists wanted to see whether children associate being “really, really smart” with men more than with women.

To answer that question, a researcher told each child an elaborate story about a person who was brilliant and quick to solve problems, without hinting at all at the person’s gender. Next, the children looked at a series of pictures of men and women and were asked to guess who from the line-up was the character in the story.

During a series of similar questions, researchers kept track of how often children chose members of their own gender as being brilliant.

Among 5-year-olds, boys picked boys a majority of the time, while girls picked girls.

“This is the heyday of the ‘cooties’ stage,” Cimpian said. “It’s consistent with what we know about in-group biases in this young age group.”

But among 6- and 7-year-olds, a divide emerged. Girls were significantly less likely to rate women as super smart than boys were to pick members of their own gender.

The age groups were similarly split in a second prompt. Researchers asked kids to pick from activities described as either suited for brilliant kids, or kids who try really hard.

Five-year-old boys and girls both showed interest in the smart-kid activities. But by age 6, girls expressed more interest in the games for hard workers, while boys kept on with the “brilliant” games.

Why is this happening?

Researchers said it’s not entirely clear how these stereotypes form. Certainly marketing towards children (lab sets are for boys, dollhouses are for girls) plays a role.

And history books are filled with the achievements of white men who, generally speaking, did not face the same systemic discrimination that kept women and people of color out of classrooms and laboratories.

Cimpian and Bian said they are planning a larger, longer-term study to explore how these stereotypes form and stick, and how we can correct them.

In the meantime, they suggested a few ways that parents and teachers of young children could work to dispel the biased idea that men are inherently more prone to brilliance than women.

Bian noted that previous research has shown that girls respond better to what psychologists call a “growth mindset”: the idea that studying, learning and making an effort are the key ingredients for success, not a stroke of genetic luck.

“We should recommend the importance of hard work, as opposed to brilliance,” she said.

Sharing and touting the achievements of women can also help counter the stereotypes that genius is reserved for men. Cimpian cited the book and movie Hidden Figures, about the women scientists who helped NASA astronauts get to space for the first time, as a prime example.

Cheryan, the UW psychologist, said including young boys in such efforts is critical.

“There’s a societal message that if there’s a gender gap, it’s the girls we need to fix,” she said. “We have to be careful with that message, because it just reinforces the similar hierarchy that the boys are always doing the right thing. In reality, there’s probably things that could happen on both sides.”
