Quora can be an awesome resource, a place where rational, intelligent people come together for in-depth discussions about important and stimulating topics. However, like anywhere else on the web, it has its fair share of pretentious and arrogant contributors who can’t seem to get over themselves, like this “budding mathematician,” for example.

The poor woman, obviously doing the superior science, found herself stuck with a boyfriend studying something as lowly as psychology, the science of the human mind and behavior. “I’m planning to be a mathematician and I can’t take his interest seriously,” she wailed. “It’s a joke compared to mine. We have chemistry but his profession/interest in that pop junk is annoying. I prefer intellectual discussions not junk talk.”

Now, whether this was a genuine post or just a wind-up trying to get a reaction is open to question, but get a reaction it did, in the form of a glorious put-down. Using pinpoint logic to defend the complexity and importance of psychology in relation to mathematics, the response eloquently describes the OP’s shortcomings, while suggesting that her psychologist boyfriend might at least find her a useful subject for study.

She will no doubt learn, with time, to respect others’ choices and passions in life, even if they are different from her own. But for now, such a superior attitude is probably not going to stand this student in good stead for the duration of her studies, or her love life for that matter. Scroll down to read how it unfolded for yourself, and to see how others reacted to the post. What do you think? Let us know in the comments!

A “budding mathematician” took to Quora to share her embarrassment at her boyfriend’s choice of study, psychology

Paul Erdős, the famously eccentric, peripatetic and prolific 20th-century mathematician, was fond of the idea that God has a celestial volume containing the perfect proof of every mathematical theorem. “This one is from The Book,” he would declare when he wanted to bestow his highest praise on a beautiful proof.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Never mind that Erdős doubted God’s very existence. “You don’t have to believe in God, but you should believe in The Book,” Erdős explained to other mathematicians.

In 1994, during conversations with Erdős at the Oberwolfach Research Institute for Mathematics in Germany, the mathematician Martin Aigner came up with an idea: Why not actually try to make God’s Book—or at least an earthly shadow of it? Aigner enlisted fellow mathematician Günter Ziegler, and the two started collecting examples of exceptionally beautiful proofs, with enthusiastic contributions from Erdős himself. The resulting volume, Proofs From THE BOOK, was published in 1998, sadly too late for Erdős to see it—he had died about two years after the project commenced, at age 83.

“Many of the proofs trace directly back to him, or were initiated by his supreme insight in asking the right question or in making the right conjecture,” Aigner and Ziegler, who are now both professors at the Free University of Berlin, write in the preface.

Whether the proof is understandable and beautiful depends not only on the proof but also on the reader.

The book, which has been called “a glimpse of mathematical heaven,” presents proofs of dozens of theorems from number theory, geometry, analysis, combinatorics and graph theory. Over the two decades since it first appeared, it has gone through five editions, each with new proofs added, and has been translated into 13 languages.

In January, Ziegler traveled to San Diego for the Joint Mathematics Meetings, where he received (on his and Aigner’s behalf) the 2018 Steele Prize for Mathematical Exposition. “The density of elegant ideas per page [in the book] is extraordinarily high,” the prize citation reads.

Quanta Magazine sat down with Ziegler at the meeting to discuss beautiful (and ugly) mathematics. The interview has been edited and condensed for clarity.

You’ve said that you and Martin Aigner have a similar sense of which proofs are worthy of inclusion in THE BOOK. What goes into your aesthetic?

We’ve always shied away from trying to define what is a perfect proof. And I think that’s not only shyness, but actually, there is no definition and no uniform criterion. Of course, there are all these components of a beautiful proof. It can’t be too long; it has to be clear; there has to be a special idea; it might connect things that usually one wouldn’t think of as having any connection.

For some theorems, there are different perfect proofs for different types of readers. I mean, what is a proof? A proof, in the end, is something that convinces the reader of things being true. And whether the proof is understandable and beautiful depends not only on the proof but also on the reader: What do you know? What do you like? What do you find obvious?

You noted in the fifth edition that mathematicians have come up with at least 196 different proofs of the “quadratic reciprocity” theorem (concerning which numbers in “clock” arithmetics are perfect squares) and nearly 100 proofs of the fundamental theorem of algebra (concerning solutions to polynomial equations). Why do you think mathematicians keep devising new proofs for certain theorems, when they already know the theorems are true?

These are things that are central in mathematics, so it’s important to understand them from many different angles. There are theorems that have several genuinely different proofs, and each proof tells you something different about the theorem and the structures. So, it’s really valuable to explore these proofs to understand how you can go beyond the original statement of the theorem.

An example comes to mind—which is not in our book but is very fundamental—Steinitz’s theorem for polyhedra. This says that if you have a planar graph (a network of vertices and edges in the plane) that stays connected if you remove one or two vertices, then there is a convex polyhedron that has exactly the same connectivity pattern. This is a theorem that has three entirely different types of proof—the “Steinitz-type” proof, the “rubber band” proof and the “circle packing” proof. And each of these three has variations.

Any of the Steinitz-type proofs will tell you not only that there is a polyhedron but also that there’s a polyhedron with integers for the coordinates of the vertices. And the circle packing proof tells you that there’s a polyhedron that has all its edges tangent to a sphere. You don’t get that from the Steinitz-type proof, or the other way around—the circle packing proof will not prove that you can do it with integer coordinates. So, having several proofs leads you to several ways to understand the situation beyond the original basic theorem.

You’ve mentioned the element of surprise as one feature you look for in a BOOK proof. And some great proofs do leave one wondering, “How did anyone ever come up with this?” But there are other proofs that have a feeling of inevitability. I think it always depends on what you know and where you come from.

An example is László Lovász’s proof for the Kneser conjecture, which I think we put in the fourth edition. The Kneser conjecture was about a certain type of graph you can construct from the k-element subsets of an n-element set—you construct this graph where the k-element subsets are the vertices, and two k-element sets are connected by an edge if they don’t have any elements in common. And Kneser had asked, in 1955 or ’56, how many colors are required to color all the vertices if vertices that are connected must be different colors.

A proof that eats more than 10 pages cannot be a proof for our book. God—if he exists—has more patience.

It’s rather easy to show that you can color this graph with n – 2k + 2 colors, but the problem was to show that fewer colors won’t do it. And so, it’s a graph coloring problem, but Lovász, in 1978, gave a proof that was a technical tour de force, that used a topological theorem, the Borsuk-Ulam theorem. And it was an amazing surprise—why should this topological tool prove a graph theoretic thing?
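For small cases, the objects Ziegler describes can be checked directly. Here is a minimal Python sketch (illustrative only, not from the interview) that builds the Kneser graph K(n, k) and brute-forces its chromatic number; for K(5, 2), which is the Petersen graph, Lovász’s theorem predicts n − 2k + 2 = 3 colors:

```python
from itertools import combinations, product

def kneser_graph(n, k):
    """Vertices: k-element subsets of {0..n-1}; edges join disjoint subsets."""
    verts = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(u, v) for u, v in combinations(verts, 2) if not (u & v)]
    return verts, edges

def chromatic_number(verts, edges):
    """Smallest number of colors admitting a proper coloring (brute force,
    feasible only for very small graphs)."""
    for ncolors in range(1, len(verts) + 1):
        for coloring in product(range(ncolors), repeat=len(verts)):
            col = dict(zip(verts, coloring))
            if all(col[u] != col[v] for u, v in edges):
                return ncolors
    return len(verts)

verts, edges = kneser_graph(5, 2)   # the Petersen graph: 10 vertices, 15 edges
print(chromatic_number(verts, edges))  # -> 3, matching n - 2k + 2
```

Brute force is hopeless beyond toy sizes, which is part of why Lovász’s general topological argument was such a surprise.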

This turned into a whole industry of using topological tools to prove discrete mathematics theorems. And now it seems inevitable that you use these, and very natural and straightforward. It’s become routine, in a certain sense. But then, I think, it’s still valuable not to forget the original surprise.

Brevity is one of your other criteria for a BOOK proof. Could there be a hundred-page proof in God’s Book?

I think there could be, but no human will ever find it.

We have these results from logic that say that there are theorems that are true and that have a proof, but they don’t have a short proof. It’s a logic statement. And so, why shouldn’t there be a proof in God’s Book that goes over a hundred pages, and on each of these hundred pages, makes a brilliant new observation—and so, in that sense, it’s really a proof from The Book?

On the other hand, we are always happy if we manage to prove something with one surprising idea, and proofs with two surprising ideas are even more magical but still harder to find. So a proof that is a hundred pages long and has a hundred surprising ideas—how should a human ever find it?

But I don’t know how the experts judge Andrew Wiles’ proof of Fermat’s Last Theorem. This is a hundred pages, or many hundred pages, depending on how much number theory you assume when you start. And my understanding is that there are lots of beautiful observations and ideas in there. Perhaps Wiles’ proof, with a few simplifications, is God’s proof for Fermat’s Last Theorem.

But it’s not a proof for the readers of our book, because it’s just beyond the scope, both in technical difficulty and layers of theory. By definition, a proof that eats more than 10 pages cannot be a proof for our book. God—if he exists—has more patience.

Paul Erdős has been called a “priest of mathematics.” He traveled across the globe—often with no settled address—to spread the gospel of mathematics, so to speak. And he used these religious metaphors to talk about mathematical beauty.

Paul Erdős referred to his own lectures as “preaching.” But he was an atheist. He called God the “Supreme Fascist.” I think it was more important to him to be funny and to tell stories—he didn’t preach anything religious. So, this story of God and his book was part of his storytelling routine.

When you experience a beautiful proof, does it feel somehow spiritual?

The ugly proofs have their role.

It’s a powerful feeling. I remember these moments of beauty and excitement. And there’s a very powerful type of happiness that comes from it.

If I were a religious person, I would thank God for all this inspiration that I’m blessed to experience. As I’m not religious, for me, this God’s Book thing is a powerful story.

There’s a famous quote from the mathematician G. H. Hardy that says, “There is no permanent place in the world for ugly mathematics.” But ugly mathematics still has a role, right?

You know, the first step is to establish the theorem, so that you can say, “I worked hard. I got the proof. It’s 20 pages. It’s ugly. It’s lots of calculations, but it’s correct and it’s complete and I’m proud of it.”

If the result is interesting, then come the people who simplify it and put in extra ideas and make it more and more elegant and beautiful. And in the end you have, in some sense, the Book proof.

If you look at Lovász’s proof for the Kneser conjecture, people don’t read his paper anymore. It’s rather ugly, because Lovász didn’t know the topological tools at the time, so he had to reinvent a lot of things and put them together. And immediately after that, Imre Bárány had a second proof, which also used the Borsuk-Ulam theorem, and that was, I think, more elegant and more straightforward.

To do these short and surprising proofs, you need a lot of confidence. And one way to get the confidence is if you know the thing is true. If you know that something is true because so-and-so proved it, then you might also dare to say, “What would be the really nice and short and elegant way to establish this?” So, I think, in that sense, the ugly proofs have their role.

You’re currently preparing a sixth edition of Proofs From THE BOOK. Will there be more after that?

The third edition was perhaps the first time that we claimed that that’s it, that’s the final one. And, of course, we also claimed this in the preface of the fifth edition, but we’re currently working hard to finish the sixth edition.

When Martin Aigner talked to me about this plan to do the book, the idea was that this might be a nice project, and we’d get done with it, and that’s it. And with, I don’t know how you translate it into English, jugendlicher Leichtsinn—that’s sort of the foolery of someone being young—you think you can just do this book and then it’s done.

But it’s kept us busy from 1994 until now, with new editions and translations. Now Martin has retired, and I’ve just applied to be university president, and I think there will not be time and energy and opportunity to do more. The sixth edition will be the final one.



Pi Day is celebrated every year on March 14—or 3.14—as an homage to the mathematical constant, which is approximately equal to 3.14159.

This year Google decided to mark the 30th occurrence of this event with a nod to “the number’s delicious sounding name.” The Doodle represents Pi’s mathematical definition—the ratio of a circle’s circumference to its diameter—in pie form.

Pi Day was first recognised 30 years ago, in 1988, by physicist Larry Shaw, according to Google. And those wishing to mark the occasion often do so by enjoying a slice of their favourite pie.

Of course, the pie featured in the Doodle isn’t any old pie. Google got Dominique Ansel—the chap who invented the Cronut—to make the Pi pie.

Oh, and if your mouth is positively salivating at this bit of nerdery, Google’s got you covered. Ansel shared the recipe for the Pi pie.

The morning John Kennedy was set to testify last December, he woke up at 1:30 am, in an unfamiliar hotel room in Harrisburg, Pennsylvania, adrenaline coursing through his veins. He'd never gone to court before for anything serious, much less taken the stand.

Some time after sunrise, he headed to the courthouse, dressed in a gray Brooks Brothers suit, and spent the next several hours reviewing his notes and frantically pacing the halls. “I think I made a groove in the floor,” Kennedy says.

By 3:30 pm, it was finally time. Kennedy’s answers started off slowly, as he worked to steady his nerves. Then, about an hour into his testimony, Exhibit 81 flashed on a screen inside the courtroom. It was a map of part of Pennsylvania’s seventh congressional district, but it might as well have been a chalk outline of a body.

“It was like a crime scene,” explains Daniel Jacobson, an attorney for Arnold & Porter, which represented the League of Women Voters in its bid to overturn Pennsylvania’s 2011 electoral map, drawn by the state’s majority Republican General Assembly. The edges of the district skitter in all manner of unnatural directions, drawing comparisons to a sketch of Goofy kicking Donald Duck.

As an expert witness for the League of Women Voters and a political scientist at West Chester University, Kennedy’s job was to show how the state’s map had evolved over time, and to prove that the General Assembly had drawn it specifically to ensure that Republicans would always win the most seats in Congress.

“Mr. Kennedy, what is this?” asked John Freedman, Jacobson’s colleague, referring to the tiny, single point that connects one sprawling side of the district to the other. Or, if you like, where Goofy’s toe meets Donald’s rear.

“A steakhouse,” Kennedy answered, according to the court transcript. “Creed's Seafood Steaks in King of Prussia.”

The only thing holding the district together, in other words, was a single ritzy seafood joint.

“If you were in the courtroom, it was just devastating,” Jacobson says.

Districts like Pennsylvania’s seventh don’t get drawn that way by accident. They’re designed by dint of the centuries-old practice of gerrymandering, in which the party in power carves up the electoral map to their favor. The playbook is simple: Concentrate as many of your opponents’ votes into a handful of districts as you can, a tactic known as "packing." Then spread the remainder of those votes thinly across a whole lot of districts, known as “cracking.” If it works as intended, the opposition will win a few districts by a landslide, but never have enough votes in the rest to win the majority of seats. The age of computer-generated data splicing has made this strategy easier than ever.

Pennsylvania’s map had been so aggressively gerrymandered for partisan purposes that it silenced the voices of Democratic voters in the state.

Until recently, courts have only moved to stop gerrymandering based on race. But now, the law is taking a closer look at partisan gerrymandering, too. On Monday, the Pennsylvania Supreme Court issued a brand-new congressional map to replace the one Kennedy testified about. The new map follows a landmark decision last month, in which the Pennsylvania Supreme Court overruled a lower-court decision and found that Pennsylvania’s 2011 map did in fact violate the state constitution’s guarantee of “free and equal elections.” The court ordered the Pennsylvania General Assembly to submit a new map for approval by Pennsylvania’s Democratic governor, Tom Wolf. Following unsuccessful appeals by the General Assembly, the court drafted and approved its own map, which will now be in effect for the midterm elections in November, opening up a new field of opportunity for Democrats in the state.

On Tuesday morning, President Trump urged Republicans in the state to "challenge the new 'pushed' Congressional Map, all the way to the Supreme Court, if necessary. Your Original was correct!"

According to Jacobson, given that the Supreme Court of the United States has already declined to stay the Pennsylvania Supreme Court's decision, it's unlikely to take up the case. The court has already agreed to hear four other gerrymandering cases this term, which may well rewrite the rules on this twisted system nationwide.

The change that's already come to Pennsylvania may not have been possible without the research Kennedy and three other expert witnesses brought to light. They took the stand with a range of analyses, some based in complex quantitative theory, others, like Kennedy’s, based in pure cartography. But they all reached the same conclusion: Pennsylvania’s map had been so aggressively gerrymandered for partisan purposes that it silenced the voices of Democratic voters in the state. Here's how each came to that conclusion—and managed to convince the court.

The Only Bad Restaurant in Town

Carnegie Mellon mathematician Wes Pegden had already written an academic paper proving that the Pennsylvania map was drawn with partisan intent. His challenge in the courtroom was to convince a room full of non-mathematicians. So he came armed with an analogy.

Imagine, Pegden told the court, you’ve touched down in a new city and asked your taxi driver to drop you at any restaurant, something that would give you a sense of the local culinary scene. You give the cabbie a fat tip, go inside the restaurant, and have a terrible meal. Did the driver bring you to a bad restaurant on purpose? Or is it a true reflection of all of the restaurants in the city?

To answer that question, you could always sample every single restaurant, but that would take too long. A more efficient, but still effective option: test every restaurant immediately surrounding the bad one. If they're all bad, the driver really did pick a representative dining establishment. If they’re all really good? The driver screwed you over.

That's essentially how Pegden tested the Pennsylvania map. He developed a computer program that begins with the current Pennsylvania map, then, instead of drawing an entirely new map from scratch, it automatically makes tiny changes to the existing one to create 1 trillion slightly different maps. In the analogy, these trillion maps are the nearby restaurants. The system only draws districts that a court might accept, meaning they’re contiguous, reasonably shaped, and have similar population sizes, among other things.

'It is really one of the most extreme partisan gerrymanders in modern American history.'

Christopher Warshaw, George Washington University

Then, Pegden analyzed the partisan slant of each new map compared to the original, using a well-known metric called the median versus mean test. In this case, Pegden compared the Republican vote share in each of Pennsylvania's 18 districts. For each map, he calculated the difference between the median vote share across all the districts and the mean vote share across all of the districts. The bigger the difference, the more of an advantage the Republicans had in that map.
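The median-versus-mean test Pegden used is simple to compute. A short sketch, with hypothetical vote shares chosen for illustration (not Pennsylvania’s actual data):

```python
from statistics import mean, median

def median_mean_gap(rep_shares):
    """Median-versus-mean test on per-district Republican vote shares.
    A value near zero suggests a symmetric map; a larger positive value
    means Republicans carry most districts with less than their
    proportional share of the vote, suggesting a partisan skew."""
    return median(rep_shares) - mean(rep_shares)

# Opponents packed into two districts; Republicans win the other four narrowly.
packed = [0.20, 0.20, 0.55, 0.56, 0.57, 0.58]
print(round(median_mean_gap(packed), 3))  # -> 0.112
```

The larger the gap, the more the map rewards one party with seats it could not earn under a symmetric districting.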

After conducting his trillion simulations, Pegden found that the 2011 Pennsylvania map exhibited more partisan bias than 99.999999 percent of maps he tested. In other words, making even the tiniest changes in almost any direction to the existing map chiseled away at the Republican advantage.

“You can almost hear the mapmakers saying, ‘No don’t do that. I wanted that right there just like that,’” Pegden says. “It gets at the basic question of what citizens, judges, and courts want to know: Did these people go into a room and design these maps to suit their purposes?”

Until now, researchers have struggled to find truly random maps to compare to gerrymandered maps; the number of possible maps is so astronomically high, it’s impossible to try them all. But Pegden’s theorem proves you don’t have to try every restaurant in town to know you got a raw deal. You just need to take a walk around the block.

The Bright Red Dot

Unlike Kennedy and Pegden, Jowei Chen was no witness-stand novice. The political scientist at the University of Michigan, Ann Arbor has provided expert testimony in a litany of redistricting cases, including in North Carolina, where judges relied heavily on Chen's testimony in their decision to overturn the existing map.

Like Pegden, Chen uses computer programs to simulate alternative maps. But instead of starting with the original map and making small changes, Chen’s program develops entirely new maps, based on a series of geographic constraints. The maps should be compact in shape, preserve county and municipal boundaries, and have equal populations. They’re drawn, in other words, in some magical world where partisanship doesn’t exist. The only goal, says Chen, is that these maps be “geographically normal.”

Chen generated 500 such maps for Pennsylvania, and analyzed each of them based on how many Republican seats they would yield. He also looked at how many counties and municipalities were split across districts, a practice the Pennsylvania constitution forbids "unless absolutely necessary." Keeping counties and municipalities together, the thinking goes, keeps communities together. He compared those figures to the disputed map, and presented the results to the court.

The following chart shows how many seats the simulated maps and the disputed map generated for Republicans.

Most of the maps gave Republicans nine seats. Just two percent gave them 10 seats. None even came close to the disputed map, which gives Republicans a whopping 13 seats.

The chart showing the number of split municipalities and counties paints a similarly compelling picture.

Chen used two other metrics to measure the disputed map’s compactness relative to the simulations. The first, called the Reock score, analyzes the ratio of the district’s area to the area of the smallest circle that can be drawn to completely contain it. A district that’s a perfect circle, in other words, would have a Reock score of one. The more distorted the district’s shape gets, the lower the score.

Chen also put the map up to the so-called Polsby-Popper test, which is the ratio of the district’s area to the area of a circle whose circumference is the same length as the district’s perimeter. Again, the lower the number, the less compact the district.
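Of the two scores, Polsby-Popper is the easier to sketch, since it needs only a district’s area and perimeter (the Reock score additionally requires computing the smallest enclosing circle). A minimal illustration, with made-up shapes rather than real district geometry:

```python
import math

def polsby_popper(area, perimeter):
    """Ratio of the district's area to the area of a circle whose
    circumference equals the district's perimeter: 4*pi*A / P**2.
    A perfect circle scores 1; contorted shapes score near 0."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 1 (area pi, perimeter 2*pi) is maximally compact.
print(round(polsby_popper(math.pi, 2 * math.pi), 3))  # -> 1.0

# A long, thin 10 x 0.5 rectangle (area 5, perimeter 21) is far less so.
print(round(polsby_popper(5.0, 21.0), 3))  # -> 0.142
```

Real analyses compute the area and perimeter from the district’s boundary polygon; the score itself is just this one ratio.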

Here’s how the disputed map fared on both tests against the simulations:

Chen conducted another simulation with an additional 500 maps, this time, requiring that none of them pit two incumbents against each other. The goal was to see if the General Assembly drew the original map this way not based on partisanship, but based on protecting incumbents. But the results were largely the same. On every metric, the disputed map was an outlier.

“These charts are what really resonated with the Pennsylvania Supreme Court justices,” says Jacobson. “You see 500 black dots. Then you see the actual plan. It’s way out in nowhere land.”

The results, Chen says, complemented Pegden's evidence perfectly. “It’s not a question of whose metrics and methods do you like better,” he says. “The point is: Here’s a diversity of methods, and they are leading us to the same answer. Maybe that tells us something.”

Political Silencing

Another question before the court was whether the partisan map actually impacted representation in Congress. After all, just because most Pennsylvania representatives are Republicans doesn't mean they'll always vote with Republicans. But Christopher Warshaw, a political scientist at George Washington University, showed mathematically that the Republican advantage also meant that the state's Democrats had little chance of having their voices heard in DC.

To assess the map’s partisan nature, Warshaw used a metric called the efficiency gap, which researchers at the University of Chicago Law School and the Public Policy Institute of California devised in 2015. It measures the number of votes that each party “wastes” in a given election to gauge how packed and cracked its districts are. Every vote a party gets in a district that it loses counts as wasted. In districts the party takes, any vote over the total needed to win is considered waste as well.

“You want to get as many seats in a legislature with as few votes as possible,” Warshaw explains. “You want to get zero votes in the districts you lose.”

To determine Pennsylvania's efficiency gap, Warshaw calculated the difference between each party’s wasted votes and divided it by the number of total votes cast in the election. He found that the 2011 map not only gave Republicans a bigger advantage in Pennsylvania than they had before redistricting; it gave them an advantage like few the country has ever seen. “It is really one of the most extreme partisan gerrymanders in modern American history,” Warshaw says.
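The wasted-vote arithmetic Warshaw describes can be sketched in a few lines. The district totals below are invented for illustration, and the bare-majority threshold is one common convention for “votes needed to win” (formulations vary slightly):

```python
def efficiency_gap(results):
    """Efficiency gap from (dem_votes, rep_votes) per district.
    All of the losing party's votes are wasted; the winner's votes
    beyond a bare majority are wasted. The gap is the difference in
    wasted votes divided by total votes cast. With this ordering,
    a positive value means Democrats wasted more votes, i.e. the
    map advantages Republicans."""
    wasted_dem = wasted_rep = total = 0
    for dem, rep in results:
        district_total = dem + rep
        total += district_total
        needed = district_total // 2 + 1  # bare majority
        if dem > rep:
            wasted_dem += dem - needed
            wasted_rep += rep
        else:
            wasted_rep += rep - needed
            wasted_dem += dem
    return (wasted_dem - wasted_rep) / total

# Toy map: Democrats packed into one district, cracked across three others.
toy = [(90, 10), (45, 55), (45, 55), (45, 55)]
print(round(efficiency_gap(toy), 3))  # -> 0.38
```

In the toy map the parties split the statewide vote almost evenly, yet Republicans take three of four seats, and the large gap flags exactly that packing-and-cracking pattern.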

Warshaw analyzed the average efficiency gap in states with more than six representatives between 1972 and 2016, and found that the vast majority have historically had an efficiency gap hovering around zero.

He also found, however, that since 2010, the last year before districts were redrawn, maps have become increasingly skewed toward Republicans, as the party dominates state legislatures and governorships across the country.

Even so, the slide toward Republican advantage has been far more drastic in Pennsylvania. In 2012, Republican candidates won only 49 percent of the congressional vote in Pennsylvania, but gained 72 percent of the seats.

Finally, Warshaw deployed a commonly used model called the DW-Nominate score to show how partisanship has changed in Congress over time. This score ranks members of Congress on a scale from -1, being the most liberal, to +1, being the most conservative. As the chart shows, both parties have been creeping toward their respective poles steadily over time.

Warshaw doesn’t try to prove that gerrymandering created that partisanship in Congress. His point is merely that in Pennsylvania, where more Democratic votes are wasted, it becomes almost impossible for Democrats to see issues they support turn into federal policy. This degrades trust in government and in elections.

“Representative democracy should be largely responsive to what voters want, and if it’s not, it calls into question democratic bona fides,” says Warshaw. In societies where elections shut one entire subset out of power, he says, “all kinds of bad things can happen.”

“Ultimately, people think, ‘Why are we even having elections?’” Warshaw says. “There’s nothing inevitable about democracy.”

The Evolution of Maps

Though by far the least technical expert in the case, John Kennedy was perhaps the most compelling. In preparation for his nerve-wracking two hours on the stand, Kennedy, an expert in Pennsylvania elections, dug through decades of old maps dating back to the 1960s to assess how the shape of districts and their partisan outcomes have evolved over time.

He methodically walked through how Pennsylvania’s first congressional district, comprising much of Philadelphia, has been packed with Democrats, while Democrats in Harrisburg have been cracked between the fourth and eleventh congressional districts, creating Republican majorities in both places.

But it was the seventh congressional district—and the single seafood restaurant holding it together like a piece of Scotch tape—that clinched it. He showed the court how the district had morphed from a squarish shape to today's sprawling, cartoonish scene. “How do you justify the seventh congressional district?” Kennedy says. “It’s absurd.”

Where Chen and Pegden laid out the mathematical proof of partisanship, and Warshaw demonstrated how that partisanship translates to policy, Kennedy showed in the starkest terms just how obviously gerrymandered these maps looked even to the untrained eye.

As gerrymandering cases proliferate across the country, there’s been some talk in research circles of the need for one true metric to measure it. Overturning Pennsylvania’s gerrymandered map, though, required detailed analysis from all angles. “Metrics are just evidence,” says Jacobson. “It’s always helpful to have more evidence, not less.”

In the Pennsylvania case, Judge P. Kevin Brobson of the Commonwealth Court agreed that Republicans had obviously and intentionally given themselves an advantage, but stopped short of saying they had violated the state’s constitution. In January, the Supreme Court disagreed, striking down the old Pennsylvania map.

In a matter of months, Pennsylvanians will head to the polls once more to elect 18 representatives to Congress, based on an entirely new electoral map that leans far less in one party’s favor. For Kennedy, an academic who spends most of his time studying history, it’s been a rare opportunity to make history, instead.

Ekaterina ‘Kate’ Lukasheva is an incredible origami artist and designer from Moscow, Russia. The artist has had a fascination with puzzles and construction sets since childhood and first discovered origami in her teens. With its intricate folds and geometric patterns, origami involves a lot of math, and Ekaterina would later graduate with honors from Moscow State Lomonosov University as a mathematician and programmer.

Origami has come to describe a broad field with a number of niche disciplines, and Lukasheva’s artwork focuses primarily on modular origami and kusudama. She has even authored a number of books of her own original designs for others to try.

Below you will find a collection of some of her incredible works, but you can find hundreds more at the links below.

The Navier-Stokes equations capture in a few succinct terms one of the most ubiquitous features of the physical world: the flow of fluids. The equations, which date to the 1820s, are today used to model everything from ocean currents to turbulence in the wake of an airplane to the flow of blood in the heart.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

While physicists consider the equations to be as reliable as a hammer, mathematicians eye them warily. To a mathematician, it means little that the equations appear to work. They want proof that the equations are unfailing: that no matter the fluid, and no matter how far into the future you forecast its flow, the mathematics of the equations will still hold. Such a guarantee has proved elusive. The first person (or team) to prove that the Navier-Stokes equations will always work—or to provide an example where they don’t—stands to win one of seven Millennium Prize Problems endowed by the Clay Mathematics Institute, along with the associated $1 million reward.

Mathematicians have developed many ways of trying to solve the problem. New work posted online in September raises serious questions about whether one of the main approaches pursued over the years will succeed. The paper, by Tristan Buckmaster and Vlad Vicol of Princeton University, is the first result to find that under certain assumptions, the Navier-Stokes equations provide inconsistent descriptions of the physical world.

“We’re figuring out some of the inherent issues with these equations and why it’s quite possible [that] people have to rethink them,” said Buckmaster.

Buckmaster and Vicol’s work shows that when you allow solutions to the Navier-Stokes equations to be very rough (like a sketch rather than a photograph), the equations start to output nonsense: They say that the same fluid, from the same starting conditions, could end up in two (or more) very different states. It could flow one way or a completely different way. If that were the case, then the equations don’t reliably reflect the physical world they were designed to describe.

Blowing Up the Equations

To see how the equations can break down, first imagine the flow of an ocean current. Within it there may be a multitude of crosscurrents, with some parts moving in one direction at one speed and other areas moving in other directions at other speeds. These crosscurrents interact with one another in a continually evolving interplay of friction and water pressure that determines how the fluid flows.

Mathematicians model that interplay using a map that tells you the direction and magnitude of the current at every position in the fluid. This map, which is called a vector field, is a snapshot of the internal dynamics of a fluid. The Navier-Stokes equations take that snapshot and play it forward, telling you exactly what the vector field will look like at every subsequent moment in time.
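To make the "snapshot played forward" idea concrete, here is a minimal, purely illustrative Python sketch: a velocity field stored as two arrays, advanced by one explicit Euler step of just the viscous-diffusion piece of the dynamics. This is not a Navier-Stokes solver (the pressure and advection terms are omitted); the grid size, time step, and viscosity value are arbitrary choices for the illustration.

```python
import numpy as np

# A toy "snapshot": a 2D velocity field sampled on a 32x32 grid.
# u[i, j] and v[i, j] are the x- and y-components of the current at
# grid point (i, j). Illustrative only -- just the viscous-diffusion
# piece of the dynamics, not the full Navier-Stokes equations.
n, dt, nu = 32, 0.01, 0.1           # grid size, time step, viscosity
rng = np.random.default_rng(0)
u = rng.standard_normal((n, n))
v = rng.standard_normal((n, n))
u0 = u.copy()                       # keep the initial snapshot

def laplacian(f):
    """Five-point finite-difference Laplacian with periodic edges."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

# "Play the snapshot forward": one Euler time step in which viscosity
# smooths out sharp differences between neighboring currents.
u = u + dt * nu * laplacian(u)
v = v + dt * nu * laplacian(v)
```

After the step, the field is a slightly smoothed version of the snapshot: diffusion evens out the crosscurrents while leaving the overall flow intact.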

The equations work. They describe fluid flows as reliably as Newton’s equations predict the future positions of the planets; physicists employ them all the time, and they’ve consistently matched experimental results. Mathematicians, however, want more than anecdotal confirmation—they want proof that the equations are inviolate, that no matter what vector field you start with, and no matter how far into the future you play it, the equations always give you a unique new vector field.

This is the subject of the Millennium Prize problem, which asks whether the Navier-Stokes equations have solutions (where solutions are in essence a vector field) for all starting points for all moments in time. These solutions have to provide the exact direction and magnitude of the current at every point in the fluid. Solutions that provide information at such infinitely fine resolution are called “smooth” solutions. With a smooth solution, every point in the field has an associated vector that allows you to travel “smoothly” over the field without ever getting stuck at a point that has no vector—a point from which you don’t know where to move next.

Smooth solutions are a complete representation of the physical world, but mathematically speaking, they may not always exist. Mathematicians who work on equations like Navier-Stokes worry about this kind of scenario: You’re running the Navier-Stokes equations and observing how a vector field changes. After some finite amount of time, the equations tell you a particle in the fluid is moving infinitely fast. That would be a problem. The equations involve measuring changes in properties like pressure, friction, and velocity in the fluid — in the jargon, they take “derivatives” of these quantities — but you can’t take the derivative of an infinite value any more than you can divide by zero. So if the equations produce an infinite value, you can say they’ve broken down, or “blown up.” They can no longer describe subsequent states of your fluid.
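Finite-time blowup is easiest to see in a much simpler equation than Navier-Stokes. The toy ODE dx/dt = x² (chosen here purely as an illustration, not drawn from the article) has the exact solution x(t) = x₀/(1 − x₀t), which becomes infinite at the finite time t = 1/x₀; past that moment the equation has nothing more to say.

```python
# Finite-time blowup in miniature: the toy equation dx/dt = x**2.
# Its exact solution x(t) = x0 / (1 - x0*t) grows without bound as t
# approaches the finite "blowup time" t = 1/x0.
def solution(x0, t):
    """Exact solution of dx/dt = x**2 with x(0) = x0 (valid for t < 1/x0)."""
    return x0 / (1 - x0 * t)

x0 = 2.0
blowup_time = 1 / x0              # = 0.5
print(solution(x0, 0.49))         # large, and growing fast as t -> 0.5
print(solution(x0, 0.499))        # ten times larger still
```

The same qualitative worry drives the Millennium Prize problem: does anything like this runaway ever happen inside the Navier-Stokes equations?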

Blowup is also a strong hint that your equations are missing something about the physical world they’re supposed to describe. “Maybe the equation is not capturing all the effects of the real fluid because in a real fluid we don’t expect” particles to ever start moving infinitely fast, said Buckmaster.

Solving the Millennium Prize problem involves either showing that blowup never happens for the Navier-Stokes equations or identifying the circumstances under which it does. One strategy mathematicians have pursued to do that is to first relax just how descriptive they require solutions to the equations to be.

From Weak to Smooth

When mathematicians study equations like Navier-Stokes, they sometimes start by broadening their definition of what counts as a solution. Smooth solutions require maximal information — in the case of Navier-Stokes, they require that you have a vector at every point in the vector field associated with the fluid. But what if you slackened your requirements and said that you only needed to be able to compute a vector for some points or only needed to be able to approximate vectors? These kinds of solutions are called “weak” solutions. They allow mathematicians to start feeling out the behavior of an equation without having to do all the work of finding smooth solutions (which may be impossible to do in practice).

“From a certain point of view, weak solutions are even easier to describe than actual solutions because you have to know much less,” said Camillo De Lellis, coauthor with László Székelyhidi of several important papers that laid the groundwork for Buckmaster and Vicol’s work.

Weak solutions come in gradations of weakness. If you think of a smooth solution as a mathematical image of a fluid down to infinitely fine resolution, weak solutions are like the 32-bit, or 16-bit, or 8-bit version of that picture (depending on how weak you allow them to be).

In 1934 the French mathematician Jean Leray defined an important class of weak solutions. Rather than working with exact vectors, “Leray solutions” take the average value of vectors in small neighborhoods of the vector field. Leray proved that it’s always possible to solve the Navier-Stokes equations when you allow your solutions to take this particular form. In other words, Leray solutions never blow up.
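The averaging idea can be sketched in a few lines of Python. This is only a cartoon of the intuition: Leray's actual construction uses mollifiers in a function-space setting, whereas the hypothetical `neighborhood_average` helper below just replaces each sample of a gridded field with the mean over a small periodic window.

```python
import numpy as np

# A cartoon of the averaging behind "Leray solutions": instead of the
# exact value at each point, keep only the mean of the values in a
# small neighborhood. (Leray's real construction is a function-space
# mollification; this windowed mean is just the intuition.)
def neighborhood_average(field, radius=1):
    """Replace each sample by the mean over a (2*radius+1)-wide
    periodic window in each direction."""
    out = np.zeros_like(field, dtype=float)
    count = 0
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            out += np.roll(np.roll(field, dx, 0), dy, 1)
            count += 1
    return out / count

spiky = np.zeros((8, 8))
spiky[4, 4] = 9.0                       # one sharp spike
smoothed = neighborhood_average(spiky)  # spike spread over a 3x3 patch
```

Note that the averaging tames the sharp spike (the maximum drops) without destroying the total quantity in the field, which is the spirit of why such averaged solutions are so much better behaved.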

Leray’s achievement established a new approach to the Navier-Stokes problem: Start with Leray solutions, which you know always exist, and see if you can convert them into smooth solutions, which you want to prove always exist. It’s a process akin to starting with a crude picture and seeing if you can gradually dial up the resolution to get a perfect image of something real.

“One possible strategy is to show these weak Leray solutions are smooth, and if you show they’re smooth, you’ve solved the original Millennium Prize problem,” said Buckmaster.

There’s one more catch. Solutions to the Navier-Stokes equations correspond to real physical events, and physical events happen in just one way. Given that, you’d like your equations to have a unique solution for any given starting condition. If the equations give you multiple possible solutions, they’ve failed.

Because of this, mathematicians will be able to use Leray solutions to solve the Millennium Prize problem only if Leray solutions are unique. Nonunique Leray solutions would mean that, according to the rules of Navier-Stokes, the exact same fluid from the exact same starting conditions could end up in two distinct physical states, which makes no physical sense and implies that the equations aren’t really describing what they’re supposed to describe.

Buckmaster and Vicol’s new result is the first to suggest that, for certain definitions of weak solutions, that might be the case.

Many Worlds

In their new paper, Buckmaster and Vicol consider solutions that are even weaker than Leray solutions—solutions that involve the same averaging principle as Leray solutions but also relax one additional requirement (known as the “energy inequality”). They use a method called “convex integration,” which has its origins in work in geometry by the mathematician John Nash and was imported more recently into the study of fluids by De Lellis and Székelyhidi.

Using this approach, Buckmaster and Vicol prove that these very weak solutions to the Navier-Stokes equations are nonunique. They demonstrate, for example, that if you start with a completely calm fluid, like a glass of water sitting still by your bedside, two scenarios are possible. The first scenario is the obvious one: The water starts still and remains still forever. The second is fantastical but mathematically permissible: The water starts still, erupts in the middle of the night, then returns to stillness.

“This proves nonuniqueness because from zero initial data you can construct at least two objects,” said Vicol.

Buckmaster and Vicol prove the existence of many nonunique weak solutions (not just the two described above) to the Navier-Stokes equations. The significance of this remains to be seen. At a certain point, weak solutions might become so weak that they stop really bearing on the smoother solutions they’re meant to imitate. If that’s the case, then Buckmaster and Vicol’s result might not lead far.

“Their result is certainly a warning, but you could argue it’s a warning for the weakest notion of weak solutions. There are many layers [of stronger solutions] on which you could still hope for much better behavior” in the Navier-Stokes equations, said De Lellis.

Buckmaster and Vicol are also thinking in terms of layers, and they have their sights set on Leray solutions—proving that those, too, allow for a multitrack physics in which the same fluid from the same position can take on more than one future form.

“Tristan and I think Leray solutions are not unique. We don’t have that yet, but our work is laying the foundation for how you’d attack the problem,” said Vicol.

Glitzy ceremony honours work including that on mapping post-big bang primordial light, cell biology, plant science and neurodegenerative diseases

The glitziest event on the scientific calendar took place on Sunday night when the Breakthrough Foundation gave away $22m (£16.3m) in prizes to dozens of physicists, biologists and mathematicians at a ceremony in Silicon Valley.

The winners this year include five researchers who won $3m (£2.2m) each for their work on cell biology, plant science and neurodegenerative diseases, two mathematicians, and a team of 27 physicists who mapped the primordial light that warmed the universe moments after the big bang 13.8 billion years ago.

Now in their sixth year, the Breakthrough prizes are backed by Yuri Milner, a Silicon Valley tech investor, Mark Zuckerberg of Facebook and his wife Priscilla Chan, Anne Wojcicki from the DNA testing company 23andMe, and Google’s Sergey Brin. Launched by Milner in 2012, the awards aim to make rock stars of scientists and raise their profile in the public consciousness.

The annual ceremony at Nasa’s Ames Research Center in California provides a rare opportunity for some of the world’s leading minds to rub shoulders with celebrities, who this year included Morgan Freeman as host, fellow actors Kerry Washington and Mila Kunis, and Miss USA 2017 Kára McCullough. When Joe Polchinski at the University of California in Santa Barbara shared the physics prize last year, he conceded his nieces and nephews would know more about the A-list attendees than he would.

Oxford University geneticist Kim Nasmyth won for his work on chromosomes but said he had not worked out what to do with the windfall. “It’s a wonderful bonus, but not something you expect,” he said. “It’s a huge amount of money, I haven’t had time to think it through.” On being recognised for what amounts to his life’s work, he added: “You have to do science because you want to know, not because you want to get recognition. If you do what it takes to please other people, you’ll lose your moral compass.” Nasmyth has won lucrative awards before and channelled some of his winnings into Gregor Mendel’s former monastery in Brno.

Another life sciences prizewinner, Joanne Chory at the Salk Institute in San Diego, was honoured for three decades of painstaking research into the genetic programs that flip into action when plants find themselves plunged into shade. Her work revealed that plants can sense when a nearby competitor is about to steal their light, sparking a growth spurt in response. The plants detect threatening neighbours by sensing a surge in the particular wavelengths of red light that are given off by vegetation.

Chory now has ambitious plans to breed plants that can suck vast quantities of carbon dioxide out of the atmosphere in a bid to combat climate change. She believes that crops could be selected to absorb 20 times more of the greenhouse gas than they do today, and convert it into suberin, a waxy material found in roots and bark that breaks down incredibly slowly in soil. “If we can do this on 5% of the landmass people are growing crops on, we can take out 50% of global human emissions,” she said.

Three other life sciences prizes went to Kazutoshi Mori at Kyoto University and Peter Walter for their work on quality control mechanisms that keep cells healthy, and to Don Cleveland at the University of California, San Diego, for his research on motor neurone disease.

The $3m Breakthrough prize in mathematics was shared by two British-born mathematicians, Christopher Hacon at the University of Utah and James McKernan at the University of California in San Diego. The pair made major contributions to a field of mathematics known as birational algebraic geometry, which sets the rules for projecting abstract objects with more than 1,000 dimensions onto lower-dimensional surfaces. “It gets very technical, very quickly,” said McKernan.

Speaking before the ceremony, Hacon was feeling a little unnerved. “It’s really not a mathematician kind of thing, but I’ll probably survive,” he said. “I’ve got a tux ready, but I’m not keen on wearing it.” Asked what he might do with his share of the winnings, Hacon was nothing if not realistic. “I’ll start by paying taxes,” he said. “And I have six kids, so the rest will evaporate.”

Chuck Bennett, an astrophysicist at Johns Hopkins University in Baltimore, led a Nasa mission known as the Wilkinson Microwave Anisotropy Probe (WMAP) to map the faint afterglow of the big bang’s radiation that now permeates the universe. The achievement, now more than a decade old, won the 27-strong science team the $3m Breakthrough prize in fundamental physics. “When we made our first maps of the sky, I thought, these are beautiful,” Bennett told the Guardian. “It is still absolutely amazing to me. We can look directly back in time.”

Bennett believes that the prizes may help raise the profile of science at a time when it is sorely needed. “The point is not to make rock stars of us, but of the science itself,” he said. “I don’t think people realise how big a role science plays in their lives. In everything you do, from the moment you wake up to the moment you go to sleep, there’s something about what you’re doing that involves scientific advances. I don’t think people think about that at all.”

Using a single thread roughly 1–2 km (0.6–1.2 mi) long, Petros Vrellis wraps straight chords of thread from one anchor peg across to another on a circular, 28″ loom with 200 evenly spaced pegs around its circumference. Each artwork is thus made from 3,000–4,000 intersecting straight lines of that one continuous thread.

Interestingly, the knitting is done by hand, following step-by-step instructions dictated by a computer algorithm designed by the new media artist. Vrellis explains:

“The pattern is generated from a specially designed algorithm, coded in openframeworks. The algorithm takes as input a digital photograph and outputs the knitting pattern. Over 2 billion calculations are needed to produce each pattern.”

For ‘inputs’, Vrellis used portraits by the famous Spanish Renaissance artist El Greco. Below you can see a timelapse video along with close-ups of Petros’ experimental knitting project. For more information check out his official website. If you’re interested in purchasing any of the original artworks you can see what’s currently available on Saatchi Art.
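The greedy idea behind thread-art algorithms of this kind can be sketched as follows. To be clear, this is a generic reconstruction of the standard approach, not Vrellis's actual openframeworks code: from the current peg, repeatedly choose the next peg whose straight chord covers the most remaining darkness in the target image, then mark that darkness as "consumed" by the laid thread. The peg count, image size, and toy target below are all invented for the illustration.

```python
import math

# Generic greedy string-art sketch (NOT Vrellis's algorithm): pick the
# next peg whose chord collects the most remaining image darkness.
N_PEGS, SIZE = 40, 64

def peg_xy(i):
    """Position of peg i on a circle just inside a SIZE x SIZE image."""
    a = 2 * math.pi * i / N_PEGS
    r = SIZE / 2 * 0.98
    return (SIZE / 2 + r * math.cos(a), SIZE / 2 + r * math.sin(a))

def chord_pixels(i, j, steps=100):
    """Pixels along the straight line between pegs i and j."""
    (x0, y0), (x1, y1) = peg_xy(i), peg_xy(j)
    pts = set()
    for s in range(steps + 1):
        t = s / steps
        pts.add((int(x0 + t * (x1 - x0)), int(y0 + t * (y1 - y0))))
    return pts

def next_peg(current, darkness):
    """Greedy step: the peg whose chord collects the most darkness."""
    best, best_score = None, -1.0
    for j in range(N_PEGS):
        if j == current:
            continue
        score = sum(darkness.get(p, 0.0) for p in chord_pixels(current, j))
        if score > best_score:
            best, best_score = j, score
    return best

# Toy target: a dark blob left of center.
darkness = {(x, y): 1.0 for x in range(10, 25) for y in range(25, 40)}
path = [0]
for _ in range(5):                 # a real piece uses thousands of chords
    nxt = next_peg(path[-1], darkness)
    for p in chord_pixels(path[-1], nxt):
        darkness[p] = 0.0          # thread laid: darkness consumed
    path.append(nxt)
```

Scoring every candidate chord at every step is what makes such algorithms expensive, which is consistent with the billions of calculations Vrellis mentions for a full 200-peg, several-thousand-chord piece.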