### kottke.org posts about mathematics

Web performance and security company Cloudflare uses a wall of lava lamps to generate random numbers to help keep the internet secure. Random numbers generated by computers are often not exactly random, so Cloudflare takes photos of the lamps’ activity and uses the unpredictability of the lava blooping up and down to generate truly random numbers. Here’s a look at how the process works:

At Cloudflare, we have thousands of computers in data centers all around the world, and each one of these computers needs cryptographic randomness. Historically, they got that randomness using the default mechanism made available by the operating system that we run on them, Linux.

But being good cryptographers, we’re always trying to hedge our bets. We wanted a system to ensure that even if the default mechanism for acquiring randomness was flawed, we’d still be secure. That’s how we came up with LavaRand.

LavaRand is a system that uses lava lamps as a secondary source of randomness for our production servers. A wall of lava lamps in the lobby of our San Francisco office provides an unpredictable input to a camera aimed at the wall. A video feed from the camera is fed into a CSPRNG [cryptographically-secure pseudorandom number generator], and that CSPRNG provides a stream of random values that can be used as an extra source of randomness by our production servers. Since the flow of the “lava” in a lava lamp is very unpredictable, “measuring” the lamps by taking footage of them is a good way to obtain unpredictable randomness. Computers store images as very large numbers, so we can use them as the input to a CSPRNG just like any other number.

(via open culture)
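The basic pipeline is easy to sketch. This isn't Cloudflare's actual code (their production setup involves a lot more machinery and key management); it's a minimal illustration of the "photo becomes a big number becomes CSPRNG seed material" idea, with the image filename made up:

```python
# Minimal sketch of the LavaRand idea (not Cloudflare's implementation):
# hash a noisy camera frame and mix it into a CSPRNG as extra seed material.
import hashlib
import secrets

def entropy_from_frame(frame_bytes: bytes) -> bytes:
    """Condense an unpredictable camera frame into 32 bytes of seed material."""
    return hashlib.sha256(frame_bytes).digest()

def random_bytes(frame_bytes: bytes, n: int = 16) -> bytes:
    """Mix the frame-derived entropy with the OS CSPRNG and stretch it to n bytes."""
    seed = entropy_from_frame(frame_bytes) + secrets.token_bytes(32)
    return hashlib.shake_256(seed).digest(n)

# Example: stand in for a webcam frame with bytes read from a (hypothetical) image file.
with open("lava_frame.jpg", "rb") as f:
    print(random_bytes(f.read()).hex())
```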


DJ Earworm has made a chronological mix of songs, one from each year from 1970 to 2020.^{1} The Jackson 5 flows into Rod Stewart, Def Leppard into Milli Vanilli, Eric Clapton into Chumbawamba into The Verve, Shakira into Rihanna, and Ed Sheeran into Justin Bieber. Go on then, take a ride.


How many holes does a donut have? That’s pretty easy: one. What about a straw? Two (one at each end) or just one? (Isn’t a straw just an elongated donut?) Does a coffee mug have one hole or two? Does a bowl have a hole? If no, then what about a hole in the ground or a hole in a wall that doesn’t pass all the way through? Does a basketball have a hole? A Reddit user asked 1600 people how many holes were in various objects and the results are fantastically all over the place.

This is a trivial question, but it reveals something interesting about people’s perceptions. The dictionary definition of “hole” includes two main meanings for the purposes of this question: “an opening through something” and “a hollowed-out place”. Mathematics offers another possible meaning:

A hole in a mathematical object is a topological structure which prevents the object from being continuously shrunk to a point. When dealing with topological spaces, a disconnectivity is interpreted as a hole in the space. Examples of holes are things like the “donut hole” in the center of the torus, a domain removed from a plane, and the portion missing from Euclidean space after cutting a knot out from it.

But a hole isn’t clearly defined in math or topology. From What We Talk about When We Talk about Holes in Scientific American:

Here’s my short answer that is also the reason I’m not an algebraic topologist. If you can put it on a necklace, it has a one-dimensional hole. If you can fill it with toothpaste, it has a two-dimensional hole. For holes of higher dimensions, you’re on your own.

That answer isn’t very satisfying. Is there a better way to describe holes? I talked with some of my topologist friends and discovered two things: topologists don’t all agree on what a hole is, and it’s fun and interesting to think about different interpretations of a word whose mathematical definition isn’t completely settled. I think my larger conclusion, in the spirit of the season, is that holes are like Santa Claus: the true meaning is in your heart.

No wonder those poll results are all over the place. But at the same time, it’s interesting that many more people say that donuts have a hole than washers or rubber bands. I guess donut holes have better marketing? As for straws — reason tells me they only have one hole but I know in my heart they have two. (via the whippet)


The director of the National Institute of Allergy and Infectious Diseases, Anthony Fauci, told a Senate committee today that the US could be heading towards *100,000* new reported cases of Covid-19 *per day*. 100,000 cases per day. Yesterday the US recorded about 40,000 new cases.

“It is going to be very disturbing, I will guarantee you that,” he said.

“What was thought to be unimaginable turns out to be the reality we’re facing right now,” Fauci said, adding that “outbreaks happen, and you have to deal with them in a very aggressive, proactive way.”

Fewer than 20 countries have recorded more than 100,000 cases in total. Canada, for instance, has confirmed about 106,000 Covid-19 cases since the outbreak began.

Public health and infectious diseases experts, who have been gravely concerned about the way the U.S. response has unfolded, concurred with Fauci’s assessment.

Bars and restaurants are reopening around the country without any serious effort to test/trace/isolate/support. In the absence of strident guidance from the federal government, people are worrying less about social distancing and wearing masks to protect others. As this guy says, it’s just a matter of math:

“It’s unfortunately just a simple consequence of math plus a lack of action,” said Marm Kilpatrick, an infectious diseases dynamics researcher at the University of California, Santa Cruz. “On the one hand it comes across as ‘Oh my God, 100,000 cases per day!’ But then if you actually look at the current case counts and trends, how would you not get that?”

Absolutely nothing has changed about the virus, so its spread is determined by pretty simple exponential growth.

Limiting person-to-person exposure and decreasing the probability of exposures becoming infections can have a huge effect on the total number of people infected because the growth is exponential. If large numbers of people start doing things like limiting travel, cancelling large gatherings, social distancing, and washing their hands frequently, the total number of infections could fall by several orders of magnitude, making the exponential work for us, not against us. Small efforts have huge results.

We’ve known for months (and epidemiologists and infectious disease experts have known for their entire careers) what works and yet the federal government and many state governments have not listened and, in some cases, have actively suppressed use of such measures. So the pandemic will continue to escalate in the United States until proper measures are put in place by governments and people follow them. The virus will not change, the mathematics will not change, so we must.

Graph at the top of the post via Rishi Desai.


This lovely short film by Cristóbal Vila shows how the simple Fibonacci sequence manifests itself in natural forms like sunflowers, nautilus shells, and dragonfly wings.

See also Arthur Benjamin’s TED Talk on the Fibonacci numbers and the golden ratio and the Fibonacci Shelf. (via @stevenstrogatz)


As someone who suspects I may have had a mild case of Covid-19 a couple of months ago, I’ve been thinking about getting tested for antibodies. But as this video from ProPublica shows, even really accurate tests may not actually tell you all that much.

And the thing is, the “do I have Covid-19 right now” tests are plagued by the same issue.

For patients getting tested, the main concern is how to interpret the outcome: If I test negative with an RT-PCR genetic test, what are the chances I actually have the virus? Or if I test positive with an antibody test, does it actually mean I have the antibodies?

It turns out that the answers to these questions don’t just hinge on the accuracy of the test. “Mathematically, the way that works out, that actually depends on how many people in your area have Covid,” Eleanor Murray, an assistant professor of epidemiology at the Boston University School of Public Health, said.

The rarer the disease in the population, the less you’ll learn by testing.

Let’s say we have a hypothetical Covid-19 test for antibodies that is both 99 percent sensitive — meaning almost all people with antibodies will test positive — and 99 percent specific, meaning almost all people who were never infected will yield a negative result.

If you test a group of 100 uninfected people, odds are one of them will still test positive even though they don’t have the virus. Conversely, if you test 100 people who were infected, it’s likely one of them will still test negative.

Now let’s presume the virus has a prevalence rate of 1 percent, so one person in 100 carries antibodies to it. If you test 100 random people and get a positive result, what is the chance that this person was truly infected?

Deborah Birx, the White House Covid-19 response coordinator, explained the answer at a press conference on April 20: “So if you have 1 percent of your population infected and you have a test that’s only 99 percent specific, that means that when you find a positive, 50 percent of the time will be a real positive and 50 percent of the time it won’t be.”
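Birx's 50 percent figure falls straight out of Bayes' theorem. Here's a quick sanity-check calculation for the hypothetical 99%-sensitive, 99%-specific test (just the arithmetic, not a clinical tool):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(truly infected | positive test), straight from Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The article's hypothetical test: 99% sensitive, 99% specific, 1% prevalence.
print(positive_predictive_value(0.01, 0.99, 0.99))   # 0.5 -> a positive is a coin flip
# The same test in a population where the disease is common tells you much more:
print(positive_predictive_value(0.20, 0.99, 0.99))   # ~0.96
```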

So even if I test positive for antibodies and I assume that confers immunity, given that the number of confirmed infections in Vermont is so low (~900 statewide), it doesn’t seem like I would be justified in changing my behavior at all. I would still have to act as though I’ve never had the virus, both for my own health and the health of those around me. Maybe with two or three corroborating tests I could be more certain…


Back when the COVID-19 pandemic was beginning to be taken seriously by the American public, 3blue1brown’s Grant Sanderson released a video about epidemics and exponential growth. (It’s excellent — I recommend watching it if you’re still a little unclear on how things got so out of hand so quickly in Italy and, very soon, in NYC.) In his latest video, Sanderson digs a bit deeper into simulating epidemics using a variety of scenarios.

Like, if people stay away from each other I get how that will slow the spread, but what if despite mostly staying away from each other people still occasionally go to a central location like a grocery store or a school?

Also, what if you are able to identify and isolate the cases? And if you can, what if a few slip through, say because they show no symptoms and aren’t tested?

How does travel between separate communities affect things? And what if people avoid contact with others for a while, but then they kind of get tired of it and stop?

These simulations are fascinating to watch. Many of the takeaways boil down to: early & aggressive actions have a huge effect on the number of people infected, how long an epidemic lasts, and (in the case of a disease like COVID-19 that causes fatalities) the number of deaths. This is what all the epidemiologists have been telling us — because the math, while complex when you’re dealing with many factors (as in a real-world scenario), is actually pretty straightforward and unambiguous.

The biggest takeaway? That the effective identification and isolation of cases has the largest effect on cutting down the infection rate. Testing and isolation, done as quickly and efficiently as possible.
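If you want to fiddle with this yourself, the core of these models is tiny. Here's a bare-bones SIR model, a deterministic cousin of Sanderson's agent-based animations, with made-up parameters, just to show how cutting contact rates changes both the peak and the total:

```python
def sir(beta, gamma=0.1, days=300, n=10_000, i0=10):
    """Classic SIR model: beta = infections per contact-day, gamma = recovery rate."""
    s, i, r = n - i0, i0, 0
    peak = i0
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return round(peak), round(n - s)   # (peak simultaneous infections, total ever infected)

# Made-up numbers: "business as usual" vs. contacts roughly cut in half.
print(sir(beta=0.30))   # big early peak; most of the population eventually infected
print(sir(beta=0.15))   # later, much lower peak; fewer infected overall
```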

See also these other epidemic simulations: Washington Post and Kevin Simler.

*Note: Please keep in mind that these are simulations to help us better understand how epidemics work in general — it’s not about how the COVID-19 pandemic is proceeding or will proceed in the future.*


By manipulating values like R0, incubation time, and hospitalization rate with this epidemic graphing calculator, you really get a sense of how effective early intervention and aggressive measures can be in curbing infection & saving lives in an exponential crisis like the COVID-19 pandemic.


Over the past week or so, echoing public health officials & epidemiologists, I’ve been trying to illustrate the often counterintuitive concept of exponential growth that you see in an epidemic and how flattening the curve can help keep people healthy and alive. But I think people have a hard time grasping what that means, personally, to them. Like, what’s one person in the face of a pandemic?

Well, epidemiologist Britta Jewell had a similar thought and came up with this brilliantly simple graph, one of the best I’ve seen in illustrating the power of exponential growth and how we as individuals can effect change:

Jewell explains a bit more about what we’re looking at:

The graph illustrates the results of a thought experiment. It assumes constant 30 percent growth throughout the next month in an epidemic like the one in the U.S. right now, and compares the results of stopping one infection today — by actions such as shifting to online classes, canceling of large events and imposing travel restrictions — versus taking the same action one week from today.

The difference is stark. If you act today, you will have averted four times as many infections in the next month: roughly 2,400 averted infections, versus just 600 if you wait one week. That’s the power of averting just one infection, and obviously we would like to avert more than one.

So that’s 1800 infections averted from the actions of *just one person*. Assuming a somewhat conservative death rate of 1% for COVID-19, that’s 18 deaths averted. Think about that before you head out to the bar tonight or convene your book group as usual. Your actions have a lot of power in this moment; take care in how you wield it.
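You can reproduce the flavor of Jewell's thought experiment in a few lines. Her exact assumptions differ a bit from this naive version, so treat the outputs as ballpark figures rather than her 2,400 and 600:

```python
growth = 1.30   # assumed 30% daily growth, per the thought experiment
days = 30

# Infections ultimately traceable back to one case, under unchecked exponential growth.
averted_if_stopped_today = growth ** days             # ~2,620
averted_if_stopped_in_a_week = growth ** (days - 7)   # ~420

print(round(averted_if_stopped_today), round(averted_if_stopped_in_a_week))
```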


From 3blue1brown’s Grant Sanderson, this is an excellent quick explanation of exponential growth and how we should think about it in relation to epidemics like COVID-19. Depending on how rusty your high school math is, you might need to rewind a couple of times to fully grasp the explanation, but you should persevere and watch the whole thing.

The most important bit is at the end, right around the 7:45 mark, when he talks about how limiting person-to-person exposure and decreasing the probability of exposures becoming infections can have a huge effect on the total number of people infected because the growth is exponential. If large numbers of people start doing things like limiting travel, cancelling large gatherings, social distancing, and washing their hands frequently, the total number of infections could fall by several orders of magnitude, making the exponential work for us, not against us. Small efforts have huge results. If, as in the video, you’re talking about 100 million infected in two months (at the current transmission rate) vs. 400,000 (at the lowered rate) and if the death rate of COVID-19 is between 1-3%, you’re looking at *1-3 million dead vs. 4-12,000 dead*.
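Those two endpoints are just exponential curves with different daily growth rates. A back-of-the-envelope version, assuming roughly 21,000 current cases as the starting point (my assumption; the 15% and 5% daily rates are from the video):

```python
cases_now = 21_000        # assumed rough case count when the video was made
days = 61                 # about two months

high = cases_now * 1.15 ** days   # ~15% daily growth: about 100 million
low = cases_now * 1.05 ** days    # ~5% daily growth: about 400 thousand

print(f"{high:,.0f} vs {low:,.0f}")
```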

So let’s start flattening that exponential curve. South Korea and China both seem to have done it, so there’s no reason the rest of the world can’t through aggressive action. (thx, david)

**Update:** Vox has a nice explainer on what epidemiologists refer to as “flattening the curve”.

Yet the speed at which the outbreak plays out matters hugely for its consequences. What epidemiologists fear most is the health care system becoming overwhelmed by a sudden explosion of illness that requires more people to be hospitalized than it can handle. In that scenario, more people will die because there won’t be enough hospital beds or ventilators to keep them alive.

A disastrous inundation of hospitals can likely be averted with protective measures we’re now seeing more of — closing schools, canceling mass gatherings, working from home, self-quarantine, avoiding crowds - to keep the virus from spreading fast.

Epidemiologists call this strategy of preventing a huge spike in cases “flattening the curve”.

Here’s the relevant graphic explanation from Our World in Data’s COVID-19 package:


If you’re a computer, it turns out that the fastest way to multiply two numbers, especially two very large numbers, is not by the grade school method of stacking the two numbers and then multiplying each digit in the top number by each digit in the bottom number and adding the results. Since 1960, mathematicians have been discovering ever faster methods to multiply and recently, a pair of mathematicians discovered a method that is perhaps the fastest way possible.

Their method is a refinement of the major work that came before them. It splits up digits, uses an improved version of the fast Fourier transform, and takes advantage of other advances made over the past forty years. “We use [the fast Fourier transform] in a much more violent way, use it several times instead of a single time, and replace even more multiplications with additions and subtractions,” van der Hoeven said.
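The very first of those post-1960 speedups, Karatsuba's algorithm, already shows the split-the-digits idea in miniature: it trades one of the four sub-multiplications for a handful of additions and subtractions. (The method described above is far more sophisticated; this is just the entry-level version of the trick.)

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two integers using three recursive multiplications instead of four."""
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2    # split point, in bits
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                           # high parts
    b = karatsuba(xl, yl)                           # low parts
    c = karatsuba(xh + xl, yh + yl) - a - b         # cross terms, via one multiply
    return (a << (2 * m)) + (c << m) + b

print(karatsuba(123456789, 987654321) == 123456789 * 987654321)   # True
```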

What’s interesting is that independently of these discoveries, computers have become a lot better at multiplication:

In addition, the design of computer hardware has changed. Two decades ago, computers performed addition much faster than multiplication. The speed gap between multiplication and addition has narrowed considerably over the past 20 years to the point where multiplication can be even faster than addition in some chip architectures.

(via @macgbrown, who passed this along after I posted this video on Russian multiplication)


I’ve loved math since I was a kid. One of the big reasons for this is that there’s always more than one way to solve a particular problem and in discovering those solutions, you learn something about mathematics and the nature of numbers.^{1}

In this video, math fan Johnny Ball shows us a different method of multiplication. In Russian multiplication (also called Ethiopian multiplication and related to ancient Egyptian multiplication), you can multiply any two numbers together using nothing but addition and the doubling & halving of numbers. The technique works because repeatedly halving one of the numbers effectively writes it out in binary, turning the multiplication into an addition problem.
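If the video isn't handy, here's the whole trick in code: halve one number (dropping remainders), double the other, and add up the doubled values on the rows where the halved number is odd. Those odd rows are exactly the 1-bits of its binary expansion.

```python
def russian_multiply(a: int, b: int) -> int:
    """Multiply by repeated halving and doubling (a.k.a. Ethiopian multiplication)."""
    total = 0
    while a > 0:
        if a % 2 == 1:        # odd row: this power of two is "on" in a's binary form
            total += b
        a //= 2               # halve, discarding the remainder
        b *= 2                # double
    return total

print(russian_multiply(18, 23))   # 414, same as 18 * 23
```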

I loved learning about this so much that I scribbled an explanation out on a napkin at brunch yesterday to show a friend how it worked. We’re friends because she was just as excited as I was about it. (via the kid should see this)


All art is bounded by one constraint or another. Mathematician Robert Bosch makes what he calls “optimization art”, which is best embodied by these images produced as solutions to the travelling salesman problem. Each image is made up of a single continuous line tracing a route, as short as the solver can make it, through a series of points without revisiting any of them, much like the optimal route of a travelling salesperson visiting cities. The rendition of a van Gogh self-portrait uses a solution for 120,000 “cities” while the single line forming Girl with a Pearl Earring visits 200,000 cities.
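Bosch uses serious TSP solvers (plus a stippling step that places the "cities" in the dark regions of the source image), but the skeleton of the idea is small. Here's a toy version, a greedy nearest-neighbour tour over random points, nowhere near optimal, just enough to show the single-continuous-line construction:

```python
import random

def nearest_neighbour_tour(points):
    """Greedy travelling-salesman approximation: always visit the closest unvisited point."""
    tour = [points[0]]
    unvisited = set(points[1:])
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

# In the real pieces, the points are stippled from a photo's dark regions;
# here they're just random, and the tour would be drawn as one continuous polyline.
pts = [(random.random(), random.random()) for _ in range(500)]
print(len(nearest_neighbour_tour(pts)))   # 500 points, each visited exactly once
```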

I would love to see an Observable notebook where you could upload any photo to make images like these. (via @Ianmurren)


Jessica Wynne has been taking photos of mathematicians’ blackboards for the past year or so, some of which were featured recently in the NY Times. I love the variety in density, style, color, and tidiness.

“I am also fascinated by the process of working on the chalkboard. Despite technological advances, and the creation of computers, this is how the masters choose to work.”

In their love of blackboards and chalk, mathematicians are among the last holdouts. In many fields of science and investigation, blackboards have been replaced with whiteboards or slide show presentations. But chalk is cheaper and biodegradable. It smells better than whiteboard markers and is easier to clean up, mathematicians say. It is also more fun to write with.

A book of Wynne’s chalkboard photos called Do Not Erase will be released next year.


The folks behind the Nevertheless podcast commissioned a set of seven posters of STEM role models, people who have made significant contributions to science, technology, engineering, and mathematics. The posters are free to download and print out in eight different languages (including English, Spanish, and Simplified Chinese).


The story goes that modern chaos theory was birthed by Edward Lorenz’s paper about his experiments with weather simulation on a computer. The computing power helped Lorenz nail down hidden patterns that had been hinted at by computer-less researchers for decades. But the early tenets of chaos theory were not the only things that were hidden. The women who wrote the programs that enabled Lorenz’s breakthroughs haven’t received their proper due.

But in fact, Lorenz was not the one running the machine. There’s another story, one that has gone untold for half a century. A year and a half ago, an MIT scientist happened across a name he had never heard before and started to investigate. The trail he ended up following took him into the MIT archives, through the stacks of the Library of Congress, and across three states and five decades to find information about the women who, today, would have been listed as co-authors on that seminal paper. And that material, shared with Quanta, provides a fuller, fairer account of the birth of chaos.

The two women who programmed the computer for Lorenz were Ellen Gille (née Fetter) and Margaret Hamilton. Yes, that Margaret Hamilton, whose already impressive career starts to look downright bonkers when you add in her contributions to chaos theory.


If you could somehow fold a piece of paper in half 103 times, the paper would be as thick as the observable universe.

Such is the power (*cough*) of exponential growth, but of course you’d never get anywhere close to that many folds. The theoretical limit for folding paper was long thought to be seven or eight folds. You can see why watching this hydraulic press attempt the 7th fold…the paper basically turns to dust.
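The 103-folds claim is just repeated doubling. Assuming an ordinary sheet of paper about 0.1 mm thick (my assumption; the post doesn't specify), the arithmetic works out like this:

```python
thickness_m = 0.0001             # assume a ~0.1 mm sheet of paper
observable_universe_m = 8.8e26   # diameter of the observable universe, ~93 billion light-years

folded = thickness_m * 2 ** 103
print(f"{folded:.2e} m")               # ~1.0e+27 m
print(folded > observable_universe_m)  # True: thicker than the observable universe is wide
```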

But in 2002, high school student Britney Gallivan proved that you could fold a piece of paper 12 times. Here’s Gallivan explaining the math involved and where the limits come in when folding:

(thx, porter)


Nidhal Selmi combined the fractal Sierpinski triangle with the impossible Penrose triangle to create the M.C. Escher-like Selmi triangle.


The ultimate form of argument, and for some, the most absolute form of truth, is mathematical proof. But short of a conclusive proof of a theorem, mathematicians also consider evidence that might 1) disprove a thesis or 2) suggest its possible truth or even avenues for proving that it’s true. But in a not-quite-empirical field, what the heck counts as evidence?

The twin primes conjecture is one example where evidence, as much as proof, guides our mathematical thinking. Twin primes are pairs of prime numbers that differ by 2 — for example, 3 and 5, 11 and 13, and 101 and 103 are all twin prime pairs. The twin primes conjecture hypothesizes that there is no largest pair of twin primes, that the pairs keep appearing as we make our way toward infinity on the number line.

The twin primes conjecture is not the Twin Primes Theorem, because, despite being one of the most famous problems in number theory, no one has been able to prove it. Yet almost everyone believes it is true, because there is lots of evidence that supports it.

For example, as we search for large primes, we continue to find extremely large twin prime pairs. The largest currently known pair of twin primes have nearly 400,000 digits each. And results similar to the twin primes conjecture have been proved. In 2013, Yitang Zhang shocked the mathematical world by proving that there are infinitely many prime number pairs that differ by 70 million or less. Thanks to a subsequent public “Polymath” project, we now know that there are infinitely many pairs of primes that differ by no more than 246. We still haven’t proved that there are infinitely many pairs of primes that differ by 2 — the twin primes conjecture — but 2 is a lot closer to 246 than it is to infinity.
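The numerical evidence is easy to generate at small scales. A quick sieve shows the pairs keep turning up as you raise the bound, which proves nothing, of course, but it's the flavor of evidence the article is describing:

```python
def twin_primes_below(n):
    """Count twin prime pairs (p, p+2) with p+2 < n, via a simple sieve."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(1 for p in range(2, n - 2) if sieve[p] and sieve[p + 2])

for bound in (10**3, 10**4, 10**5, 10**6):
    print(bound, twin_primes_below(bound))
# 1000 -> 35 pairs, 10000 -> 205, 100000 -> 1224, 1000000 -> 8169
```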

This starts to get really complicated once you leave the relatively straightforward arithmetical world of prime numbers behind, with its clearly empirical pairs and approximating conjectures, and start working with computer models that generate arbitrarily large numbers of mathematical statements, all of which can be counted as evidence.

Patrick Hanner, the author of this article, gives what seems like a simple example: are all lines parallel or intersecting? Then he shows how the models one can use to answer this question vary wildly based on their initial assumptions, in this case, whether one is considering lines in a single geometric plane or lines in an n-dimensional geometric space. As always in mathematics, it comes back to one’s initial set of assumptions; you can “prove” (i.e., provide large quantities of evidence for) a statement with one set of rules, but that set of rules is not the universe.


Happy Pi Day! In celebration of this gloriously nerdy event, mathematician Steven Strogatz wrote about how pi was humanity’s first glimpse of the power of calculus and an early effort to come to grips with the idea of infinity.

As a ratio, pi has been around since Babylonian times, but it was the Greek geometer Archimedes, some 2,300 years ago, who first showed how to rigorously estimate the value of pi. Among mathematicians of his time, the concept of infinity was taboo; Aristotle had tried to banish it for being too paradoxical and logically treacherous. In Archimedes’s hands, however, infinity became a mathematical workhorse.

He used it to discover the area of a circle, the volume of a sphere and many other properties of curved shapes that had stumped the finest mathematicians before him. In each case, he approximated a curved shape by using a large number of tiny straight lines or flat polygons. The resulting approximations were gemlike, faceted objects that yielded fantastic insights into the original shapes, especially when he imagined using infinitely many, infinitesimally small facets in the process.
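Archimedes' scheme translates into a short recurrence: track the perimeters of a circumscribed polygon (an overestimate of the circumference) and an inscribed one (an underestimate), and repeatedly double the number of sides. Starting from hexagons, four doublings reach his famous 96-gon bounds. A sketch:

```python
import math

# Bounds on pi from regular polygons outside/inside a circle, starting with hexagons.
upper = 2 * math.sqrt(3)   # circumscribed hexagon: perimeter / diameter
lower = 3.0                # inscribed hexagon

sides = 6
for _ in range(4):                                    # 6 -> 12 -> 24 -> 48 -> 96 sides
    upper = 2 * upper * lower / (upper + lower)       # harmonic mean
    lower = math.sqrt(upper * lower)                  # geometric mean
    sides *= 2
    print(f"{sides}-gon: {lower:.6f} < pi < {upper:.6f}")
# 96-gon: 3.141032 < pi < 3.142715, the basis of Archimedes' 3 10/71 < pi < 3 1/7
```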

Here’s a video that runs through Archimedes’ method for calculating pi:

Strogatz’s piece is an excerpt from his forthcoming book, Infinite Powers: How Calculus Reveals the Secrets of the Universe.


Queueing theory is the scientific study of waiting in line. It can apply to familiar lines like those at the grocery store or bank but also to things like web servers, highway traffic, and telecommunications…basically any situation where you have things entering a system, being processed by a system for a certain period of time, and leaving the system.

The study of queueing is necessary because the effects of waiting in line often run counter to our intuition (which causes people to get cranky about it). Take this example from John Cook of tellers serving customers at a bank:

Suppose a small bank has only one teller. Customers take an average of 10 minutes to serve and they arrive at the rate of 5.8 per hour. What will the expected waiting time be? What happens if you add another teller?

We assume customer arrivals and customer service times are random (details later). With only one teller, customers will have to wait nearly five hours on average before they are served.

Five hours?! I would not have guessed anywhere close to that, would you? Now, add a second teller into the mix. How long is the average wait now? 2.5 hours? 1 hour? According to Cook, much lower than that:

But if you add a second teller, the average waiting time is not just cut in half; it goes down to about 3 minutes. The waiting time is reduced by a factor of 93x.

Our lack of intuition about queues has to do with how much the word “average” is hiding…the true story is much more complex.
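Cook's numbers come from the standard M/M/c queueing model (random Poisson arrivals, exponentially distributed service times), which is the kind of setup he describes. Here's a small sketch that reproduces them, just to show where "nearly five hours" and "about 3 minutes" come from:

```python
from math import factorial

def mmc_wait_hours(arrival_rate, service_rate, servers):
    """Average time spent waiting in queue for an M/M/c system (Erlang C formula)."""
    a = arrival_rate / service_rate        # offered load in Erlangs
    rho = a / servers                      # utilization per server
    p0 = 1 / (sum(a**k / factorial(k) for k in range(servers))
              + a**servers / (factorial(servers) * (1 - rho)))
    erlang_c = (a**servers / (factorial(servers) * (1 - rho))) * p0   # P(you have to wait)
    return erlang_c / (servers * service_rate - arrival_rate)

lam, mu = 5.8, 6.0   # 5.8 customers/hour, 10-minute (1/6-hour) services
print(f"1 teller:  {mmc_wait_hours(lam, mu, 1):.2f} hours")         # ~4.8 hours
print(f"2 tellers: {mmc_wait_hours(lam, mu, 2) * 60:.1f} minutes")  # ~3 minutes
```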

Aside from the math, designers of queueing systems also have to take human psychology into account.

There are three givens of human nature that queuing psychologists must address: 1) We get bored when we wait in line. 2) We really hate it when we expect a short wait and then get a long one. 3) We really, really hate it when someone shows up after us but gets served before us.

The boredom issue has been tackled in myriad ways — from the mirrors next to elevator banks to the TVs in dentist’s waiting rooms. Larson mentions a clever solution from the Manhattan Savings Bank, which once hired a concert pianist to play in its lobby as customers waited for tellers. “But Disney has been the absolute master of this aspect of queue psychology,” says Larson. “You might wait 45 minutes for an 8-minute ride at Disney World. But they’ll make you feel like the ride has started while you’re still on line. They build excitement and provide all kinds of diversions in the queue channel.” Video screens tease the thrills ahead, and a series of varied chambers that the queue moves through creates a sense of progress. Another solution: those buzzing pagers that restaurants in malls sometimes give you while you’re waiting for a table. Instead of focusing on the misery of the wait, you can go off and entertain yourself, secure in the knowledge that you’ll be alerted when it’s your turn.

Whole Foods had to work around our expectations when it switched to “serpentine” lines that seemed longer but actually served customers more quickly.

By 7 p.m. on a weeknight, the lines at each of the four Whole Foods stores in Manhattan can be 50 deep, but they zip along faster than most lines with 10 shoppers.

Because people stand in the same line, waiting for a register to become available, there are no “slow” lines, delayed by a coupon-counting customer or languid cashier. And since Whole Foods charges premium prices for its organic fare, it can afford to staff dozens of registers, making the line move even faster.

“No way,” is how Maggie Fitzgerald recalled her first reaction to the line at the Whole Foods in Columbus Circle. For weeks, Ms. Fitzgerald, 26, would not shop there alone, assigning a friend to fill a grocery cart while she stood in line.

When she discovered the wait was about 4 minutes, rather than 20, she began shopping by herself, and found it faster than her old supermarket.

See also How to Pick the Fastest Line at the Supermarket, Queue Theory and Design from 99% Invisible, and this paper from Bob Wescott, Seven Insights Into Queueing Theory. One of his insights:

It’s very hard to use the last 15% of anything. As the service center gets close to 100% utilization the response time will get so bad for the average transaction that nobody will be having any fun. The graph below is exactly the same situation as the previous graph except this graph is plotted to 99% utilization. At 85% utilization the response time is about 7x and it just gets worse from there.

For grocery stores or call centers, that means you’re going to have operators or cashiers sitting there “doing nothing” sometimes because if you don’t, you’re gonna be in trouble when a rush hits.

**Update:** John Frost shares an anecdote about how his grandfather’s team designed the queueing system for the Matterhorn Bobsleds at Disneyland:

Another fun family story is the invention of the Matterhorn’s first of its kind switchback queue. Vic Greene and his team of Imagineers developed a system that would have the entrance to the switchback part of the queue be lower than the exit. When you stood at the entrance, the exit would appear closer to you in an optical illusion. The idea was to make your wait seem less cumbersome by visually shortening the queue.


Mathematician Michael Atiyah claims that he’s solved the Riemann hypothesis, one of the great unsolved problems in math, and will deliver a talk about the proof on Monday.

In it, he pays tribute to the work of two great 20th century mathematicians, John von Neumann and Friedrich Hirzebruch, whose developments he claims laid the foundations for his own proposed proof. “It fell into my lap, I had to pick it up,” he says.

The Riemann hypothesis, which is one of the $1 million Millennium Prize problems, deals with prime numbers. Even though it was suggested back in 1859 and “has been checked for the first 10,000,000,000,000 solutions”, no one has yet come up with a proof.

Here’s an educated guess about a part of Atiyah’s proof.

**Update:** For the hard-core mathematicians in the audience, here is a video of Atiyah’s lecture and a paper containing what looks like a very high-level overview of his solution.

**Update:** Several people wrote in wanting me to highlight the skepticism that surrounds Atiyah’s Riemann claims.

A giant in his field, Atiyah has made major contributions to geometry, topology, and theoretical physics. He has received both of math’s top awards, the Fields Medal in 1966 and the Abel Prize in 2004. But despite a long and prolific career, the Riemann claim follows on the heels of more recent, failed proofs.

In 2017, Atiyah told The Times of London that he had converted the 255-page Feit-Thompson theorem, an abstract theory dealing with groups of numbers first proved in 1963, into a vastly simplified 12-page proof. He sent his proof to 15 experts in the field and was met with skepticism or silence, and the proof was never printed in a journal. A year earlier, Atiyah claimed to have solved a famous problem in differential geometry in a paper he posted on the preprint repository ArXiv, but peers soon pointed out inaccuracies in his approach and the proof was never formally published.

Science contacted several of Atiyah’s colleagues. They all expressed concern about his desire to come out of retirement to present proofs based on shaky associations and said it was unlikely that his proof of the Riemann hypothesis would be successful. But none wanted to publicly criticize their mentor or colleague for fear of jeopardizing the relationship.

**Update:** Some final thoughts on Atiyah’s failed proof:

Firstly, it’s become clear that the work presented by Atiyah doesn’t constitute a proof of the Riemann Hypothesis, so the Clay Institute can rest easy with their 1 million dollars, and encryption on the internet remains safe. The argument Atiyah put forward rests on his function 𝑇(𝑠) having certain properties, and many have concluded that no function with such properties can exist, including in a comment on our own post from an academic who’s written about the Riemann Hypothesis extensively. Dick Lipton and Ken Regan have written a blog post looking at the detail of how 𝑇 is supposed to behave. According to some sources, Atiyah has stated he’ll be publishing a more detailed paper shortly, but not many are holding their breath.

This is not easy mathematics, and even top-level mathematicians sometimes find their proofs don’t hold together; it’s no surprise that a solution to this problem wouldn’t be found in a few lines of mathematics, and it’s just a shame that this was so built up and sensationalised when it’s becoming obvious that Atiyah didn’t consult with anyone else about his proof before presenting it in a public forum.

And with that, Betteridge’s law of headlines holds up yet again. QED.


Tree Mountain is a man-made mountain 125 feet high covered in 11,000 trees planted in a configuration according to the Golden Ratio. This art installation was conceived and built by artist Agnes Denes in Finland and is designed to endure for 400 years.

A mountain needed to be built to design specifications, which by itself took over four years and was the restitution work of a mine that had destroyed the land through resource extraction. The process of bioremediation restores the land from resource extraction use to one in harmony with nature, in this case, the creation of a virgin forest. The planting of trees holds the land from erosion, enhances oxygen production and provides home for wildlife. This takes time and it is one of the reasons why Tree Mountain must remain undisturbed for centuries. The certificates the planters received are numbered and reach 400 years into the future as it takes that long for the ecosystem to establish itself. They are inheritable documents that connect the eleven thousand planters and their descendants reaching into millions, connected by their trees.

Here’s Tree Mountain on Google Maps and a lovely video of the mountain shot from a drone:

You may have seen another of Denes’ projects: a 2-acre wheat field she planted in 1982 near the World Trade Center in Manhattan.

(via shane)


Today’s question is surprisingly tricky, as even the letter writer acknowledges:

*My question is one I’m fumbling to articulate. I’m a math teacher and writer. (I’m a writer in the sense that I write, not in the sense that I get published or paid for writing.) I write a lot about teaching, but I’ve also been trying to get a handle on how I can write about math.*

*Here’s the question: **is it possible to write about math in a deep and accessible way?***

*This is a question that sends me off on a lot of different questions. What does it mean to understand math? What does it mean to understand a metaphor? Are there great literary works that are also mathematical?*

*Ultimately, though, I don’t know how to think about this yet. I’m hoping to eventually figure this out by learning math and writing about it…but that’s slow, so maybe Dr. Time can offer advice?*

The obvious answer to this question is yes, of course it’s possible to write about math in a deep and accessible way. Bertrand Russell won a Nobel Prize in Literature. *Godel, Escher, Bach* is a 777-page doorstop that’s also a beloved bestseller. If you’re looking to satisfy an existence requirement, that book has your back. I’ll even stipulate that for every intellectual subject, not just mathematics, there exists a work that satisfies this deep-but-accessible requirement. It’s just like how there’s always a bigger prime number. It’s out there; we just have to find it.

On the other hand, *math seems hard*. And I think it seems hard for Reasons. Here’s a big one: mathematicians and popularizers of mathematics are perhaps understandably obsessed with understanding mathematics as such. They want to explain the totality of mathematics, or the essence, rather than finer problems like distinguishing between totalities and essences.

If you look at the other sciences, they don’t do this. It’s only very rarely that you get a Newton, Darwin, or Einstein who sets out to grab his or her entire subject with both hands and rethink our fundamental understanding of its foundations. Imagine a biologist who wants to explain life, in its essence and totality, at the micro and macro level. They’d be understandably stumped. Even physicists, when they want to explain something big and weird to the public, stick to things like a subatomic particle they’re hoping to discover or the behavior of one of Saturn’s moons. They don’t try to explain physics. They explain a problem in physics.

When mathematicians do that, they’re usually pretty successful. The Konigsberg Bridge Problem is charming as hell. Russell’s and Godel’s paradoxes have whole books written about them, but can also be told in the form of jokes. Even Fourier Transforms can be broken down and made beautiful with a little bit of technical help.

So I think the key, in part, is to resist that mathematicians’ tendency to abstract away individual problems into general solutions or categories of solutions or entire subfields, and spend some time with the specific problems that mathematicians are or have been interested in. But it also helps a lot if, in that specific problem, you get that mathematical move of discarding whatever doesn’t matter to the structure of the problem. After all, that’s a big part of what you’re trying to teach: how to think like a mathematician. You just have to unlearn what a mathematician already assumes first.


The Fourier Transform is an incredibly useful mathematical function that breaks a continuous signal down into the individual frequencies that make it up. As you can see from the Wikipedia page, the formula and the mathematical explanation of the Fourier Transform can get quite complicated. But as with many complex mathematical subjects, the FT can also be explained visually. In the video above, 3blue1brown breaks down the Fourier Transform into a really intuitive visual system that’s surprisingly easy to follow even if you’re not a science or math person. This would have been *super* helpful in my physics and math classes in college.

See also Better Explained’s interactive guide to the Fourier Transform, which describes the FT metaphorically like so:

What does the Fourier Transform do? Given a smoothie, it finds the recipe.

How? Run the smoothie through filters to extract each ingredient.

Why? Recipes are easier to analyze, compare, and modify than the smoothie itself.

How do we get the smoothie back? Blend the ingredients.

The guide includes interactive graphs that you can play around with. Stuff like this always gets me so fired up about math and science. Ah, the path not taken…
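If you'd rather poke at the smoothie-to-recipe idea in code than in an interactive graph, numpy's FFT does the whole job. A tiny sketch: mix two known frequencies, then recover them from the transform:

```python
import numpy as np

# Build a "smoothie": a 1-second signal mixing a 3 Hz and an 8 Hz ingredient.
sample_rate = 100                       # samples per second
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)

# The Fourier transform hands back the recipe: how much of each frequency is present.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

for f, strength in zip(freqs, spectrum):
    if strength > 1:                    # ignore numerical dust
        print(f"{f:.0f} Hz, amplitude ~{2 * strength / len(signal):.2f}")
# -> 3 Hz, amplitude ~1.00   and   8 Hz, amplitude ~0.50
```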


In the first in a series of videos, Kurzgesagt tackles one of my favorite scientific subjects: how the size of an animal governs its behavior, appearance, and abilities. For instance, because the volume (and therefore mass) of an organism increases according to the cube of the increase in length (e.g. if you double the length/height of a dog, its mass roughly increases by 8 times), when you drop differently sized animals from high up, the outcomes are vastly different (a mouse lands safely, an elephant splatters everywhere).

The bit in the video about how insects can breathe underwater because of the interplay between the surface tension of water and their water-repellant outer layers is fascinating. The effect of scale also comes into play when considering the longevity of NBA big men, how fast animals move, how much animals’ hearts beat, the question of fighting 100 duck-sized horses or 1 horse-sized duck, and shrinking people down to conserve resources.

When humans get smaller, the world and its resources get bigger. We’d live in smaller houses, drive smaller cars that use less gas, eat less food, etc. It wouldn’t even take much to realize gains from a Honey, I Shrunk Humanity scheme: because of scaling laws, a height/weight proportional human maxing out at 3 feet tall would not use half the resources of a 6-foot human but would use somewhere between 1/4 and 1/8 of the resources, depending on whether the resource varied with volume or surface area. Six-inch-tall humans would potentially use 1728 times fewer resources.
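The shrinking arithmetic is just the square-cube law. Here's that paragraph's estimate as a calculation, under the same assumption it makes, that resource use scales with either volume or surface area:

```python
def resource_fraction(scale, exponent):
    """Fraction of resources used after shrinking by `scale`, if use ~ length**exponent."""
    return scale ** exponent

half_height = 0.5          # 6 ft -> 3 ft
tiny = 0.5 / 6             # 6 ft -> 6 in, a 12x linear reduction

print(resource_fraction(half_height, 3))   # 1/8 if resource use tracks volume (mass, food)
print(resource_fraction(half_height, 2))   # 1/4 if it tracks surface area (skin, fabric)
print(1 / resource_fraction(tiny, 3))      # 1728x less, if it tracks volume
```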

See also The Biology of B-Movie Monsters, which is perhaps the most-linked article in the history of kottke.org.


This short video shows several ways in which systemic racism is still very much alive and well in the United States in 2017. See also Race Forward’s video series featuring Jay Smooth.

“What Is Systemic Racism?” is an 8-part video series that shows how racism shows up in our lives across institutions and society: Wealth Gap, Employment, Housing Discrimination, Government Surveillance, Incarceration, Drug Arrests, Immigration Arrests, Infant Mortality… yes, systemic racism is really a thing.

The reason why this matters should be obvious. Just like extra effort can harness the power of compound interest in knowledge and productivity, even tiny losses that occur frequently can add up to a large deficit. If you are constantly getting dinged in even small ways just for being black, those losses add up and compound over time. Being charged more for a car and other purchases means less life savings. Less choice in housing results in higher prices for property in less desirable neighborhoods, which can impact choice of schools for your kids, etc. Fewer callbacks for employment means you’re less likely to get hired. Even if you do get the job, if you’re late for work even once every few months because you get stopped by the police, you’re a little more likely to get fired or receive a poor evaluation from your boss. Add up all those little losses over 30-40 years, and you get *exponential losses* in income and social status.

And these losses often aren’t small at all, to say nothing of drug offenses and prison issues; those are massive life-changing setbacks. The war on drugs and racially selective enforcement have hollowed out black America’s social and economic core. There’s a huge tax on being black in America and unless that changes, the “American Dream” will remain unavailable to many of its citizens.


How are some people more productive than others? Are they smarter or do they just work a little bit harder than everyone else? In 1986, mathematician and computer scientist Richard Hamming gave a talk at Bell Communications Research about how people can do great work, “Nobel-Prize type of work”. One of the traits he talked about was possessing great drive:

Now for the matter of drive. You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode’s office and said, “How can anybody my age know as much as John Tukey does?” He leaned back in his chair, put his hands behind his head, grinned slightly, and said, “You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.” I simply slunk out of the office!

What Bode was saying was this: “Knowledge and productivity are like compound interest.” Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity — it is very much like compound interest. I don’t want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.

Thinking of life in terms of compound interest could be very useful. Early and intensive investment in something you’re interested in cultivating — relationships, money, knowledge, spirituality, expertise, etc. — often yields exponentially better results than even marginally less effort.
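Hamming's compound-interest claim is easy to play with numerically. This is a toy model with made-up rates (knowledge compounds a little each working day, and 10% more effort buys a 10% higher daily rate); the only point is that a small steady edge compounds into a big multiple:

```python
def career_knowledge(daily_growth, days=250 * 30):
    """Toy model: knowledge compounds by `daily_growth` each working day for 30 years."""
    return (1 + daily_growth) ** days

baseline = career_knowledge(0.0010)   # made-up rate: 0.10% compounding per working day
harder = career_knowledge(0.0011)     # 10% more effort -> 10% higher daily rate

print(f"{harder / baseline:.1f}x")    # ~2.1x over a career, from a 10% edge
```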

See also this metaphor for how cultural, technological, and scientific changes happen. (via mr)


Records of when the cherry blossoms appear in Kyoto date back 1200 years. (Let’s boggle at this fact for a sec…) But as this chart of peak-bloom dates shows, since the most recent peak in 1829, the cherry blossoms have been arriving earlier and earlier in the year.

From its most recent peak in 1829, when full bloom could be expected to come on April 18th, the typical full-flowering date has drifted earlier and earlier. Since 1970, it has usually landed on April 7th. The cause is little mystery. In deciding when to show their shoots, cherry trees rely on temperatures in February and March. Yasuyuki Aono and Keiko Kazui, two Japanese scientists, have demonstrated that the full-blossom date for Kyoto’s cherry trees can predict March temperatures to within 0.1°C. A warmer planet makes for warmer Marches.

Temperature and carbon-related charts like this one are clear portraits of the Industrial Revolution, right up there with oil paintings of the time. I also enjoyed the correction at the bottom of the piece:

An earlier version of this chart depicted cherry blossoms with six petals rather than five. This has been amended. Forgive us this botanical sin.

Gotta remember that flower petals are very often numbered according to the Fibonacci sequence.


The abacus counting device dates back thousands of years but has, in the past century, been replaced by calculators and computers. But studies show that abacus use can have an effect on how well people learn math. In this excerpt adapted from his new book Learn Better, education researcher Ulrich Boser writes about the abacus and how people learn.

Researchers from Harvard to China have studied the device, showing that abacus students often learn more than students who use more modern approaches.

UC San Diego psychologist David Barner led one of the studies, and he argues that abacus training can significantly boost math skills with effects potentially lasting for decades.

“Based on everything we know about early math education and its long-term effects, I’ll make the prediction that children who thrive with abacus will have higher math scores later in life, perhaps even on the SAT,” Barner told me.

Ignore the hyperbolic “and it changed my life” in the title…this piece is interesting throughout. For example, this passage on the strength of the mind-body connection and the benefits of learning by doing:

When first I watched high school abacus whiz Serena Stevenson, her hand gestures seemed like a pretentious affect, like people who wear polka-dot bow ties. But it turned out that her finger movements weren’t really all that dramatic, and on YouTube, I watched students with even more theatrical gesticulations. What’s more, the hand movements turned out to be at the heart of the practice, and without any arm or finger motions, accuracy can drop by more than half.

Part of the explanation for the power of the gestures goes to the mind-body connection. But just as important is the fact that abacus makes learning a matter of doing. It’s an active, engaging process. As one student told me, abacus is like “intellectual powerlifting.”

Psychologist Rich Mayer has written a lot about this idea, and in study after study he has shown that people gain expertise by actively producing what they know. As he told me: “Learning is a generative activity.”

I’d never heard of the concept of overlearning before:

Everybody from actors learning lines, to musicians learning new songs, to teachers trying to impart key facts to students has observed that learning has to “sink in” in the brain. Prior studies and also the new one, for example, show that when people learn a new task and then learn a similar one soon afterward, the second instance of learning often interferes with and undermines the mastery achieved on the first one.

The new study shows that overlearning prevents against such interference, cementing learning so well and quickly, in fact, that the opposite kind of interference happens instead. For a time, overlearning the first task prevents effective learning of the second task — as if learning becomes locked down for the sake of preserving mastery of the first task. The underlying mechanism, the researchers discovered, appears to be a temporary shift in the balance of two neurotransmitters that control neural flexibility, or “plasticity,” in the part of the brain where the learning occurred.

“These results suggest that just a short period of overlearning drastically changes a post-training plastic and unstable [learning state] to a hyperstabilized state that is resilient against, and even disrupts, new learning,” wrote the team led by corresponding author Takeo Watanabe, the Fred M. Seed Professor of Cognitive Linguistic and Psychological Sciences at Brown.

