How do scientists go about calculating pi to umpteen decimal places?
I read recently that two supercomputer manufacturers were in a contest to determine who could calculate pi to the most digits. My simple question, simple for you at least, is, what data do they input to begin these calculations? Every schoolchild knows that pi is the ratio of a circle's circumference to its diameter. Obviously mathematicians do not draw a circle and then measure out the circumference with increasingly tiny rulers. But what do they do instead?
Dreaming up "algorithms" (techie talk for "methods") to compute pi has occupied the world's great minds for more than two millennia. Clearly these aren't guys you'd want to go on a long fishing trip with. The ancient Greeks used a simple method: You draw polygons (e.g., hexagons) around a circle with a diameter of one — one hexagon inside the circle, one out. Calculate the perimeter of the polygons (which is pretty straightforward), take an average, and you get a rough idea of pi. Use polygons with more sides and your approximation of pi gets closer and closer. The mathematician Archimedes got as far as 96 sides, calculating that pi was between 3.1408 and 3.1428.
Today mathematicians use far more sophisticated algorithms involving converging infinite series. A converging infinite series is an endless sum whose running total gets ever closer to (but never quite reaches) a target number called a limit. For example, the limit of the series 1 + 1/2 + 1/4 + 1/8 + … is 2.
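You can watch the running total creep up on its limit in a couple of lines of Python (just the example series above, nothing fancy):

```python
# Partial sums of 1 + 1/2 + 1/4 + 1/8 + ... creep up on the limit, 2,
# but never quite get there.
total = 0.0
for k in range(20):
    total += 1 / 2**k

print(total)  # 1.9999980926513672 — close to 2, and always just under
```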
Long ago it was realized that certain infinite series converge on fractions or reciprocals of pi. For example, the series 1 - 1/3 + 1/5 - 1/7 + … converges on pi/4 — a fact published by James Gregory in 1671 and discovered independently by Gottfried Leibniz a few years later. This may seem strange — I mean, what do fractions have to do with the circumference of a circle? — but take my word for it, it happens.
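Translated into Python, the series looks like this (a sketch; the function name is mine, and notice how slowly the thing converges):

```python
# The Gregory-Leibniz series: 1 - 1/3 + 1/5 - 1/7 + ... -> pi/4.
# Famously slow: each term only shaves the error down a little.
def leibniz_pi(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1000))  # roughly 3.14059 — a thousand terms in, still off in the third decimal
```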
The discovery of ever more "efficient" infinite series — that is, series that home in on pi faster with each term you add — coupled with the development of bigger and better computers has made it possible to calculate pi to thousands, millions, and now billions of decimal places. Cecil, knowing his readers' love of higher mathematics, would be pleased to reprint one of these magic pi recipes in full, but there isn't room and besides, I gave up Greek subscripts for Lent.
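One classic example of a more efficient recipe is John Machin's 1706 formula, pi = 16·arctan(1/5) - 4·arctan(1/239), which powered record calculations for a couple of centuries. Here's a sketch using Python's built-in big integers (the helper names and the ten guard digits are my own choices, not anybody's official implementation):

```python
def arctan_inv(x, one):
    # arctan(1/x) as a scaled integer, where `one` represents 1.0.
    # Gregory series: 1/x - 1/(3x^3) + 1/(5x^5) - ...
    total = term = one // x
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def machin_pi(digits):
    # Machin's formula, with 10 guard digits to absorb truncation error.
    one = 10 ** (digits + 10)
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** 10   # pi as a scaled integer: 314159...

print(str(machin_pi(30)))  # 3 followed by 30 correct decimals of pi
```

The arctan(1/5) and arctan(1/239) series gain more than a digit of accuracy per term, which is what "efficient" means here — compare that with the thousand-terms-for-three-digits Gregory-Leibniz series.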
Why compute one billion digits? God knows. As one learned treatise notes, "thirty-nine places of pi suffice for computing the circumference of a circle girdling the known universe with an error no greater than the radius of a hydrogen atom." One pi-wars participant rationalizes the quest by saying that once you get beyond a billion digits, subtle patterns may begin to emerge in the numbers, but give me a break. The real reason, many feel, is "because it's there." So immature. Thank God the rest of us have put such foolishness behind us.
Show some respect
Most of your comments on the pi calculation were reasonable, but the last few sentences were not. Gregory Chudnovsky is one of the people doing these calculations, and he is the wisest man I know. Adversity can lead to deep growth. He got myasthenia gravis when he was 10 or 12, and has spent the last 25 years in bed or a wheelchair. The KGB worked the family over in their usual way before they were allowed to leave Kiev about 12 years ago. He was the one mathematician in the first group of MacArthur fellows and the best of all the mathematics appointments. As Herbert Robbins once said, Gregory seems to have come directly from a Dostoievski novel.
There are serious questions about what "random" means, and David and Gregory Chudnovsky care about them. Their pi calculations bear on this, and on certain deep problems in transcendental number theory. The right way for you to have answered the letter would have been to say that the people who do these calculations know why they're doing it but you don't. Even the Straight Dope doesn't have all the answers.
Well, excuse me. However, see below.
Thanks for the beaut on calculating pi to umpteen decimal places. Absurd though it is, I can think of one good reason why some computer guys may do it — they may need some standard, endless task with which to calibrate their computer's speed.
It's even more absurd that the "learned treatise" you quoted said "thirty-nine places of pi suffice for computing the circumference of a circle girdling the known universe with an error no greater than the radius of a hydrogen atom." Actually, only 37 places are required. Here's why: a reasonable value for the radius of the universe is 2 x 10^36 angstroms. That's just 20 billion years (the time since the big bang) times the speed of light (the upper limit on the rate of expansion). Since pi equals the circumference divided by twice the radius, the uncertainty in pi equals the uncertainty in the circumference (one half angstrom, the radius of a hydrogen atom) divided by twice the radius. That's (1/2) / (2 x [2 x 10^36]), or 1/(8 x 10^36), or about 10^-37. Knowing pi to 39 decimal places would pin down that circumference to within a few thousandths of an angstrom, and that's a whole lot smaller than the entire atom. I'm sure you'd want to get a thing like that straight.
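The error-propagation step is easy to check in code. A sketch, using the letter's own round figures (a 20-billion-year-old universe and a half-angstrom hydrogen atom) and the standard length of a light year:

```python
import math

# How many decimal places of pi do you need to get the circumference of
# the known universe right to within one hydrogen-atom radius?
LIGHT_YEAR_ANGSTROMS = 9.461e15 * 1e10   # metres per light year, in angstroms
radius = 20e9 * LIGHT_YEAR_ANGSTROMS     # 20 billion light years ~ 2e36 angstroms
atom_radius = 0.5                        # hydrogen atom radius, angstroms

# pi = C / (2r), so an uncertainty of atom_radius in the circumference C
# means an uncertainty of atom_radius / (2r) in pi.
pi_uncertainty = atom_radius / (2 * radius)
places_needed = math.ceil(-math.log10(pi_uncertainty))
print(places_needed)  # 37
```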
I knew that. But I always like it better when you guys figure things out for yourselves.