How are the known digits of $\pi$ guaranteed?
When discussing with my son a few of the many methods to calculate the digits of $\pi$ (15-year-old school level), I realized that the methods I know more or less (geometric approximation, Monte Carlo and basic series) are all convergent, but none of them explicitly states that the $n$-th digit calculated at some point is indeed a true digit (i.e. that it will not change in further calculations).
To take an example, the Gregory–Leibniz series gives us, for each step:
$$
\begin{align}
\frac{4}{1} & = 4\\
\frac{4}{1}-\frac{4}{3} & = 2.666666667...\\
\frac{4}{1}-\frac{4}{3}+\frac{4}{5} & = 3.466666667...\\
\frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7} & = 2.895238095...
\end{align}
$$
The integer part has changed four times in four steps. Why would we know that $3$ is the correct first digit?
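To make the oscillation concrete, here is a minimal Python sketch (any language would do) that prints the first few partial sums; the leading digit keeps flipping:

```python
# Partial sums of the Gregory-Leibniz series 4 - 4/3 + 4/5 - 4/7 + ...
def leibniz_partial_sums(n_terms):
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * 4.0 / (2 * k + 1)
        yield total

for i, s in enumerate(leibniz_partial_sums(8), start=1):
    print(f"after {i} term(s): {s:.9f}")
```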
Similarly in Monte Carlo: the larger the sample, the better the result, but do we mathematically know that "now that we have tried [that many times], we are mathematically sure that $\pi$ starts with $3$"?
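For the Monte Carlo case, what I have in mind is a sketch like the following; the estimate is usually close, but as far as I can tell nothing about it is certified:

```python
import random

# Estimate pi by sampling points uniformly in the unit square and counting
# how many fall inside the quarter circle x^2 + y^2 <= 1.
def monte_carlo_pi(n, seed=0):
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

print(monte_carlo_pi(1_000_000))  # usually 3.14..., but with no hard guarantee
```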
In other words:
- does each of the techniques to calculate $\pi$ (or at least the major ones) come with a proof that a given digit is now correct?
- if not, what are examples of techniques that do and that do not have such a proof?
Note: The great answers so far (thank you!) mention a proof for a specific technique, and/or a proof that a specific digit is indeed the correct one. I was more interested in understanding whether this applies to all of the (major) techniques (= whether they all certify that a given digit is guaranteed correct).
Or do we have some which do (the ones in the first two answers, for instance) and others which do not (the further we go, the more precise the number, but we do not know whether something will jump in at some step and change a previously stable digit)? While typing this in and thinking on the fly, I wonder whether such a technique would not be a very bad one in itself, due to that lack of stability.
pi decimal-expansion
asked Nov 13 '18 at 9:54 – WoJ, edited Nov 13 '18 at 17:26 – miracle173
Check your arithmetic on the series again.
– saulspatz, Nov 13 '18 at 9:57
@saulspatz: thanks! (shame on at least four generations), I corrected that and will make a nicer set of equations once I learn how to typeset them.
– WoJ, Nov 13 '18 at 10:04
Regarding the Monte Carlo method, since it's based on random sampling, there can be no guarantees of any kind no matter how many samples you take. You could take a trillion samples and the result might still be zero. It's extremely unlikely, but there's a finite non-zero probability.
– user1008646, Nov 13 '18 at 13:12
With thanks to the R package gmp, I can happily tell you that $\frac{884279719003555}{2^{48}}$ is equal to $\pi$ to within a part in $10^{16}$.
– Carl Witthoft, Nov 13 '18 at 16:37
Using any technique to evaluate digits of $\pi$ also includes an analysis of the error involved. For example, if one uses a series, one knows the amount of error involved in summing $n$ terms, and the error estimate tells you the number of correct digits obtained.
– Paramanand Singh, Nov 14 '18 at 7:36
10 Answers
I think the general answer you're looking for is:
Yes, proving that a method for calculating $\pi$ works requires also describing (and proving) a rule for when you can be sure of a digit you have produced. If the method is based on "sum such-and-such series", this means that one needs to provide an error bound for the series. Before you have that, what you're looking at is not yet a "method for calculating $\pi$".
So the answer to your first question is "Yes; because otherwise they wouldn't count as techniques for calculating $\pi$ at all".
Sometimes the error bound can be left implicit because the reader is supposed to know some general theorems that lead to an obvious error bound. For example, the Leibniz series you're using is an absolutely decreasing alternating series, and therefore we can avail ourselves of a general theorem saying that the limit of such a series is always strictly between the last two partial sums. Thus, if you get two approximations in succession that start with the same $n$ digits, you can trust those digits.
(The Leibniz series is of course a pretty horrible way to calculate $\pi$ -- for example you'll need at least two million terms before you have any hope of the first six digits after the point stabilizing, and the number of terms needed increases exponentially when you want more digits).
In other cases where an error bound is not as easy to see, one may need to resort to ad-hoc cleverness to find and prove such a bound -- and then this cleverness is part of the method.
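As a concrete sketch of that stopping rule (an illustration only, in plain floating point, so it is practical for a handful of digits at most):

```python
import math

# Successive partial sums of the Leibniz series land alternately above and
# below pi, so pi is always strictly between the last two of them.  Once
# those two brackets agree on their first d decimals (same truncation),
# the decimals they share can no longer change.
def certified_decimals(d, max_terms=10_000_000):
    total, prev = 0.0, None
    for k in range(max_terms):
        total += (-1) ** k * 4.0 / (2 * k + 1)
        if prev is not None:
            lo, hi = sorted((prev, total))
            if math.floor(lo * 10 ** d) == math.floor(hi * 10 ** d):
                return math.floor(lo * 10 ** d) / 10 ** d, k + 1
        prev = total
    raise RuntimeError("not certified within max_terms")

print(certified_decimals(3))   # -> (3.141, number of terms that were needed)
```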
answered Nov 13 '18 at 14:49, edited Nov 13 '18 at 19:16 – Henning Makholm
Note that $\pi=6\arcsin\left(\frac12\right)$. So, since$$\arcsin(x)=\sum_{n=0}^\infty \frac1{2^{2n}}\binom{2n}n\frac{x^{2n+1}}{2n+1},$$you have$$\pi=\sum_{n=0}^\infty\frac6{2^{4n+1}(2n+1)}\binom{2n}n.$$Now, for each $N\in\mathbb{Z}^+$, let$$S_N=\sum_{n=0}^N\frac6{2^{4n+1}(2n+1)}\binom{2n}n\text{ and let }R_N=\sum_{n=N+1}^\infty\frac6{2^{4n+1}(2n+1)}\binom{2n}n.$$Then:
- $(\forall N\in\mathbb{Z}^+):\pi=S_N+R_N$;
- the sequence $(S_N)_{N\in\mathbb{Z}_+}$ is strictly increasing and $\lim_{N\to\infty}S_N=\pi$. In particular, each $S_N$ is a better approximation of $\pi$ than the previous one.
Since$$(\forall n\in\mathbb N):\binom{2n}n<4^n=2^{2n},$$you have$$R_N<\sum_{n=N+1}^\infty\frac6{2^{2n+1}}=\frac1{4^N}.$$So, taking $N=0$, you get that $\pi=S_0+R_0$. But $S_0=3$ and $R_0<1$. So, the first digit of $\pi$ is $3$. If you take $N=3$, then $\pi=S_3+R_3$. But $S_3\approx3.14116$ and $R_3<0.015625$. So, the second digit is $1$. And so on…
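A small Python sketch of exactly this certification, using exact rational arithmetic so that rounding cannot spoil the bounds (math.comb needs Python 3.8 or newer):

```python
from fractions import Fraction
from math import comb

# S_N = sum_{n=0}^{N} 6*C(2n,n) / (2^(4n+1) * (2n+1)), computed exactly,
# together with the tail bound R_N < 1/4^N, giving S_N <= pi < S_N + 1/4^N.
def pi_bounds(N):
    S = sum(Fraction(6 * comb(2 * n, n), 2 ** (4 * n + 1) * (2 * n + 1))
            for n in range(N + 1))
    return S, S + Fraction(1, 4 ** N)

for N in (0, 3, 10):
    lo, hi = pi_bounds(N)
    print(N, float(lo), float(hi))   # e.g. N=3: 3.14115... <= pi < 3.15678...
```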
answered Nov 13 '18 at 10:22, edited Nov 13 '18 at 18:23 – José Carlos Santos
I think this could be made clearer to the less mathematically literate by mentioning your partial sums are monotonically increasing.
– Yakk, Nov 13 '18 at 15:05
@Yakk Oh, yes - that makes all the difference :-p
– Strawberry, Nov 13 '18 at 16:47
@Strawberry Ok, I'll translate the rest: "We can find a sequence that only increases and we know converges to pi. We can also bound its error, how far off it is, with an upper bound. So when its ones digit reaches 3 plus some fraction (0 in the above case) and its error is sufficiently less than 1 minus the same fraction, we know the ones digit is 3. We can repeat this for each digit."
– Yakk, Nov 13 '18 at 16:50
@Yakk Is it like saying that while we can see that a hexagon is a better approximation of a circle than a triangle, we can also deduce that only a polygon with infinitely many sides will ever truly approximate a circle?
– Strawberry, Nov 13 '18 at 16:58
The simplest method to explain to a child is probably the polygon method, which uses the fact that the circumference of a circle is bounded from below by the perimeter of an inscribed regular $n$-gon and from above by the perimeter of a circumscribed one.
Once you have a bound from below and above, you can guarantee some digits. For example, any number between $0.12345$ and $0.12346$ will begin with $0.1234$.
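A short Python sketch of the same idea, using Archimedes' side-doubling recurrence starting from the hexagon (the starting values $3$ and $2\sqrt3$ are the half-perimeters of the inscribed and circumscribed hexagons of a unit circle):

```python
import math

# upper = half-perimeter of the circumscribed n-gon (decreases towards pi),
# lower = half-perimeter of the inscribed n-gon (increases towards pi).
def polygon_bounds(doublings):
    upper, lower = 2 * math.sqrt(3), 3.0   # regular hexagon, n = 6
    n = 6
    for _ in range(doublings):
        upper = 2 * upper * lower / (upper + lower)  # circumscribed 2n-gon
        lower = math.sqrt(upper * lower)             # inscribed 2n-gon
        n *= 2
    return n, lower, upper

for d in range(1, 6):
    n, lo, hi = polygon_bounds(d)
    print(f"{n}-gon: {lo:.7f} < pi < {hi:.7f}")
```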
answered Nov 13 '18 at 10:00 – 5xum
Note that this is implied by the monotonicity of the sequences of the areas.
– nicomezi, Nov 13 '18 at 10:02
Yes, this is what I actually used (the polygon method is what they learned) but this is exactly what brought up the more general question about other methods.
– WoJ, Nov 13 '18 at 10:06
Can you explain to me how the circumference of a polygon that circumscribes the circle bounds the circumference of the circle from above? I can't see this. I can see that the circumference of a polygon inscribed in the circle is smaller than the circumference of the circle, because the length of the line from one point to another is smaller than the length of any other curve connecting two points. But I cannot see this for the circumscribing polygon. If we use the areas instead of the lengths the situation is simple. But initially pi is defined by the circumference.
– miracle173, Nov 13 '18 at 10:59
There are a few more details to that method than are usually taught. You can take a very similar approach where you don't use regular polygons but instead use inscribed and circumscribed approximations made up of square-shaped pixels. As you increase the resolution of those pixels the shapes will converge towards the circle. But if you use that approach you'll end up showing that pi equals 4. So in order for this method to produce a valid proof one has to show why approximating with regular polygons is valid but other shapes are not.
– kasperd, Nov 13 '18 at 15:31
@pipe Is this the sort of thing you're looking for?
– Richard Ward, Nov 13 '18 at 16:39
In his answer, José shows how to calculate $\pi$ via a specific approximation and why that works. I believe the why is rather overlooked there, and I wanted to clarify it and make it less specific to the computation of $\pi$.
Imagine you compute $S = \Sigma_0^{\infty} a_n$ for some series $a_n$. And, after summing the first few terms, let's say $\bar S_i = \Sigma_0^i a_n$, you can also prove that the rest of the sum lies between some bounds $R_i^- \le \Sigma_{i + 1}^\infty a_n \le R_i^+$. Then you know also that $\bar S_i + R_i^- \le S \le \bar S_i + R_i^+$. See how that bounds the exact sum $S$ from above and below? If now both the upper and the lower bound have the same leading digits, we can be sure that those are also the leading digits of $S$.
Now, have another look at what José does: he computes the sum over a series up to term $N$ - the exact series is not important here. He bounds the errors by $R_N^- = 0$ - all terms are positive - and $R_N^+ = \frac{1}{4^N}$. So after you have summed the first $N$ terms, what I called $\bar S_N$, you can definitely say $\bar S_N \le S \le \bar S_N + \frac{1}{4^N}$.
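A tiny Python helper for that last step (plain floats, so only meaningful up to roughly 15 significant digits; just a sketch):

```python
import math

# If floor(lo * 10^d) == floor(hi * 10^d), every number in [lo, hi] starts
# with the same d decimals, so those decimals of S are guaranteed.
def guaranteed_decimals(lo, hi, max_d=12):
    d = 0
    while d < max_d and math.floor(lo * 10 ** (d + 1)) == math.floor(hi * 10 ** (d + 1)):
        d += 1
    return d, math.floor(lo * 10 ** d) / 10 ** d

# roughly Jose's bounds for N = 3:  S_3 <= pi <= S_3 + 1/4^3
print(guaranteed_decimals(3.14116, 3.14116 + 0.015625))   # -> (1, 3.1)
```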
The answers so far to this great question illustrate a problem that we should redress in this forum: we rush in good faith to say something smart, something that other mathematicians may enjoy for its cleverness, but something which is often difficult for the OP to digest.
*steps off the soap box
Let me try a different take that will be of use to a 15 yo. There are two parts to the question: a) Do all known methods get arbitrarily many digits correct, b) how to tell that a digit is already correct.
a) Throughout history, people have found many ingenious ways to approximate $\pi$, say as $22/7$ or $\sqrt{10}$. Sometimes they knew they had an approximation, sometimes they mistakenly assumed they had the actual value.
When in modern mathematics a formula is presented for $\pi$, it is guaranteed to give (eventually) as many digits as desired. The keyword is to say that the formula converges.
Please note that mathematicians word things differently; we do not care that “we get arbitrarily many digits correctly”, but rather that the value computed “is arbitrarily close to the target value”. These are equivalent, but the second is not dependent on writing numbers in base 10.
b) Every formula converges at its own pace, so there is no universal way to decide when a digit given by one or another is settled. However, there are general techniques to prove convergence, and often it is possible to see at a glance (or after a brief computation) that the formula converges. Other times it is not so straightforward...
So let’s take a look at only one example; namely the formula mentioned in the question:
$$4-4/3+4/5-4/7+\ldots$$
This is particularly slow, but offers a great insight into convergence. It is an example of an alternating series; i.e., you add, then subtract, then add, then subtract, in perfect alternation. Moreover, each term is smaller than the previous one, as in $4/3>4/5>4/7>\ldots$. Moreover, these terms get arbitrarily small, as in
$$4/4000001 < 4/4000000 = 0.000001$$
Now given these three conditions, we know the infinite sum will converge to a final value (which we are told is $\pi$). Why? Plot the consecutive sums on the real line to see what happens. You get 4, then 2.6666, then 3.46666, etc. More, then less, then more, so that the values are nested (because each term is smaller than the previous), and overshoot the final value of $\pi$. Since the terms get small, the sums are forced to get closer and closer to the final value.
Here is the kicker: when you add $4/41$ (for instance), you overshoot your mark, so the current sum is closer to $\pi$ than $4/41$, and similarly for any other summand.
In particular, when you add $4/4000001$, you are closer to the target than 0.000001, and the first 5 digits will be guaranteed.
Disclaimer. This does not show that the final value is $\pi$. That requires more math. The argument only shows that the sum converges to a final value.
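If you want to see the last claim in action, here is a quick Python check (deliberately slow, since the series itself is slow):

```python
# Sum the Leibniz series until the last added term drops below 10^-6; by the
# alternating-series argument above, the partial sum is then within that last
# term (hence within 0.000001) of the limit.
total, k, term = 0.0, 0, 4.0
while term > 1e-6:
    term = 4.0 / (2 * k + 1)
    total += term if k % 2 == 0 else -term
    k += 1
print(k, total)   # about 2,000,001 terms, total = 3.1415...
```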
(-1) A series converging is not equivalent to getting arbitrarily many digits correct. Consider $\sum_{i=1}^{\infty}\frac 1{2^i}$ as an approximation for $1$. This clearly converges, but never gets any digits correct!
– DreamConspiracy, Nov 13 '18 at 17:45
It does get every digit correct in 0.99999999..., which, as you know, is a different expression for the number we usually denote 1.
– Rodrigo A. Pérez, Nov 14 '18 at 0:02
Sorry, I should have been clearer. Yes, $\sum_{i=1}^{\infty}\frac 1{2^i}$ converges to $1$, but no partial sum ever has any correct digits.
– DreamConspiracy, Nov 14 '18 at 0:17
The Monte Carlo method is a stochastic method, so it does not provide a certain proof. All it can do is say that the probability of getting a particular result, if the estimate were wrong about the first $k$ digits of $\pi$, goes to zero.
For a sequence that converges to $\pi$, however, we have that there is some function $f(k)$ such that for any $k$ and $n>f(k)$, the $n$-th term is correct to $k$ digits (barring the .9999.... issue). That's just from the definition of "converges"; one formulation of what it means to converge that is equivalent to the standard definition is that given any number of digits, there is some point in the sequence such that all the terms after that point are accurate to that number of digits. So any time someone claims that a sequence converges to $\pi$, they are claiming that for each digit, there is some point at which it is certain (however, some people are loose with stochastic terminology, giving such formulations as "converges with probability one", which is not a precise formulation). Generally, proofs of convergence, even if they do not explicitly construct a function $f(k)$, can be easily modified to generate such a function.
For any approximation based on a Taylor series, there is the Lagrange error bound.
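For the Leibniz series from the question, such an $f(k)$ can be written down explicitly; a sketch (this particular bound is specific to that series, not a universal recipe):

```python
# After n >= 2*10^k terms the alternating-series remainder 4/(2n+1) is below
# 10^-k, so the n-th partial sum is within 10^-k of pi.  (An error below 10^-k
# is the precise statement; it only becomes "k settled digits" when pi is not
# sitting right at a decimal rollover.)
def f(k):
    return 2 * 10 ** k

for k in range(1, 7):
    print(k, f(k))   # 20, 200, 2000, ..., 2000000
```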
No method gives $\pi$ exactly, i.e. all digits of $\pi$, in finite time. But many methods give arbitrarily close approximations of $\pi$ if they run long enough. Such methods construct a sequence of values $x_n$ whose limit as $n\to\infty$ is $\pi$. For example, the technique you've mentioned has $x_1=4,\,x_2=4-\frac{4}{3}$, etc.
Now, among the sequences satisfying $\lim_{n\to\infty}x_n=\pi$, some are "faster" than others. For example, the aforementioned sequence has $|x_n-\pi|$ roughly proportional to $\frac{1}{n}$, so the number of correct decimal places in approximating $\pi$ as $x_n$ is approximately $\log n$, for $n$ large. For example, it takes about a million ($400,000$ in fact) terms to get $6$ decimal places right.
The good news is there are much better sequences than that; for example, this method gets a number of correct decimal places approximately proportional to $9^n$. All we have to do to be sure of specific digits is use appropriate mathematical theory to know how far to run a technique for our purposes. The bad news is this theory gets a little thorny, but I'll try to keep it simple. (If you feel I've made it too simple, see here to learn more.)
If $x_n$ is a sequence with limit $L$, and some $K,\,p$ exist with the large-$n$ approximation $|\epsilon_{n+1}|\approx K|\epsilon_n|^p$, where $\epsilon_n:=x_n-L$, there are three separate cases to consider:
- $p=K=1$, resulting in very slow convergence such as our original example;
- $p=1,\,K<1$, so $|\epsilon_n|$ is approximately proportional to $K^n$, and the number of correct decimal places is approximately proportional to $n$;
- $p>1$, so $\log|\epsilon_{n+1}|\approx p\log|\epsilon_n|+\log K$, and the number of correct decimal places is approximately proportional to $p^n$.
The first case is called logarithmic convergence; the second is called linear convergence; the third is called superlinear convergence. Note that among superlinearly convergent algorithms increasing $p$ only causes a fractional reduction in the value of $n$ needed to get a given number of decimal places right, and often high-$p$ algorithms have such complicated steps they aren't worth it. The real question is whether some $p>1$ is achievable.
I linked before to a $p=9$ example of superlinear convergence, but it's very complicated. Depending on your son's ambition in self-education, he may be able to understand how this $p=2$ superlinear method works. In fact I probably should have focused on $p=2$ from the start, since calculus lessons often cover a (usually) $p=2$ technique for solving equations called the Newton-Raphson method. Somewhat easier, since it only requires a few basic facts about complex numbers, is understanding how certain linear methods such as this one work.
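To see what quadratic convergence looks like in practice, here is a small illustration of my own (not the method behind the links): Newton's method applied to $\tan(x/4)=1$, whose root is $\pi$, run at high precision with the mpmath library:

```python
from mpmath import mp, mpf, tan, cos, pi, log10, fabs

mp.dps = 120   # work with 120 significant digits

# Newton's method on f(x) = tan(x/4) - 1.  Since f'(x) = sec(x/4)^2 / 4,
# the update is x <- x - 4 * (tan(x/4) - 1) * cos(x/4)^2.  The iteration is
# quadratically convergent (p = 2): correct digits roughly double per step.
x = mpf(3)
for i in range(6):
    x = x - 4 * (tan(x / 4) - 1) * cos(x / 4) ** 2
    print(i, int(-log10(fabs(x - pi))))   # about 2, 5, 11, 22, 45, ...
```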
That's not what he asked.
– Carl Witthoft, Nov 13 '18 at 16:35
@CarlWitthoft The OP asked for an explanation of how some minimum number of decimal places can be guaranteed. I provided that: we can approximate the error at the $n$th term.
– J.G., Nov 13 '18 at 17:52
We can apply Dalzell's idea for proving $\pi<\frac{22}{7}$ to decimal approximations as well.
The first digit of $\pi$ is guaranteed by the inequality
$$3<\pi<4,$$
which can be proven from the integrals
$$\pi=3+2\int_0^1\frac{x(1-x)^2}{1+x^2}dx$$
and
$$\pi=4-4\int_0^1 \frac{x^2}{1+x^2}dx$$
Similarly, the second digit being $1$ is equivalent to
$$3.1<\pi<3.2$$
or
$$\frac{31}{10}<\pi<\frac{16}{5},$$
which is proven by
$$\pi=\frac{31}{10}+2\int_0^1 \frac{x^2(1-x)^2(1-x+x^2)}{1+x^2}dx$$
and
$$\pi=\frac{16}{5}-\int_0^1 \frac{x^2(1-x)^2(1+2x+x^2)}{1+x^2}dx$$
Similar double inequalities can be written for every digit. For example, the answer https://math.stackexchange.com/a/2485646/134791 shows an integral for $\pi>3.14$.
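These identities are easy to check numerically; here is a sketch using the mpmath library (the proof itself only needs the observation that each integrand is nonnegative on $[0,1]$, so dropping the integral yields the strict inequality):

```python
from mpmath import mp, mpf, quad, pi

mp.dps = 30

# Each right-hand side should equal pi; the inequalities follow because the
# integrands are nonnegative on [0, 1].
identities = [
    3 + 2 * quad(lambda x: x * (1 - x)**2 / (1 + x**2), [0, 1]),
    4 - 4 * quad(lambda x: x**2 / (1 + x**2), [0, 1]),
    mpf(31) / 10 + 2 * quad(lambda x: x**2 * (1 - x)**2 * (1 - x + x**2) / (1 + x**2), [0, 1]),
    mpf(16) / 5 - quad(lambda x: x**2 * (1 - x)**2 * (1 + 2 * x + x**2) / (1 + x**2), [0, 1]),
]
for value in identities:
    print(value, abs(value - pi) < mpf(10) ** -15)   # each line should end in True
```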
I wish to remind you of this formula: $\pi/4 = \arctan(1) = 4\arctan(1/5) - \arctan(1/239)$. This is easily proven with high-school math.
Then with the Taylor formula for the $\arctan$ function you can see that this converges quickly (much more quickly than the series for $\arctan(1)$ itself), and you can even calculate how many digits you gain (on average) with each additional term. It all depends on starting with a good formula!
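A minimal Python sketch of this (plain floats, so it tops out around 15 digits; serious computations would use big-integer arithmetic):

```python
import math

# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239), with arctan
# evaluated from its Taylor series arctan(x) = x - x^3/3 + x^5/5 - ...
def arctan_series(x, terms):
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

approx = 4 * (4 * arctan_series(1 / 5, 10) - arctan_series(1 / 239, 5))
print(approx, abs(approx - math.pi))   # already good to roughly 14 digits
```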
Assuming you can explain to your kid that
$$
a_n \rightarrow \pi \iff \forall\varepsilon>0 \ \exists n_0 \ \text{such that} \ n>n_0 \rightarrow |\pi -a_n|<\varepsilon,
$$
then it is possible to state that $\varepsilon$ is the precision of the "approximation" $a_n$ (for $n>n_0$).
Thus, you can compare the digits of $a_n+\varepsilon$ and of $a_n-\varepsilon$. All the digits on which they agree are certain.
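A tiny sketch of that comparison in Python (the inputs here are ordinary floats, so treat it as an illustration rather than a proof engine):

```python
from decimal import Decimal

# Truncate a_n - eps and a_n + eps to a fixed number of decimals and keep the
# leading digits on which they agree; every number in between starts with them.
def certain_digits(a_n, eps, places=12):
    q = Decimal(f"1e-{places}")
    lo = Decimal(repr(a_n - eps)).quantize(q, rounding="ROUND_FLOOR")
    hi = Decimal(repr(a_n + eps)).quantize(q, rounding="ROUND_FLOOR")
    prefix = ""
    for c_lo, c_hi in zip(str(lo), str(hi)):
        if c_lo != c_hi:
            break
        prefix += c_lo
    return prefix

print(certain_digits(3.14116, 0.015625))   # -> "3.1"
```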
This makes me wonder if there are arbitrarily long sequences of equal digits in pi, all of those being 9. That could cause the number of correct digits to stop progressing over the sequence of approximations, and would be frustrating. Nevertheless, these sequences could not be infinite because pi is irrational. However, if the number being approximated by a non-monotonic series were 1, no digit could ever be guaranteed. And that may be the source of the voice in the back of one's head saying that something might be off here.
– Mefitico, Nov 13 '18 at 18:35
This is slightly weaker than (and possibly equivalent to, I'm not sure) asking whether $\pi$ is a normal number. This is still an open question.
– DreamConspiracy, Nov 14 '18 at 0:21
@DreamConspiracy: Mefitico's property is a necessary but not sufficient condition for being normal. Consider for example $$3.09,099,0999,09999,099999,0999999\ldots$$ which does contain arbitrarily long runs of $9$ but is very far from normal.
– Henning Makholm, Nov 14 '18 at 23:09
$begingroup$
There are a few more details to that method than are usually taught. You can take a very similar approach where you don't use regular polygons but instead use inscribed and circumscribed approximations made up of square shaped pixels. As you increase resolution of those pixels the shapes will converge towards the circle. But if you use that approach you'll end up showing that pi equals 4. So in order for this method to produce a valid proof one has to show why approximating with regular polygons is valid but other shapes are not.
$endgroup$
– kasperd
Nov 13 '18 at 15:31
1
1
$begingroup$
@pipe Is this the sort of thing you're looking for?
$endgroup$
– Richard Ward
Nov 13 '18 at 16:39
$begingroup$
@pipe Is this the sort of thing you're looking for?
$endgroup$
– Richard Ward
Nov 13 '18 at 16:39
|
show 8 more comments
$begingroup$
In his answer, José shows how to calculate $\pi$ via a specific approximation and why that works. I believe the why is rather overlooked there, so I want to clarify it and make it less specific to the computation of $\pi$.
Imagine you compute $S = \sum_0^{\infty} a_n$ for some series $a_n$. Suppose that, after summing the first few terms, say $\bar S_i = \sum_0^i a_n$, you can also prove that the rest of the sum lies between some bounds, $R_i^- \le \sum_{i + 1}^\infty a_n \le R_i^+$. Then you also know that $\bar S_i + R_i^- \le S \le \bar S_i + R_i^+$. See how that bounds the exact sum $S$ from above and below? If both bounds now have the same leading digits, we can be sure that those are also the leading digits of $S$.
Now have another look at what José does: he computes the sum of a series up to term $N$ (the exact series is not important here). He bounds the errors by $R_N^- = 0$ (all terms are positive) and $R_N^+ = \frac{1}{4^N}$. So after you have summed the first $N$ terms, what I called $\bar S_N$, you can definitely say $\bar S_N \le S \le \bar S_N + \frac{1}{4^N}$.
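José's series is not reproduced in this section, so as a stand-in the sketch below uses the Bailey-Borwein-Plouffe series, whose terms are positive (so $R_N^- = 0$) and whose tail after term $N$ is easily bounded by a geometric series (so $R_N^+ \le \frac{4}{15}\cdot 16^{-N}$); the function names are illustrative:
```python
def bbp_term(k):
    """k-th term of the Bailey-Borwein-Plouffe series for pi (each term is positive)."""
    return (4/(8*k + 1) - 2/(8*k + 4) - 1/(8*k + 5) - 1/(8*k + 6)) / 16**k

def bracket_pi(N):
    """Partial sum through term N, plus tail bounds:
    0 <= tail <= sum_{k>N} 4/16**k = (4/15) * 16**(-N)."""
    s = sum(bbp_term(k) for k in range(N + 1))
    r_minus, r_plus = 0.0, (4 / 15) * 16.0 ** (-N)
    return s + r_minus, s + r_plus

lo, hi = bracket_pi(5)
print(lo, hi)   # pi lies between these two values (ignoring float rounding
                # in the partial sum); their shared leading digits are certain
```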
$endgroup$
answered Nov 13 '18 at 12:33
WorldSEnder
$begingroup$
The answers so far to this great question illustrate a problem that we should redress in this forum: we rush in good faith to say something smart, something that other mathematicians may enjoy for its cleverness, but something which is often difficult for the OP to digest.
*steps off the soap box
Let me try a different take that will be of use to a 15 yo. There are two parts to the question: a) Do all known methods get arbitrarily many digits correct, b) how to tell that a digit is already correct.
a) Throughout history, people have found many ingenious ways to approximate $\pi$, say as $22/7$ or $\sqrt{10}$. Sometimes they knew they had an approximation, sometimes they mistakenly assumed they had the actual value.
When in modern mathematics a formula is presented for $\pi$, it is guaranteed to give (eventually) as many digits as desired. The keyword is to say that the formula converges.
Please note that mathematicians word things differently; we do not care that “we get arbitrarily many digits correctly”, but rather that the value computed “is arbitrarily close to the target value”. These are equivalent, but the second is not dependent on writing numbers in base 10.
b) Every formula converges at its own pace, so there is no universal way to decide when a digit given by one or another is settled. However, there are general techniques to prove convergence, and often it is possible to see at a glance (or after a brief computation) that the formula converges. Other times it is not so straightforward...
So let’s take a look at only one example; namely the formula mentioned in the question:
$$4-4/3+4/5-4/7+\ldots$$
This is particularly slow, but offers a great insight into convergence. It is an example of an alternating series; i.e., you add, then subtract, then add, then subtract, in perfect alternation. Moreover, each term is smaller than the previous one, as in $4/3>4/5>4/7>\ldots$. Moreover, these terms get arbitrarily small, as in
$$4/4000001 < 4/4000000 = 0.000001$$
Now, given these three conditions, we know the infinite sum will converge to a final value (which we are told is $\pi$). Why? Plot the consecutive sums on the real line to see what happens. You get 4, then 2.6666, then 3.46666, etc. More, then less, then more, so that the values are nested (because each term is smaller than the previous) and straddle the final value of $\pi$. Since the terms get small, the sums are forced to get closer and closer to the final value.
Here is the kicker: when you add $4/41$ (for instance), you overshoot your mark, so the current sum is closer to $\pi$ than $4/41$, and similarly for any other summand.
In particular, when you add $4/4000001$, you are closer to the target than $0.000001$, and the first 5 digits are guaranteed.
Disclaimer. This does not show that the final value is $\pi$. That requires more math. The argument only shows that the sum converges to a final value.
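A short Python sketch of this argument for the series above; the only fact used is the alternating-series bound, namely that the limit lies between any partial sum and the next one:
```python
def leibniz_partial(n_terms):
    """Partial sum 4 - 4/3 + 4/5 - ... with n_terms terms."""
    return sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(n_terms))

def bracket(n_terms):
    """For an alternating series with decreasing terms, the limit lies between
    the current partial sum and the partial sum plus the next (signed) term."""
    s = leibniz_partial(n_terms)
    nxt = (-1) ** n_terms * 4.0 / (2 * n_terms + 1)
    return min(s, s + nxt), max(s, s + nxt)

lo, hi = bracket(2_000_001)   # the first omitted term, 4/4000003, is below 0.000001
print(lo, hi)                 # both endpoints start 3.14159..., so those digits
                              # of the limit are settled (float rounding ignored)
```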
$endgroup$
$begingroup$
(-1) A series converging is not equivalent to getting arbitrarily many digits correct. Consider $\sum_{i=1}^{\infty}\frac{1}{2^i}$ as an approximation for $1$. This clearly converges, but never gets any digits correct!
$endgroup$
– DreamConspiracy
Nov 13 '18 at 17:45
$begingroup$
It does get every digit correct in 0.99999999..., which, as you know, is a different expression for the number we usually denote 1.
$endgroup$
– Rodrigo A. Pérez
Nov 14 '18 at 0:02
$begingroup$
Sorry, I should have been clearer. Yes, $\sum_{i=1}^{\infty}\frac{1}{2^i}$ converges to $1$, but no partial sum ever has any correct digits.
$endgroup$
– DreamConspiracy
Nov 14 '18 at 0:17
answered Nov 13 '18 at 15:56
Rodrigo A. Pérez
$begingroup$
The Monte Carlo method is a stochastic method, so it does not provide a proof of correctness. All it can do is say that the probability of obtaining a particular result, if the method were wrong about the first $k$ digits of $\pi$, goes to zero.
For a sequence that converges to $\pi$, however, there is some function $f(k)$ such that for any $k$ and $n>f(k)$, the $n$-th term is correct to $k$ digits (barring the .9999... issue). That's just from the definition of "converges"; one formulation of what it means to converge, equivalent to the standard definition, is that given any number of digits, there is some point in the sequence such that all terms after that point are accurate to that number of digits. So any time someone claims that a sequence converges to $\pi$, they are claiming that for each digit there is some point at which it is certain (however, some people are loose with stochastic terminology, giving such formulations as "converges with probability one", which is not the same kind of guarantee). Generally, proofs of convergence, even if they do not explicitly construct a function $f(k)$, can easily be modified to produce one.
For any approximation based on a Taylor series, there is the Lagrange error bound.
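To make the contrast with the stochastic case concrete, here is a small Monte Carlo sketch; the printed three-sigma interval is only a probabilistic statement, never a certification of any digit (the names and the sample size are just illustrative):
```python
import random

def monte_carlo_pi(samples, seed=0):
    """Estimate pi from the fraction of random points in the unit square
    that land inside the quarter disc x**2 + y**2 <= 1."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    p = hits / samples                            # estimates pi/4
    estimate = 4.0 * p
    sigma = 4.0 * (p * (1 - p) / samples) ** 0.5  # standard error of the estimate
    return estimate, sigma

est, sigma = monte_carlo_pi(1_000_000)
print(f"{est} +/- {3 * sigma}")  # ~99.7% confidence interval, not a guarantee:
                                 # with tiny but positive probability the
                                 # estimate is arbitrarily far off
```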
$endgroup$
answered Nov 13 '18 at 17:06
Acccumulation
$begingroup$
No method gives $\pi$ exactly, i.e. all digits of $\pi$, in finite time. But many methods give arbitrarily close approximations of $\pi$ if they run long enough. Such methods construct a sequence of values $x_n$ whose limit as $n\to\infty$ is $\pi$. For example, the technique you've mentioned has $x_1=4,\,x_2=4-\frac{4}{3}$, etc.
Now, among the sequences satisfying $\lim_{n\to\infty}x_n=\pi$, some are "faster" than others. For example, the aforementioned sequence has $|x_n-\pi|$ roughly proportional to $\frac{1}{n}$, so the number of correct decimal places in approximating $\pi$ as $x_n$ is approximately $\log n$, for $n$ large. For example, it takes about a million ($400,000$ in fact) terms to get $6$ decimal places right.
The good news is there are much better sequences than that; for example, this method gets a number of correct decimal places approximately proportional to $9^n$. All we have to do to be sure of specific digits is use appropriate mathematical theory to know how far to run a technique for our purposes. The bad news is this theory gets a little thorny, but I'll try to keep it simple. (If you feel I've made it too simple, see here to learn more.)
If $x_n$ is a sequence with limit $L$, and some $K,\,p$ exist with the large-$n$ approximation $|\epsilon_{n+1}|\approx K|\epsilon_n|^p$, where $\epsilon_n:=x_n-L$, there are three separate cases to consider:
- $p=K=1$, resulting in very slow convergence such as in our original example;
- $p=1,\,K<1$, so $|\epsilon_n|$ is approximately proportional to $K^n$, and the number of correct decimal places is approximately proportional to $n$;
- $p>1$, so $\log|\epsilon_{n+1}|\approx p\log|\epsilon_n|+\log K$, and the number of correct decimal places is approximately proportional to $p^n$.
The first case is called logarithmic convergence; the second is called linear convergence; the third is called superlinear convergence. Note that among superlinearly convergent algorithms, increasing $p$ only causes a fractional reduction in the value of $n$ needed to get a given number of decimal places right, and often high-$p$ algorithms have such complicated steps that they aren't worth it. The real question is whether some $p>1$ is achievable.
I linked before to a $p=9$ example of superlinear convergence, but it's very complicated. Depending on your son's ambition in self-education, he may be able to understand how this $p=2$ superlinear method works. In fact I probably should have focused on $p=2$ from the start, since calculus lessons often cover a (usually) $p=2$ technique for solving equations called the Newton-Raphson method. Somewhat easier, since it only requires a few basic facts about complex numbers, is understanding how certain linear methods such as this one work.
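One classical $p=2$ scheme (possibly the one linked above, but presented here on its own merits) is the Gauss-Legendre, or Brent-Salamin, iteration. A minimal sketch with Python's decimal module, in which the printed error shows the number of correct digits roughly doubling per pass:
```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # work with 60 significant digits

def gauss_legendre(iterations):
    """Brent-Salamin / Gauss-Legendre iteration, quadratically convergent to pi."""
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        t -= p * (a - a_next) ** 2
        b = (a * b).sqrt()
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

PI_50 = Decimal("3.14159265358979323846264338327950288419716939937510")
for n in range(1, 6):
    print(n, abs(gauss_legendre(n) - PI_50))  # errors near 1e-3, 1e-9, 1e-19, 1e-41, ...
```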
$endgroup$
$begingroup$
That's not what he asked.
$endgroup$
– Carl Witthoft
Nov 13 '18 at 16:35
$begingroup$
@CarlWitthoft The OP asked for an explanation of how some minimum number of decimal places can be guaranteed. I provided that: we can approximate the error at the $n$th term.
$endgroup$
– J.G.
Nov 13 '18 at 17:52
answered Nov 13 '18 at 10:42
J.G.
$begingroup$
We can apply Dalzell's idea for proving $\pi<\frac{22}{7}$ to decimal approximations as well.
The first digit of $\pi$ is guaranteed by the inequality
$$3<\pi<4,$$
which can be proven from the integrals
$$\pi=3+2\int_0^1\frac{x(1-x)^2}{1+x^2}dx$$
and
$$\pi=4-4\int_0^1 \frac{x^2}{1+x^2}dx$$
Similarly, the second digit being $1$ is equivalent to
$$3.1<\pi<3.2$$
or
$$\frac{31}{10}<\pi<\frac{16}{5},$$
which is proven by
$$\pi=\frac{31}{10}+2\int_0^1 \frac{x^2(1-x)^2(1-x+x^2)}{1+x^2}dx$$
and
$$\pi=\frac{16}{5}-\int_0^1 \frac{x^2(1-x)^2(1+2x+x^2)}{1+x^2}dx$$
Similar double inequalities can be written for every digit. For example, the answer https://math.stackexchange.com/a/2485646/134791 shows an integral for $\pi>3.14$.
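Assuming sympy is available, identities of this kind can be checked symbolically; the sketch below verifies the first pair (the second-digit pair can be checked the same way), and since each integrand is clearly nonnegative on $[0,1]$, the inequalities follow:
```python
import sympy as sp

x = sp.symbols('x')

# pi = 3 + 2*I1 with a nonnegative integrand on [0,1], hence pi > 3
I1 = sp.integrate(x * (1 - x) ** 2 / (1 + x ** 2), (x, 0, 1))
print(sp.simplify(3 + 2 * I1 - sp.pi))   # prints 0

# pi = 4 - 4*I2 with a nonnegative integrand on [0,1], hence pi < 4
I2 = sp.integrate(x ** 2 / (1 + x ** 2), (x, 0, 1))
print(sp.simplify(4 - 4 * I2 - sp.pi))   # prints 0
```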
$endgroup$
answered Nov 13 '18 at 17:22
Jaume Oliver Lafont
$begingroup$
I wish to remind you of this formula: $\pi/4 = \arctan(1) = 4\arctan(1/5) - \arctan(1/239)$. This is easily proven with high-school math.
Then, with the Taylor series for the $\arctan$ function, you can see that this converges quickly (much more quickly than the series for $\arctan(1)$ itself), and you can even calculate how many digits you gain (on average) with each term. It all depends on starting with a good formula!
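A sketch of that recipe with Python's decimal module; the arctan helper sums the Taylor series until the terms drop below the working precision, and each extra term of $\arctan(1/5)$ buys roughly $\log_{10} 25 \approx 1.4$ digits (the names are illustrative):
```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # a few guard digits beyond what we will trust

def arctan_inv(n):
    """arctan(1/n) from its Taylor series 1/n - 1/(3 n^3) + 1/(5 n^5) - ...
    The series alternates with decreasing terms, so the truncation error is
    smaller than the first omitted term."""
    eps = Decimal(10) ** -52
    x = Decimal(1) / n            # current odd power of 1/n
    n2 = n * n
    total, k, sign = Decimal(0), 1, 1
    while x > eps:
        total += sign * x / k
        x /= n2
        k += 2
        sign = -sign
    return total

# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
pi_approx = 4 * (4 * arctan_inv(5) - arctan_inv(239))
print(pi_approx)   # 3.14159265358979323846... (the last couple of digits are rounding noise)
```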
$endgroup$
answered Nov 13 '18 at 14:49
StessenJ
$begingroup$
Assuming you can explain to your kid that
$$
a_n \to \pi \iff \forall \varepsilon>0 \;\exists n_0 \text{ such that } n>n_0 \implies |\pi - a_n|<\varepsilon,
$$
it is then possible to say that $\varepsilon$ is the precision of the "approximation" $a_n$ once $n>n_0$.
Thus, you can compare the digits of $a_n+\varepsilon$ and of $a_n-\varepsilon$. All unchanged digits are certain.
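A tiny sketch of that comparison (with the caveat, raised in the comments below, that a run of 9s right at the boundary can stall the shared prefix); the helper name is illustrative:
```python
def certain_digits(a_n, eps, places=15):
    """Compare the decimal expansions of a_n - eps and a_n + eps;
    the leading characters they share can no longer change."""
    lo, hi = f"{a_n - eps:.{places}f}", f"{a_n + eps:.{places}f}"
    shared = []
    for cl, ch in zip(lo, hi):
        if cl != ch:
            break
        shared.append(cl)
    return "".join(shared)

print(certain_digits(3.14159260, 1e-7))   # -> 3.141592 (later digits are still in doubt)
```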
$endgroup$
$begingroup$
This makes me wonder whether there are arbitrarily long sequences of equal digits in $\pi$, all of them being 9. That could cause the number of correct digits to stop progressing over the sequence of approximations, which would be frustrating. Nevertheless, these sequences could not be infinite because $\pi$ is irrational. However, if the number being approximated by a non-monotonic series were 1, no digit could ever be guaranteed. And that may be the source of the voice in the back of one's head saying that something might be off here.
$endgroup$
– Mefitico
Nov 13 '18 at 18:35
$begingroup$
This is slightly weaker than (and possibly equivalent to, I'm not sure) asking whether $\pi$ is a normal number. This is still an open question.
$endgroup$
– DreamConspiracy
Nov 14 '18 at 0:21
$begingroup$
@DreamConspiracy: Mefitico's property is a necessary but not sufficient condition for being normal. Consider for example $$3.09\,099\,0999\,09999\,099999\,0999999\ldots$$ which does contain arbitrarily long runs of $9$ but is very far from normal.
$endgroup$
– Henning Makholm
Nov 14 '18 at 23:09
answered Nov 13 '18 at 18:23
Mefitico
$begingroup$
Check your arithmetic on the series again.
$endgroup$
– saulspatz
Nov 13 '18 at 9:57
$begingroup$
@saulspatz: thanks! (shame on at least four generations), I corrected that and will make a nicer set of equations once I learn how to typeset them
$endgroup$
– WoJ
Nov 13 '18 at 10:04
$begingroup$
Regarding the Monte Carlo method, since it's based on random sampling, there can be no guarantees of any kind no matter how many samples you take. You could take a trillion samples and the result might still be zero. It's extremely unlikely, but there's a finite non-zero probability.
$endgroup$
– user1008646
Nov 13 '18 at 13:12
$begingroup$
With thanks to the R package gmp, I can happily tell you that $\frac{884279719003555}{2^{48}}$ is equal to $\pi$ to within a part in $10^{16}$.
$endgroup$
– Carl Witthoft
Nov 13 '18 at 16:37
$begingroup$
Using any technique to evaluate digits of $\pi$ also involves an analysis of the error. For example, if one uses a series, one knows the amount of error involved in summing $n$ terms, and the error estimate tells you the number of correct digits obtained.
$endgroup$
– Paramanand Singh
Nov 14 '18 at 7:36