Convergence of finite differences
Tobin A. Driscoll and Richard J. Braun
All the finite-difference formulas in the previous section based on equally spaced nodes converge as the node spacing $h$ decreases to zero. However, note that to discretize a function over an interval $[a,b]$, we use $h=(b-a)/n$, which implies $n=(b-a)/h=O(h^{-1})$. As $h\to 0$, the total number of nodes needed grows without bound. So we would like to make $h$ as large as possible while still achieving some acceptable accuracy.
For the finite-difference method (5.4.1) with weights $a_{-p},\ldots,a_{q}$, the truncation error is

$$
\tau_f(h) = f'(0) - \frac{1}{h} \sum_{k=-p}^{q} a_k f(kh).
$$

The method is said to be convergent if $\tau_f(h)\to 0$ as $h\to 0$.
Although we are measuring the truncation error only at $x=0$, it could be defined for other $x$ as well. The definition adjusts naturally to use $f''(0)$ for difference formulas targeting the second derivative.
All the finite-difference formulas given in Finite differences are convergent.
5.5.1 Order of accuracy

Of major interest is the rate at which $\tau_f\to 0$ in a convergent formula.
Hence, the forward-difference formula in Example 5.5.1 has order of accuracy equal to 1; i.e., it is first-order accurate. All else being equal, a higher order of accuracy is preferred, since $O(h^m)$ vanishes more quickly for larger values of $m$. As a rule, including more function values in a finite-difference formula (i.e., increasing the number of weights in (5.4.1)) increases the order of accuracy, as can be seen in Table 5.4.1 and Table 5.4.2.
Order of accuracy is calculated by expanding $\tau_f$ in a Taylor series about $h=0$ and ignoring all but the leading term.[1]
We compute the truncation error of the centered difference formula (5.4.5) :
$$
\begin{split}
\tau_f(h) &= f'(0) - \frac{ f(h)-f(-h)}{2h}\\
&= f'(0) - (2h)^{-1} \left[ \bigl( f(0) + h f'(0) + \tfrac{1}{2}h^2f''(0)+ \tfrac{1}{6}h^3f'''(0)+ O(h^4) \bigr) \right.\\
&\qquad - \left. \bigl( f(0) - h f'(0) + \tfrac{1}{2}h^2f''(0) - \tfrac{1}{6}h^3f'''(0)+O(h^4) \bigr) \right] \\
&= -(2h)^{-1} \left[ \tfrac{1}{3}h^3f'''(0) + O(h^4) \right] = O(h^2).
\end{split}
$$

Thus, this method has order of accuracy equal to 2.
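The order can also be estimated numerically by halving $h$ and watching the error ratio: for a method of order $m$, the ratio of successive errors approaches $2^m$. A minimal sketch in Python (the test function $\sin x$ at $x=0.5$ is an arbitrary choice):

```python
import math

def centered(f, x, h):
    """Second-order centered difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, exact = math.sin, 0.5, math.cos(0.5)

# Halve h repeatedly and compute log2 of successive error ratios.
errs = [abs(centered(f, x, 2.0**-k) - exact) for k in range(2, 9)]
orders = [math.log2(e1 / e2) for e1, e2 in zip(errs, errs[1:])]
print([round(p, 3) for p in orders])   # values approaching 2
```

The printed values cluster near 2, consistent with the $O(h^2)$ truncation error derived above.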
Let's observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $\sin(e^{x+1})$ at $x=0$.
using PrettyTables

f = x -> sin(exp(x + 1))
exact_value = exp(1) * cos(exp(1))

We'll compute the formulas in parallel for a sequence of $h$ values.
h = [5 / 10^n for n in 1:6]
FD = zeros(length(h), 2)
for (k, h) in enumerate(h)
FD[k, 1] = (f(h) - f(0)) / h
FD[k, 2] = (f(h) - f(-h)) / 2h
end
pretty_table([h FD]; column_labels=["h", "FD1", "FD2"], backend=:html)

All that's easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
error_FD = @. exact_value - FD
pretty_table([h error_FD];
    column_labels=["h", "error in FD1", "error in FD2"], backend=:html)

In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it's conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
using Plots, LaTeXStrings
plot(h, abs.(error_FD);
m=:o, label=["FD1" "FD2"], leg=:bottomleft,
xflip=true, xaxis=(:log10, L"h"), yaxis=(:log10, "error"),
title="Convergence of finite differences")
# Add lines for perfect 1st and 2nd order.
plot!(h, [h h .^ 2], l=:dash, label=[L"O(h)" L"O(h^2)"])

Let's observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $\sin(e^{x+1})$ at $x=0$.
f = @(x) sin(exp(x + 1));
exact_value = exp(1) * cos(exp(1))

We'll compute the formulas in parallel for a sequence of $h$ values.
h = 5 ./ 10.^(1:6)';
FD1 = zeros(size(h));
FD2 = zeros(size(h));
for i = 1:length(h)
h_i = h(i);
FD1(i) = (f(h_i) - f(0) ) / h_i;
FD2(i) = (f(h_i) - f(-h_i)) / (2*h_i);
end
table(h, FD1, FD2)

All that's easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
err1 = abs(exact_value - FD1);
err2 = abs(exact_value - FD2);
table(h, err1, err2, variableNames=["h", "error in FD1", "error in FD2"])

In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it's conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
clf
loglog(h, abs([err1 err2]), "o-")
set(gca, "xdir", "reverse")
order1 = 0.1 * err1(end) * (h / h(end)) .^ 1;
order2 = 0.1 * err2(end) * (h / h(end)) .^ 2;
hold on
loglog(h, order1, "--", h, order2, "--")
xlabel("h"); ylabel("error")
title("Convergence of finite differences")
legend("FD1", "FD2", "O(h)", "O(h^2)");

Let's observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $\sin(e^{x+1})$ at $x=0$.
from numpy import *
from matplotlib.pyplot import *
from prettytable import PrettyTable

f = lambda x: sin(exp(x + 1))
exact_value = exp(1) * cos(exp(1))
We'll compute the formulas in parallel for a sequence of $h$ values.
h_ = array([5 / 10**(n+1) for n in range(6)])
FD = zeros((len(h_), 2))
for (i, h) in enumerate(h_):
FD[i, 0] = (f(h) - f(0)) / h
FD[i, 1] = (f(h) - f(-h)) / (2*h)
results = PrettyTable()
results.add_column("h", h_)
results.add_column("FD1", FD[:, 0])
results.add_column("FD2", FD[:, 1])
print(results)

+--------+---------------------+---------------------+
| h | FD1 | FD2 |
+--------+---------------------+---------------------+
| 0.5 | -2.768575766550465 | -1.9704719803862911 |
| 0.05 | -2.6127952856136947 | -2.475520256824717 |
| 0.005 | -2.4921052189334048 | -2.4783216951179465 |
| 0.0005 | -2.4797278609705042 | -2.4783494526022243 |
| 5e-05 | -2.4784875710526233 | -2.478349730152263 |
| 5e-06 | -2.478363517077753 | -2.478349732970564 |
+--------+---------------------+---------------------+
All that's easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
errors = FD - exact_value
results = PrettyTable()
results.add_column("h", h_)
results.add_column("error in FD1", errors[:, 0])
results.add_column("error in FD2", errors[:, 1])
print(results)

+--------+-------------------------+------------------------+
| h | error in FD1 | error in FD2 |
+--------+-------------------------+------------------------+
| 0.5 | -0.29022603359523025 | 0.5078777525689437 |
| 0.05 | -0.1344455526584598 | 0.0028294761305178717 |
| 0.005 | -0.01375548597816989 | 2.8037837288330536e-05 |
| 0.0005 | -0.0013781280152693753 | 2.8035301058437767e-07 |
| 5e-05 | -0.0001378380973884319 | 2.8029720766653554e-09 |
| 5e-06 | -1.3784122518067932e-05 | -1.532907134560446e-11 |
+--------+-------------------------+------------------------+
In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it's conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
plot(h_, abs(errors), "o-", label=["FD1", "FD2"])
gca().invert_xaxis()
# Add lines for perfect 1st and 2nd order.
loglog(h_, h_, "--", label="$O(h)$")
loglog(h_, h_**2, "--", label="$O(h^2)$")
xlabel("$h$")
ylabel("error")
legend();

5.5.2 Stability

The truncation error $\tau_f(h)$ of a finite-difference formula is dominated by a leading term $O(h^m)$ for an integer $m$. This error decreases as $h\to 0$. However, we have not yet accounted for the effects of roundoff error. To keep matters as simple as possible, let's consider the forward difference
$$
\delta(h) = \frac{f(x+h)-f(x)}{h}.
$$

As $h\to 0$, the numerator approaches zero even though the values $f(x+h)$ and $f(x)$ are not necessarily near zero. This is the recipe for subtractive cancellation error! In fact, finite-difference formulas are inherently ill-conditioned as $h\to 0$. To be precise, recall that the condition number for the problem of computing $f(x+h)-f(x)$ is
$$
\kappa(h) = \frac{ \max\{\,|f(x+h)|,|f(x)|\,\} }{ |f(x+h)-f(x)| },
$$

implying a relative error of size $\kappa(h)\,\epsilon_\text{mach}$ in its computation. Hence, the numerical value we actually compute for $\delta$ is
$$
\begin{split}
\tilde{\delta}(h) &= \frac{f(x+h)-f(x)}{h}\, (1+\kappa(h)\epsilon_\text{mach}) \\
&= \delta(h) + \frac{ \max\{\,|f(x+h)|,|f(x)|\,\} }{ |f(x+h)-f(x)| }\cdot \frac{f(x+h)-f(x)}{h} \cdot \epsilon_\text{mach}.
\end{split}
$$

Hence, as $h\to 0$,
$$
\bigl| \tilde{\delta}(h) - \delta(h) \bigr| = \frac{ \max\{\,|f(x+h)|,|f(x)|\,\} }{ h }\,\epsilon_\text{mach} \sim |f(x)|\, \epsilon_\text{mach}\, h^{-1}.
$$

Combining the truncation error and the roundoff error leads to
$$
\bigl| f'(x) - \tilde{\delta}(h) \bigr| \le \bigl| \tau_f(h) \bigr| + \bigl| f(x) \bigr|\, \epsilon_\text{mach}\, h^{-1}.
$$

Equation (5.5.8) indicates that while the truncation error $\tau$ vanishes as $h$ decreases, the roundoff error actually increases thanks to the subtractive cancellation. At some value of $h$ the two error contributions will be of roughly equal size. This occurs when
$$
\bigl|f(x)\bigr|\, \epsilon_\text{mach}\, h^{-1} \approx C h, \quad \text{or} \quad h \approx K \sqrt{\epsilon_\text{mach}},
$$

for a constant $K$ that depends on $x$ and $f$, but not $h$. In summary, for a first-order finite-difference method, the optimum spacing between nodes is proportional to $\epsilon_\text{mach}^{1/2}$. (This observation explains the choice of $\delta$ in Function 4.6.1.)
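This $\sqrt{\epsilon_\text{mach}}$ scaling is easy to check empirically. A minimal sketch in Python (the choice $f(x)=e^x$ at $x=1$ is an arbitrary example): the forward-difference error shrinks as $h$ decreases toward about $10^{-8}\approx\sqrt{\epsilon_\text{mach}}$, then grows again as roundoff takes over.

```python
import math

f, x, exact = math.exp, 1.0, math.e   # f'(x) = e^x, so f'(1) = e

for h in [10.0**(-k) for k in range(2, 13, 2)]:
    fd = (f(x + h) - f(x)) / h        # first-order forward difference
    print(f"h = {h:.0e}   error = {abs(fd - exact):.2e}")
```

The smallest error appears near $h=10^{-8}$, not at the smallest $h$ in the sweep.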
For a method of truncation order $m$, the details of the subtractive cancellation are a bit different, but the conclusion generalizes.
For computing with a finite-difference method of order $m$ in the presence of roundoff, the optimal spacing of nodes satisfies
$$
h_\text{opt} \approx \epsilon_\text{mach}^{\,1/(m+1)},
$$

and the optimum total error is roughly $\epsilon_\text{mach}^{\,m/(m+1)}$.
A different statement of the conclusion is that for a first-order formula, at most we can expect accuracy in only about half of the available machine digits. As $m$ increases, we get ever closer to using the full accuracy available. Higher-order finite-difference methods are both more efficient and less vulnerable to roundoff than low-order methods.
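The predicted optimum can be tabulated directly. A short sketch (standard-library Python) evaluating $h_\text{opt}$ and the best achievable error for the orders $m=1,2,4$ used in the next example:

```python
import sys

eps = sys.float_info.epsilon   # machine epsilon for double precision

for m in (1, 2, 4):
    h_opt = eps ** (1 / (m + 1))       # optimal node spacing
    best_err = eps ** (m / (m + 1))    # roughly the smallest achievable error
    print(f"m = {m}:  h_opt ~ {h_opt:.1e},  best error ~ {best_err:.1e}")
```

These predictions match the crossover points seen in the convergence plots below: roughly $10^{-8}$ for the first-order formula, $10^{-5}$ for second order, and $10^{-3}$ for fourth order.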
Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
using LinearAlgebra

f = x -> exp(-1.3 * x);
exact = -1.3
h = [1 / 10^n for n in 1:12]
FD = zeros(length(h), 3)
for (k, h) in enumerate(h)
nodes = h * (-2:2)
vals = @. f(nodes)
FD[k, 1] = dot([0 0 -1 1 0] / h, vals)
FD[k, 2] = dot([0 -1 / 2 0 1 / 2 0] / h, vals)
FD[k, 3] = dot([1 / 12 -2 / 3 0 2 / 3 -1 / 12] / h, vals)
end
pretty_table([h FD]; column_labels=["h", "FD1", "FD2", "FD4"], backend=:html)

They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
err = @. abs(FD - exact)
plot(h, err;
m=:o, label=["FD1" "FD2" "FD4"], legend=:bottomright,
xaxis=(:log10, L"h"), xflip=true, yaxis=(:log10, "error"),
title="FD error with roundoff")
# Add line for perfect 1st order.
plot!(h, 0.1 * eps() ./ h, l=:dash, color=:black, label=L"O(h^{-1})")

Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
f = @(x) exp(-1.3 * x);
exact = -1.3;
h = 10 .^ (-(1:12))';
FD = zeros(length(h), 3);
for i = 1:length(h)
h_i = h(i);
nodes = h_i * (-2:2);
vals = f(nodes);
FD(i, 1) = dot([0 0 -1 1 0] / h_i, vals);
FD(i, 2) = dot([0 -1/2 0 1/2 0] / h_i, vals);
FD(i, 3) = dot([1/12 -2/3 0 2/3 -1/12] / h_i, vals);
end
format long
table(h, FD(:, 1), FD(:, 2), FD(:, 3), variableNames=["h", "FD1", "FD2", "FD4"])

They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
err = abs(FD - exact);
clf
loglog(h, err, "o-")
set(gca, "xdir", "reverse")
order1 = 0.1 * err(end, 1) * (h / h(end)) .^ (-1);
hold on
loglog(h, order1, "k--")
xlabel("h"); ylabel("error")
title("FD error with roundoff")
legend("FD1", "FD2", "FD4", "O(1/h)", "location", "northeast");

Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
f = lambda x: exp(-1.3 * x)
exact = -1.3
h_ = array([1 / 10**(n+1) for n in range(12)])
FD = zeros((len(h_), 3))
for (i, h) in enumerate(h_):
nodes = h * linspace(-2, 2, 5)
vals = f(nodes)
FD[i, 0] = dot(array([0, 0, -1, 1, 0]) / h, vals)
FD[i, 1] = dot(array([0, -1/2, 0, 1/2, 0]) / h, vals)
FD[i, 2] = dot(array([1/12, -2/3, 0, 2/3, -1/12]) / h, vals)
results = PrettyTable()
results.add_column("h", h_)
results.add_column("FD1", FD[:, 0])
results.add_column("FD2", FD[:, 1])
results.add_column("FD4", FD[:, 2])
print(results)

+--------+---------------------+---------------------+---------------------+
| h | FD1 | FD2 | FD4 |
+--------+---------------------+---------------------+---------------------+
| 0.1 | -1.219045690794387 | -1.3036647620203026 | -1.2999875986418985 |
| 0.01 | -1.2915864979712421 | -1.3000366169760815 | -1.2999999987623323 |
| 0.001 | -1.299155366047744 | -1.3000003661667074 | -1.2999999999998653 |
| 0.0001 | -1.2999155036619303 | -1.300000003662075 | -1.2999999999996963 |
| 1e-05 | -1.2999915500411239 | -1.3000000000396366 | -1.3000000000099614 |
| 1e-06 | -1.2999991549911272 | -1.2999999999900496 | -1.300000000024955 |
| 1e-07 | -1.299999915493899 | -1.3000000000289944 | -1.3000000004946557 |
| 1e-08 | -1.2999999965401798 | -1.3000000042305544 | -1.3000000107468235 |
| 1e-09 | -1.2999999965401796 | -1.3000000340328768 | -1.3000000442867095 |
| 1e-10 | -1.3000001075624823 | -1.2999996723115146 | -1.2999999661783477 |
| 1e-11 | -1.3000045484545808 | -1.3000038001061966 | -1.3000094495401033 |
| 1e-12 | -1.2999601395335958 | -1.300004483829298 | -1.3000696164874246 |
+--------+---------------------+---------------------+---------------------+
They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
loglog(h_, abs(FD[:, 0] + 1.3), "-o", label="FD1")
loglog(h_, abs(FD[:, 1] + 1.3), "-o", label="FD2")
loglog(h_, abs(FD[:, 2] + 1.3), "-o", label="FD4")
gca().invert_xaxis()
plot(h_, 0.1 * 2 ** (-52) / h_, "--", color="k", label="$O(h^{-1})$")
xlabel("$h$")
ylabel("total error")
title("FD error with roundoff")
legend();

Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
5.5.3 Exercises

⌨ Evaluate the centered second-order finite-difference approximation to $f'(4\pi/5)$ for $f(x)=\cos(x^3)$ and $h=2^{-1},2^{-2},\ldots,2^{-8}$. On a log-log graph, plot the error as a function of $h$ and compare it graphically to second-order convergence.
✍ A different way to derive finite-difference formulas is the method of undetermined coefficients. Starting from (5.4.1),
$$
f'(x) \approx \frac{1}{h}\sum_{k=-p}^q a_k f(x+kh),
$$

let each $f(x+kh)$ be expanded in a series around $h=0$. When the coefficients of powers of $h$ are collected, one obtains
$$
\frac{1}{h} \sum_{k=-p}^q a_k f(x+kh) = \frac{b_0}{h} + b_1 f'(x) + b_2 f''(x)h + \cdots,
$$

where
$$
b_i = \sum_{k=-p}^q k^i a_k.
$$

In order to make the result as close as possible to $f'(x)$, we impose the conditions
$$
b_0 = 0,\quad b_1=1,\quad b_2=0,\quad b_3=0,\quad \ldots,\quad b_{p+q}=0.
$$

This provides a system of linear equations for the weights.
(a) For $p=q=2$, write out the system of equations for $a_{-2}$, $a_{-1}$, $a_0$, $a_1$, $a_2$.
(b) Verify that the coefficients from the appropriate row of Table 5.4.1 satisfy the equations you wrote down in part (a).
(c) Derive the finite-difference formula for $p=1$, $q=2$ using the method of undetermined coefficients.
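As an illustration of the method of undetermined coefficients (for the centered case $p=q=1$, which is not one of the exercise parts above), here is a sketch that builds and solves the linear system $b_0=0$, $b_1=1$, $b_2=0$ in exact rational arithmetic:

```python
# Method of undetermined coefficients for p = q = 1 (centered case).
# Conditions: b_i = sum_k k**i * a_k, with b_0 = 0, b_1 = 1, b_2 = 0.
from fractions import Fraction

ks = [-1, 0, 1]                       # node offsets k = -p, ..., q
rhs = [Fraction(0), Fraction(1), Fraction(0)]

# Rows of the system: row i has entries k**i for each node offset k.
A = [[Fraction(k) ** i for k in ks] for i in range(3)]

# Gauss-Jordan elimination with exact rationals.
n = 3
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            factor = A[r][col] / A[col][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
            rhs[r] = rhs[r] - factor * rhs[col]

weights = [rhs[r] / A[r][r] for r in range(n)]
print(weights)   # [Fraction(-1, 2), Fraction(0, 1), Fraction(1, 2)]
```

The recovered weights $(-\tfrac12, 0, \tfrac12)$ are exactly those of the centered difference formula, matching the second-order method analyzed earlier in this section.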