Connection between linear/quadratic/cubic/logarithmic convergence and function?
From the way linear/quadratic/cubic convergence of a sequence is defined, I wonder why these kinds of convergence are called linear/quadratic/cubic, in the sense of some connection to linear/quadratic/cubic functions.
Here are the definitions of linear/quadratic/cubic convergence of a sequence, in my words, based on Wikipedia:
Suppose that the sequence $\{x_k\}$ converges to the number $L$, and let $q \geq 1$. When
$$\lim_{k\to\infty} \frac{|x_{k+1}-L|}{|x_k-L|^q} = \mu$$
with $\mu \in (0, 1)$ (for $q > 1$ any $\mu > 0$ qualifies), we say that the sequence (Q-)converges linearly if $q=1$, quadratically if $q=2$, and cubically if $q=3$.
Similarly, how is logarithmic convergence connected to a logarithm function? The definition of logarithmic convergence is from the same link to Wikipedia:
If the sequence converges sublinearly and additionally
$$\lim_{k\to\infty} \frac{|x_{k+2} - x_{k+1}|}{|x_{k+1} - x_k|} = 1,$$
then the sequence $\{x_k\}$ is said to converge logarithmically to $L$.
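For concreteness, here is a small numerical check of these ratio definitions (a Python sketch; the example sequences $2^{-k}$, $2^{-2^k}$, and $1/k$ are my own illustrative choices, not taken from Wikipedia):

```python
def ratios(seq, L, q):
    """Successive values of |x_{k+1} - L| / |x_k - L|^q."""
    return [abs(seq[k + 1] - L) / abs(seq[k] - L) ** q
            for k in range(len(seq) - 1)]

linear    = [2.0 ** -k        for k in range(1, 12)]   # -> 0 linearly
quadratic = [2.0 ** -(2 ** k) for k in range(1, 6)]    # -> 0 quadratically
harmonic  = [1.0 / k          for k in range(1, 12)]   # -> 0 sublinearly

print(ratios(linear, 0.0, 1))     # ~0.5 each step: linear with mu = 1/2
print(ratios(quadratic, 0.0, 2))  # ~1.0 each step: order q = 2
print(ratios(harmonic, 0.0, 1))   # k/(k+1) -> 1: sublinear

# The extra test for logarithmic convergence: ratios of successive differences.
d = [abs(harmonic[k + 1] - harmonic[k]) for k in range(len(harmonic) - 1)]
print([d[k + 1] / d[k] for k in range(len(d) - 1)])    # -> 1
```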
I found a plot of linear (two examples), quadratic, and logarithmic rates of convergence for an example in Wikipedia, which seems to suggest some connection, although it is not clear to me how they are connected.
Thanks for clarification!
real-analysis sequences-and-series
asked Feb 17 '12 at 17:32 by Tim (edited Feb 17 '12 at 17:45)
Can you please tell me a simple definition of linear, quadratic, and cubic convergence in plain text? Thanks.
– user42301
Sep 22 '12 at 17:15
The question gives the definitions. Is your problem that you don't understand them?
– Rick Decker
Sep 22 '12 at 18:43
@RickDecker: No. I asked for connections to those special functions.
– Tim
Sep 23 '12 at 1:43
3 Answers
It's simply that $x^q$ is a linear function of $x$ if $q=1$, a quadratic function of $x$ if $q=2$, and a cubic function of $x$ if $q=3$. Convergence "logarithmically" is different, and that definition really doesn't have any necessary connection to a logarithm function.
answered Feb 17 '12 at 17:38 by Robert Israel
Thanks! I did realize that. But (1) how can the plot show these types of convergence as the corresponding functions? (2) How is logarithmic convergence connected to the logarithm function?
– Tim
Feb 17 '12 at 17:40
Thanks! So "logarithmic" in "logarithmic convergence" seems meaningless.
– Tim
Feb 17 '12 at 17:49
Suppose $|x_{n+1} - L| = c\,|x_n - L|^q$. If $q = 1$, then $\log |x_n - L| = A + Bn$ for some constants $A$ and $B$, so you get a straight-line plot. If $q > 1$, then $\log |x_n - L| = A + B q^n$ for some constants $A$ and $B$, and your plot shows exponential growth. One case of logarithmic convergence is where $|x_n - L| \approx c/n^p$ for some $p > 0$, and then your plot of $\log |x_n - L| = \log c - p \log n$ will look like a logarithm function.
– Robert Israel
Feb 17 '12 at 20:02
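To see the three plot shapes described in this comment, here is a minimal plotting sketch (Python with matplotlib; the model error sequences $0.5^n$, $0.5^{2^n}$, and $1/n$ are illustrative choices, not the data behind Wikipedia's figure):

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(1, 21)
err_linear = 0.5 ** n                           # q = 1: log-error linear in n
err_quad   = 0.5 ** (2.0 ** np.arange(1, 8))    # q = 2: log-error ~ -B * 2^n
err_log    = 1.0 / n                            # ~ 1/n: log-error ~ -log n

plt.semilogy(n, err_linear, 'o-', label='linear: straight line')
plt.semilogy(np.arange(1, 8), err_quad, 's-', label='quadratic: drops like $q^n$')
plt.semilogy(n, err_log, '^-', label='logarithmic: looks like $-\\log n$')
plt.xlabel('iteration $n$')
plt.ylabel('$|x_n - L|$')
plt.legend()
plt.show()
```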
Alright, this thread is like 5 years old, but here goes:
Suppose you have several methods to find a root of the function $f(x) = x(x-4)^2$. The first method you try is the bisection method. Set up bounds of $-0.5$ and $1$ (don't use symmetric bounds for this example); here are the results of the first few iterations:
$1.0, -0.5, 0.25, -0.125, 0.0625, -0.03125, 0.015625, -0.0078125, 0.00390625$
Now, we obviously know that this is converging to $0$, but consider the number of iterations it takes to find a "correct" digit. The first $0$ is found in iteration 1, the second $0$ is found in iteration 3, and the third is found in iteration 6. We could go on like this, but given the particular properties of base 10, we'd see that it takes about 3-4 iterations (one new digit per $\log_2 10 \approx 3.3$ halvings) to find each new correct digit.
That's linear convergence.
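A minimal bisection sketch (Python; my own code, with seven steps chosen to match the list above, whose first two values appear to be the starting bounds) reproduces these midpoints:

```python
def f(x):
    return x * (x - 4.0) ** 2

def bisect(a, b, n_steps=7):
    """Plain bisection on [a, b]; returns the successive midpoints."""
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    mids = []
    for _ in range(n_steps):
        m = 0.5 * (a + b)
        mids.append(m)
        if f(a) * f(m) <= 0:
            b = m   # the sign change (and the root 0) lies in [a, m]
        else:
            a = m   # otherwise it lies in [m, b]
    return mids

print(bisect(-0.5, 1.0))
# [0.25, -0.125, 0.0625, -0.03125, 0.015625, -0.0078125, 0.00390625]
```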
Let's try a quadratically convergent method, Newton's method, with initial guess $x = 1$:
$1,\ -2,\ -0.8,\ -0.2,\ -0.0173,\ -0.000149,\ -1.11\times 10^{-8}$
Now, look at how it took 2 iterations to find the first zero, then 2 more to find the second one, then 1 iteration to find the next 2 zeros, then 1 iteration to find 4 more. The precision roughly doubles with each term. So why is this called quadratic convergence and not exponential convergence, or something like that?
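A matching Newton's-method sketch (Python; again my own code, with $f'$ worked out by the product rule) reproduces these iterates:

```python
def f(x):
    return x * (x - 4.0) ** 2

def fprime(x):
    # product rule: d/dx [x (x - 4)^2] = (x - 4)^2 + 2 x (x - 4)
    return (x - 4.0) ** 2 + 2.0 * x * (x - 4.0)

def newton(x0, n_steps=6):
    """Newton's iteration x <- x - f(x) / f'(x)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] - f(xs[-1]) / fprime(xs[-1]))
    return xs

print(newton(1.0))
# approximately [1, -2.0, -0.8, -0.2, -0.0174, -0.00015, -1.1e-08]
```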
Think of the derivatives of a linear function and of a quadratic function:
$f(x) = cx \Rightarrow f'(x) = c, \qquad f(x) = x^2 \Rightarrow f'(x) = 2x.$
If $x$ here represents the order of magnitude of the error of the method at a given iteration, then these derivatives correlate pretty well with the type of convergence the method has.
Why the originators of the terms decided to name them this way is beyond me, but that's the logic behind it. I'd assume that logarithmic convergence is just an extension of that idea:
Consider the sequence $\frac{1}{n}$. If you do some work with it, you'll see that it is logarithmically convergent. The first correct digit comes at iteration 2: $\frac{1}{2} = 0.5$; then iteration 11: $\frac{1}{11} = 0.09\ldots$; then iteration 101, and so on.
The derivative of the logarithm is $\frac{1}{x}$. If you think of $x$ as representing the same thing as before, then you can see why the name somewhat makes sense.
answered Sep 6 '17 at 15:24 by rp0
In my opinion, the best way to motivate this terminology is to use series and decimal (or other base) expansions. (Let me assume that the expansions don't end in a string of infinitely many $9$s.)
Suppose that $S$ is some positive real number that lies strictly between $0$ and $1$ (for simplicity). Thus we can write
$$S = .d_1 d_2 d_3 d_4 \ldots = \sum_{k=1}^\infty d_k (10)^{-k}$$
for some unique sequence $\{d_k\}$ from $\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}$.
Let us define
$$S_n = \sum_{k=1}^n d_k (10)^{-k} = .d_1 d_2 \ldots d_n.$$
Then we can show that $\{S_n\}$ converges to $S$ linearly as long as $\{d_k\}$ is sufficiently nice. Namely, let's assume that $\{d_k\}$ has only finitely many zeroes. Then for sufficiently large $n$ (large enough to pass all the zeroes) we have
$$\frac{|S-S_{n+1}|}{|S-S_n|} < \frac{1 \cdot 10^{-(n+1)}}{d_{n+1} \cdot 10^{-(n+1)}} \leq 1\,.$$
This is just a special case of something more general. Namely, if we set
$$S_n = \sum_{k=1}^{mn+b} d_k (10)^{-k}$$
for suitable natural numbers $m$ and $b$, we can show that $S_n$ converges to $S$ linearly as well.
Thus linear convergence is supposed to encapsulate the idea that the decimal expansions of the terms of the sequence tend to the decimal expansion of the limit in a linear fashion.
Going further with this, if we set
$$S_n = \sum_{k=1}^{n^2} d_k (10)^{-k},$$
we could show that $S_n$ converges to $S$ quadratically (for many different $S$ without oddities in their expansion). Since
$$n^2 - (n-1)^2 = 2n - 1,$$
this roughly explains why quadratically converging sequences typically display terms whose decimal expansions are "twice" as good (i.e. have twice as many correct decimal digits) as the prior term in the sequence.
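A quick numerical sketch of this digit-counting idea (Python; the choice $S = \pi - 3$ is arbitrary, and only small $n$ are used so double precision suffices):

```python
import math

S = math.pi - 3.0        # an arbitrary S strictly between 0 and 1

def S_n(num_digits):
    """Partial sum keeping the first num_digits decimal digits of S."""
    return math.floor(S * 10 ** num_digits) / 10 ** num_digits

def correct_digits(approx):
    """Rough count of correct decimal digits, via -log10 of the error."""
    err = abs(S - approx)
    return math.inf if err == 0 else math.floor(-math.log10(err))

# n digits per term: roughly one new correct digit per step.
print([correct_digits(S_n(n)) for n in range(1, 10)])     # 1, 2, 3, ..., 9
# n^2 digits per term: the count grows like n^2, i.e. step n adds about
# 2n - 1 new digits, roughly doubling the gain of the previous step.
print([correct_digits(S_n(n * n)) for n in range(1, 4)])  # 1, 4, 9
```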
I'm inclined to suspect that if we set
$$S_n = \sum_{k=1}^{\lfloor \log_2(n) \rfloor} d_k (10)^{-k},$$
then we'd see logarithmic convergence.
answered 2 days ago by Robert Wolfe