Asymptotics of a root
Suppose $a,b\in\mathbb{N}$ and, moreover, $1\leqslant a$ and $b\geqslant a+2$.
I am considering the polynomial
$$
f_{a,b}(x):=x^{2b}-\frac{x^{b-a}-1}{x-1},
$$
which has exactly one positive (simple) root $x_{a,b}$ and, moreover, $x_{a,b}>1$. In particular, $\lim_{a\to\infty}x_{a,b}=1$.
I am trying to analyse the rate at which $x_{a,b}$ tends to $1$ as $a\to\infty$. To this end, I make the ansatz
$$
x_{a,b}=1+y_{a,b}
$$
and now try to analyse the rate at which $y_{a,b}\to 0$ as $a\to\infty$, say $y_{a,b}=\frac{1}{a}+o(1/a)$ or whatever the correct rate might be.
Do you have any idea how to get this?
My first attempt was to plug the ansatz for $x_{a,b}$ into the polynomial:
\begin{align*}
&(1+y_{a,b})^{2b}-\frac{(1+y_{a,b})^{b-a}-1}{(1+y_{a,b})-1}=0\\
&\Leftrightarrow\ (1+y_{a,b})^{2b+1}-(1+y_{a,b})^{2b}-(1+y_{a,b})^{b-a}+1=0.
\end{align*}
Using the binomial theorem, I wrote the last equation as
\begin{align*}
&\sum_{k=1}^{2b+1}\binom{2b+1}{k}y_{a,b}^k-\sum_{k=1}^{2b}\binom{2b}{k}y_{a,b}^k-\sum_{k=1}^{b-a}\binom{b-a}{k}y_{a,b}^k\\
&=y_{a,b}\bigl(1-(b-a)\bigr)+\sum_{k=2}^{2b+1}\binom{2b+1}{k}y_{a,b}^k-\sum_{k=2}^{2b}\binom{2b}{k}y_{a,b}^k-\sum_{k=2}^{b-a}\binom{b-a}{k}y_{a,b}^k\\
&=0
\end{align*}
Factoring out $y_{a,b}$, what I get is
\begin{equation*}
y_{a,b}\cdot\left(1-b+a+\sum_{k=1}^{2b}\binom{2b+1}{k+1}y_{a,b}^k-\sum_{k=1}^{2b-1}\binom{2b}{k+1}y_{a,b}^k-\sum_{k=1}^{b-a-1}\binom{b-a}{k+1}y_{a,b}^k\right)=0.
\end{equation*}
This equation holds exactly if $y_{a,b}=0$ (which does not seem helpful for my purpose) or if
\begin{equation}
\sum_{k=1}^{2b}\binom{2b+1}{k+1}y_{a,b}^k-\sum_{k=1}^{2b-1}\binom{2b}{k+1}y_{a,b}^k-\sum_{k=1}^{b-a-1}\binom{b-a}{k+1}y_{a,b}^k=b-a-1.
\end{equation}
Maybe this last equation can help to get the desired information about $y_{a,b}$; however, I don't see how.
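As a quick numerical sanity check (an added sketch, not part of the original post: plain bisection using only the Python standard library; the bracket $[1+10^{-9},\,2]$ is an assumption that works here because $f_{a,b}(1+10^{-9})<0<f_{a,b}(2)$ whenever $b\geqslant a+2$), one can locate the root and compare $y_{a,b}$ with candidate rates, e.g. for fixed $b-a$:

```python
import math

def f(x, a, b):
    # f_{a,b}(x) = x^(2b) - (x^(b-a) - 1)/(x - 1), evaluated for x > 1
    return x**(2 * b) - (x**(b - a) - 1.0) / (x - 1.0)

def positive_root(a, b, lo=1.0 + 1e-9, hi=2.0, iters=200):
    # plain bisection: f(lo, a, b) < 0 < f(hi, a, b) brackets the unique root > 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, a, b) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for a in (10, 50, 200, 400):
    b = a + 2                              # keep b - a fixed at 2
    y = positive_root(a, b) - 1.0
    print(a, y, y * a, y * 2 * b / math.log(b - a))
```

For $b=a+2$, the column $y_{a,b}\cdot a$ appears to settle near $\frac{\ln 2}{2}$ rather than $1$, and the last column drifts toward $1$, which already hints at the rate $\frac{\ln(b-a)}{2b}$ derived in the answer below.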
I am thankful for your ideas.
Tags: real-analysis, polynomials, roots
asked Dec 22 '18 at 11:01, edited Dec 22 '18 at 16:05 – J. Doe
How does $b$ vary with $a$?
– Joel Cohen, Dec 22 '18 at 19:18

@JoelCohen I always assume that $b\geq a+2$, i.e. the difference $b-a$ is at least $2$. Hence, if $a\to\infty$, then, automatically, $b\to\infty$.
– J. Doe, Dec 22 '18 at 19:26

@JDoe The answer really depends on the rate of growth of $b-a$ compared to $b$. As is shown in my answer below, if $(b-a)\ln(b-a) = o(b)$, then we can show $y_{a,b}\sim\frac{\ln(b-a)}{2b}$. In cases where $b-a$ grows faster, I don't know the answer (except that it isn't $\frac{\ln(b-a)}{2b}$).
– Joel Cohen, Dec 23 '18 at 2:26
1 Answer
For brevity, I will denote $n = 2b$, $k = b-a$ and drop the index $a,b$ in $y := y_{a,b}$. Our equation is
$$(1+y)^n = \frac{(1+y)^k - 1}{y}$$
which we can rewrite as
$$(1+y)^k = 1 + y (1+y)^n \tag{$*$}$$
and taking logarithms, we get
$$k \ln(1+y) = \ln\bigl(1 + y (1+y)^n\bigr)$$
Now, assuming $y (1+y)^n \underset{a \to \infty}{\longrightarrow} 0$, we get the asymptotic equivalence
$$k y \underset{a \to \infty}{\sim} y (1+y)^n.$$
Dividing by $y$ and taking logarithms yields
$$\ln(k) \underset{a \to \infty}{\sim} n \ln(1+y) \underset{a \to \infty}{\sim} n y,$$
and finally we get
$$y \underset{a \to \infty}{\sim} \frac{\ln(k)}{n} = \frac{\ln(b-a)}{2b}.$$
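A brief numerical illustration of this heuristic (my addition, in the notation $n=2b$, $k=b-a$ of this answer; plain bisection with only the standard library, and the bracket $[10^{-9},\,1]$ is an assumption that is valid here because the function below is negative near $y=0$ and positive at $y=1$), with $k$ fixed and $n$ growing so that $y(1+y)^n \to 0$ indeed holds:

```python
import math

def g(y, n, k):
    # the equation above, written as (1+y)^n - ((1+y)^k - 1)/y = 0
    return (1.0 + y)**n - ((1.0 + y)**k - 1.0) / y

def solve_y(n, k, lo=1e-9, hi=1.0, iters=200):
    # bisection: g(lo) < 0 (the quotient tends to k >= 2) and g(hi) = 2^n - (2^k - 1) > 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid, n, k) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = 5                                      # fixed k, so k*ln(k)/n -> 0 as n grows
for n in (100, 400, 1000):                 # n kept moderate so (1 + hi)^n stays a finite float
    y = solve_y(n, k)
    print(n, y, y * n / math.log(k))       # last column should drift toward 1
```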
EDIT: There was a mistake initially in assuming that the condition $y (1+y)^n \underset{a \to \infty}{\longrightarrow} 0$ was automatic; it is only true under the assumption that $k \ln(k) = o(n)$. Below, we prove that the following three properties are equivalent:
1) $y (1+y)^n \underset{a \to \infty}{\longrightarrow} 0$
2) $y \sim \frac{\ln(k)}{n}$
3) $k \ln(k) = o(n)$
We have just proved that $1) \Longrightarrow 2)$.
Let's prove $2) \Longrightarrow 3)$. We assume 2), that is $y \sim \frac{\ln(k)}{n}$. We can then compute that
\begin{eqnarray*}
y (1+y)^n &=& \frac{\ln(k)}{n} e^{n \ln(1 + \frac{\ln(k)}{n})} \\
&=& \frac{\ln(k)}{n} e^{\ln(k) - \frac{(\ln(k))^2}{2n} + o\left(\frac{(\ln(k))^2}{n}\right)} \\
&=& \frac{\ln(k)}{n} k \, e^{- \frac{(\ln(k))^2}{2n} + o\left(\frac{(\ln(k))^2}{n}\right)} \\
&\sim& \frac{k \ln(k)}{n}
\end{eqnarray*}
This implies that $\frac{k \ln(k)}{n} \to 0$, which is property 3).
And finally, we prove that $3) \Longrightarrow 1)$. We assume $k \ln(k) = o(n)$. Let $\epsilon > 0$ be a fixed parameter. We prove that, for $a$ sufficiently large,
$$\frac{\ln(k) - \epsilon}{n} < y < \frac{\ln(k) + \epsilon}{n}.$$
To do that, we compute the asymptotic expansion of $f_{a,b}\left(1+ \frac{\ln(k) \pm \epsilon}{n}\right)$ and look at its sign. Indeed we have
\begin{eqnarray*}
\left(1 + \frac{\ln(k) \pm \epsilon}{n}\right)^n &=& e^{n \ln\left(1+ \frac{\ln(k) \pm \epsilon}{n}\right)}\\
&=& e^{(\ln(k) \pm \epsilon) - \frac{(\ln(k) \pm \epsilon)^2}{2n} + o\left(\frac{(\ln k \pm \epsilon)^2}{2n}\right)}\\
&=& k \, e^{\pm\epsilon} e^{- \frac{(\ln(k) \pm \epsilon)^2}{2n} + o\left(\frac{(\ln k \pm \epsilon)^2}{2n}\right)}\\
&=& k \, e^{\pm\epsilon} + o(k)
\end{eqnarray*}
And
\begin{eqnarray*}
\frac{\left(1 + \frac{\ln(k) \pm \epsilon}{n}\right)^k-1}{\frac{\ln(k) \pm \epsilon}{n}} &=& \frac{n}{\ln(k) \pm \epsilon} \left(e^{k \ln\left(1+ \frac{\ln(k) \pm \epsilon}{n}\right)}-1\right) \\
&=& \frac{n}{\ln(k) \pm \epsilon} \left(e^{\frac{k(\ln(k) \pm \epsilon)}{n} + o\left(\frac{k(\ln(k) \pm \epsilon)}{n}\right)} - 1 \right)\\
&=& k + o(k)
\end{eqnarray*}
So we finally get
$$f_{a,b}\left(1+ \frac{\ln(k) \pm \epsilon}{n}\right) \sim (e^{\pm\epsilon}-1) \, k.$$
This means that, for $a$ sufficiently large, we have $f_{a,b}\left(1+ \frac{\ln(k) - \epsilon}{n}\right) < 0$ and $f_{a,b}\left(1+ \frac{\ln(k) + \epsilon}{n}\right) > 0$, so there is a root of $f_{a,b}$ between those two boundaries. But because there is only one positive root, it must be $x_{a,b}=1+y$, and we deduce
$$\frac{\ln(k) - \epsilon}{n} < y < \frac{\ln(k) + \epsilon}{n}.$$
And now we can asymptotically bound the quantity $y (1+y)^n$ by
$$ \underbrace{\frac{\ln(k) - \epsilon}{n} \left(1 + \frac{\ln(k) - \epsilon}{n}\right)^n}_{\sim \frac{k (\ln(k)-\epsilon) e^{-\epsilon}}{n}} < y (1+y)^n < \underbrace{\frac{\ln(k) + \epsilon}{n} \left(1 + \frac{\ln(k) + \epsilon}{n}\right)^n }_{\sim \frac{k (\ln(k)+\epsilon) e^{\epsilon}}{n}},$$
where both bounds converge to $0$ by the assumption that $k \ln(k) = o(n)$, which proves 1).
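To illustrate the sign change numerically (my addition; the concrete values $\epsilon=0.1$, $k=4$ and the listed $b$ are arbitrary choices, and only the Python standard library is used), one can evaluate $f_{a,b}$ at the two bracketing points $1+\frac{\ln(k)\pm\epsilon}{n}$:

```python
import math

def f(x, a, b):
    # f_{a,b}(x) = x^(2b) - (x^(b-a) - 1)/(x - 1)
    return x**(2 * b) - (x**(b - a) - 1.0) / (x - 1.0)

eps = 0.1
k = 4                                      # fixed k = b - a, so k*ln(k) = o(n) holds
for b in (100, 400, 1000):
    a, n = b - k, 2 * b
    lo = 1.0 + (math.log(k) - eps) / n
    hi = 1.0 + (math.log(k) + eps) / n
    # expect f(lo) < 0 < f(hi) once b is large, pinning the root between the two points
    print(b, f(lo, a, b), f(hi, a, b))
```

The two printed values should approach $(e^{-\epsilon}-1)k$ and $(e^{\epsilon}-1)k$ respectively, matching the asymptotics above.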
answered Dec 22 '18 at 20:05, edited Dec 23 '18 at 2:16 – Joel Cohen
Where does the condition $k\ln(k)=o(n)$ come from? Of course, you showed the equivalences, but what is the origin of this condition? How does one see that $y(1+y)^n\to 0$ if and only if this condition holds? Is it possible to say this directly, without using statement 2)? Or, asked in other words: how did you find this condition to be necessary? The factor $y$ tends to $0$, and the factor $(1+y)^n$ then needs to exist, right? Is $k\ln(k)=o(n)$ necessary to have convergence of $(1+y)^n$?
– J. Doe, Dec 23 '18 at 10:42

Those are good questions. In hindsight, my approach can be simplified, and it contains a mistake (we have $1) \Leftrightarrow 3)$ and $1) \Rightarrow 2)$, but I don't think $2) \Rightarrow 1)$). I'll try to update it to correct the mistake and take a simpler approach (using the fact that the condition $y (1+y)^n \to 0$ is actually equivalent to $ky \to 0$, which simplifies the proof).
– Joel Cohen, Dec 23 '18 at 11:18

I think you are pointing to the following: 1) implies 2) and 3). On the other hand, 2) and 3) together imply $ky\sim\frac{k\ln k}{n}\to 0$ as $a\to\infty$. Using $e^{ky}\sim e^{k\ln(1+y)}=(1+y)^k=1+y(1+y)^n$ and $e^{ky}\to e^0=1$ as $a\to\infty$, this implies that the RHS tends to $1$, i.e. $1+y(1+y)^n\to 1$ as $a\to\infty$, meaning that $y(1+y)^n\to 0$ as $a\to\infty$. Hence, 1) holds exactly if $ky\to 0$ as $a\to\infty$. In particular, if $b-a=\mathrm{const}$, the condition $ky\to 0$ as $a\to\infty$ is satisfied; at least if $b-a$ is no multiple of $e$ (which I think has to be excluded?).
– J. Doe, Dec 25 '18 at 16:22

Addendum: For example, $b=2a$ would not fit into the framework (unless we assume that $y=o(1/a)$ as $a\to\infty$ in order to ensure that $ky\to 0$ as $a\to\infty$)?
– J. Doe, Dec 26 '18 at 11:04

Yes, $y(1+y)^n \to 0$ is equivalent to $(1+y)^k \to 1$ (from equation $(*)$). Taking logarithms, this is equivalent to $k \ln(1+y) \to 0$, which is equivalent to $ky\to 0$ (because $\ln(1+y) \sim y$).
– Joel Cohen, Dec 27 '18 at 12:07