Expected value of Normal Lognormal Mixture












I need to compute the following covariance:
\begin{equation}
\operatorname{Cov}(X, \exp(-aX))
\end{equation}

where $X$ follows a normal distribution, $X \sim \mathcal{N}(0, \sigma^2)$, and $a$ is a constant scalar.

My findings:
From the definition of covariance I concluded that
\begin{equation}
\operatorname{Cov}(X, \exp(-aX)) = E[X \exp(-aX)]
\end{equation}

since $X$ is zero-mean. Hence the problem boils down to finding the first moment of the normal lognormal mixture.



Searching Stack Exchange and the wider internet, I found only one result that treats this topic (the work by Yang): http://repec.org/esAUSM04/up.21034.1077779387.pdf



It gives the first moments of the mixture $u = e^{\frac{1}{2}\eta} \epsilon$. The one I am interested in is stated as:
\begin{equation}
E(u) = \frac{1}{2} \rho \sigma e^{\frac{1}{8} \sigma^2}
\end{equation}



I cannot follow the "derivation" of this equation (none is actually given in the paper), but I believe that it is readily applicable to my LNL mixture.



The expected value has one factor which contains the covariance of the random variables considered by Yang, and another which contains the exponential of the process $\eta$.
In my case $\epsilon$ does not have unit variance, but variance $\sigma^2$.
Also, my $\eta$ is defined as $-2aX$ in order to apply Yang's logic.
Since these two processes are fully negatively correlated, I assume that the expected value should be:



\begin{equation}
E[X \exp(-aX)] = -a \sigma^2 \exp\left(\frac{1}{2} a^2 \sigma^2\right)
\end{equation}



In simulations, this expectation matches the Monte Carlo estimate of the moment very well, hence I suspect that the above reasoning is correct.
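For reference, a minimal Monte Carlo sketch of that check (the parameter values $a = 0.7$, $\sigma = 1.3$ and the sample size are arbitrary choices, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 0.7, 1.3                       # arbitrary test values
x = rng.normal(0.0, sigma, size=2_000_000)

# Monte Carlo estimate of E[X exp(-aX)] versus the postulated closed form
mc = np.mean(x * np.exp(-a * x))
closed = -a * sigma**2 * np.exp(0.5 * a**2 * sigma**2)
print(mc, closed)
```

The two printed values should agree up to Monte Carlo error (roughly the second decimal place at this sample size).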



My questions:



1) Is the above reasoning really correct?



2) How did Yang compute the expected value? Understanding the derivation would allow me to start directly from $X \exp(-aX)$, instead of fitting my mixture to his form.










  • Can't speak for how someone else calculated the expected value, but a reasonable approach would seem to be using $x e^{-ax} = -\frac{\partial}{\partial a} e^{-ax}$ and then the MGF of the normal.
    – Nadiels
    Dec 18 '18 at 12:16
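Worked out, the approach from the comment recovers the value conjectured above: using the normal MGF $E[e^{tX}] = e^{\frac{1}{2} t^2 \sigma^2}$ for $X \sim \mathcal{N}(0, \sigma^2)$ with $t = -a$, and differentiating under the expectation,
$$
E\left[X e^{-aX}\right] = -\frac{\partial}{\partial a} E\left[e^{-aX}\right]
= -\frac{\partial}{\partial a} e^{\frac{1}{2} a^2 \sigma^2}
= -a \sigma^2 e^{\frac{1}{2} a^2 \sigma^2}.
$$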
















normal-distribution expected-value






asked Dec 18 '18 at 11:38









Elarion

1 Answer
So while adapting the result from the paper to the perfectly correlated case would work, it isn't the approach I would suggest; instead I would go with the approach in my comment -- considering the bivariate case doesn't make anything simpler. That said, if you are curious how the result you are interested in is derived, the following is one way of going about it.



Let
$$
\begin{bmatrix}
X \\ Y
\end{bmatrix} \sim
\mathcal{N}\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix}
1 & \rho \sigma \\ \rho \sigma & \sigma^2
\end{bmatrix}
\right),
$$

then
$$
Y \mid X=x \sim \mathcal{N}\left(\rho\sigma x, \sigma^2(1-\rho^2)\right).
$$

So the idea is just going to be to use the MGF and properties of the conditional expectation; it is a little tedious, but it should go something like
$$
\begin{align}
\mathbb{E}\left[Xe^{\frac{Y}{2}}\right] &=
\mathbb{E}\left[ X\,\mathbb{E}\left[ e^{\frac{Y}{2}} \;\big|\; X \right]\right] \\
&= e^{\frac{\sigma^2(1-\rho^2)}{2^3}}\,\mathbb{E}\left[ X e^{\frac{\rho \sigma X}{2}}\right] \\
&= \frac{2}{\rho}e^{\frac{\sigma^2(1-\rho^2)}{2^3}}\frac{\partial}{\partial \sigma}\mathbb{E}\left[e^{\frac{\rho\sigma X}{2}} \right] \\
&= \frac{2}{\rho}e^{\frac{\sigma^2(1-\rho^2)}{2^3}}\frac{\partial}{\partial \sigma} e^{\frac{\rho^2 \sigma^2}{2^3}} \\
&= \frac{2}{\rho}e^{\frac{\sigma^2(1-\rho^2)}{2^3}} \cdot \frac{2 \rho^2 \sigma}{2^3}e^{\frac{\rho^2 \sigma^2}{2^3}} \\
&= \frac{1}{2} \rho \sigma e^{\frac{\sigma^2}{2^3}}
\end{align}
$$
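As a numerical sanity check on the final identity, here is a small simulation sketch (the values of $\rho$ and $\sigma$ are arbitrary assumptions; $X$ has unit variance, matching the covariance matrix above):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, n = -0.6, 1.1, 2_000_000     # arbitrary test values

# Draw (X, Y) jointly normal with Var(X)=1, Var(Y)=sigma^2, Cov(X,Y)=rho*sigma
x = rng.normal(0.0, 1.0, n)
z = rng.normal(0.0, 1.0, n)
y = rho * sigma * x + sigma * np.sqrt(1.0 - rho**2) * z

# Monte Carlo estimate of E[X exp(Y/2)] versus (1/2) rho sigma exp(sigma^2 / 8)
mc = np.mean(x * np.exp(y / 2.0))
closed = 0.5 * rho * sigma * np.exp(sigma**2 / 8.0)
print(mc, closed)
```

The two values should agree up to Monte Carlo error.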






  • This is great! After reading up on the law of total expectation and following your first comment I was able to understand the steps you carried out. In fact the good news is that they can be applied in the same sequence to solve my original problem using $\rho = -1$, $X \sim \mathcal{N}(0, \sigma^2)$ and $Y \sim \mathcal{N}(0, a^2 \sigma^2)$. The result obtained in this way is the one I assumed and postulated in the original post. Thanks!!
    – Elarion
    Dec 18 '18 at 19:17













answered Dec 18 '18 at 12:56









Nadiels
