Rigorous proof that $\int_{\Omega}X\,dP=\int_{-\infty}^{\infty}xf(x)\,dx$
I'm trying to prove rigorously that $\int_{\Omega}X\,dP=\int_{-\infty}^{\infty}xf(x)\,dx$, where $f$ is the pdf of the random variable $X$.
I can't find a proof in the Wikipedia article, or if it's there, it's disguised enough that I can't recognize it. What I've got is a sort of semi-rigorous (and probably incorrect) proof of the equality; maybe somebody could help me flesh out the details.
Proof:
Begin from the definition $E[X]:=\int_{\Omega}X\,dP$.
Given a random variable $X:\Omega\rightarrow\mathbb{R}$, we have the induced measure on $(\mathbb{R}, \mathscr{B}(\mathbb{R}))$ given by $A\mapsto P(\{X\in A\})$. Then by the Radon–Nikodym theorem there exists a measurable function $f:\mathbb{R}\rightarrow [0,\infty)$ such that $$P(\{X\in A\}) = \int_A f\,d\mu,$$
where $\mu$ is Lebesgue measure. From here it gets a bit hand-wavy. Since this measure was induced by the random variable $X$, on $\mathbb{R}$ the random variable is simply the identity function $g(x)=x$. Thus by definition we write the expected value of $g$ with respect to our Radon–Nikodym-produced measure in the form $$\int_{-\infty}^{\infty}x\;d\Big(\int_A f\,d\mu\Big).$$
Now by the Fundamental Theorem of Calculus, this becomes $$\int_{-\infty}^{\infty}xf(x)\,dx.$$
What does everyone think about this?
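(The identity itself is easy to sanity-check numerically. The sketch below is my own illustration, not part of the argument above; it takes $X\sim\mathrm{Exp}(1)$, for which both sides equal $1$.)

```python
import math
import random

random.seed(0)

# Left side: the abstract integral over Omega, approximated by the sample
# mean of independent draws of X ~ Exp(1).
n = 200_000
sample_mean = sum(random.expovariate(1.0) for _ in range(n)) / n

# Right side: integral of x * f(x) with f(x) = exp(-x), computed by the
# trapezoidal rule on [0, 40] (the tail beyond 40 is negligible).
def integrand(x):
    return x * math.exp(-x)

a, b, m = 0.0, 40.0, 200_000
h = (b - a) / m
density_integral = h * (0.5 * (integrand(a) + integrand(b))
                        + sum(integrand(a + i * h) for i in range(1, m)))

print(sample_mean, density_integral)  # both should be close to E[X] = 1
```

Of course this checks only one distribution; the answers below give the general argument.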
probability-theory measure-theory expected-value faq
asked May 26 '13 at 6:22 by Thoth
edited Nov 13 '18 at 12:02 by Lee David Chung Lin
3 Answers
I'll try to provide a slightly more general result. Let $(\Omega, \mathcal{E}, P)$ be a probability space and let $X\colon \Omega\longrightarrow \mathbb{R}$ be a random variable, i.e. for each $I\in \mathcal{B}$, $X^{-1}(I)\in\mathcal{E}$, where $\mathcal{B}$ is the usual Borel $\sigma$-algebra on $\mathbb{R}$. Let us write $\mu:=\mu_{X}$ for the probability distribution of $X$, i.e. for the measure defined on $\mathcal{B}$ by $\mu(I):=P(X^{-1}(I))$ for each $I\in\mathcal{B}$. Then the following holds.
Theorem (Abstract–Concrete Formula): Let $\phi\colon\mathbb{R}\longrightarrow \mathbb{R}$ be a borelian function, i.e. $\phi^{-1}(I)\in\mathcal{B}$ for every $I\in\mathcal{B}$, and write $\phi(X)$ for the composition $\phi\circ X$. Suppose at least one of the integrals $$\int_{\Omega} \phi(X)\,dP\quad\text{and}\quad \int_{\mathbb{R}}\phi(x)\,d\mu$$
exists (resp. exists and is finite). Then the other one also exists (resp. exists and is finite), and $$\int_{\Omega} \phi(X)\,dP=\int_{\mathbb{R}}\phi(x)\,d\mu.$$
In particular, $\phi(X)$ is summable with respect to $P$ if and only if $\phi$ is summable with respect to $\mu$.
(When I say that a Lebesgue integral of a measurable function exists, I allow it to be infinite.) The proof is quite straightforward, but it requires some measure-theoretic tools: approximation by simple functions and Lebesgue's monotone convergence theorem.
Suppose first that $\phi$ is a (finitely) simple, positive function. Then $\phi(X)$ is also simple (and positive), and therefore both integrals always exist. Writing $\phi=\sum_{i=1}^{n} c_{i}1_{E_{i}}$, where $n=\vert \phi(\mathbb{R})\vert$, $\phi(\mathbb{R})=\{c_{1},\dots,c_{n}\}$ and $E_{i}:=\phi^{-1}(\{c_{i}\})$, we get $$\int_{\mathbb{R}}\phi(x)\,d\mu=\sum_{i=1}^{n}c_{i}\mu(E_{i})=\sum_{i} c_{i}P(X^{-1}(E_{i}))=\sum_{i} c_{i}P(X^{-1}(\phi^{-1}(\{c_{i}\})))=\int_{\Omega} \phi(X)\,dP.$$
Assume now that $\phi$ is a non-negative borelian function. Then there exists a non-decreasing sequence $(\phi_{n})_{n\in \mathbb{N}}$ of simple, positive functions such that $\lim\limits_{n\to \infty}\phi_{n}(x)=\phi(x)$ for every $x\in\mathbb{R}$. By the monotone convergence theorem we get immediately that $$\int_{\mathbb{R}}\phi(x)\,d\mu=\int_{\mathbb{R}}\Big(\lim_{n\to\infty}\phi_{n}(x)\Big)\,d\mu =\lim_{n\to\infty} \int_{\mathbb{R}}\phi_{n}(x)\,d\mu=\lim_{n\to\infty} \int_{\Omega}\phi_{n}(X)\,dP=\int_{\Omega} \phi(X)\,dP.$$
Finally, suppose only that $\phi\colon \mathbb{R}\longrightarrow\mathbb{R}$ is a borelian function, with no further restrictions. Then one can write $\phi=\phi^{+}-\phi^{-}$. Suppose $\int_{\mathbb{R}}\phi(x)\,d\mu$ exists: then at least one of $\int_{\mathbb{R}}\phi^{+}(x)\,d\mu$ and $\int_{\mathbb{R}}\phi^{-}(x)\,d\mu$ (say, the first) must be finite, and hence $\int_{\Omega}(\phi(X))^{+}\,dP$ is also finite, i.e. $\int_{\Omega}\phi(X)\,dP$ exists. Applying the non-negative case to $\phi^{+}$ and $\phi^{-}$ separately now gives the claim.
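To see the simple-function step concretely, here is a small numerical sketch (my illustration, assuming $\mu=\mathrm{Exp}(1)$ and $\phi(x)=x$, chosen only for concreteness) of the standard dyadic approximants increasing toward $\int_{\mathbb{R}} x\,d\mu = 1$:

```python
import math

# Standard dyadic simple-function approximants of phi(x) = x:
# phi_n(x) = min(floor(2^n x) / 2^n, n), non-decreasing in n and converging
# pointwise to x.
def phi_n(x, n):
    return min(math.floor((2 ** n) * x) / (2 ** n), n)

# Riemann-sum approximation of the integral of phi_n against mu = Exp(1),
# whose density is exp(-x); truncating at 40 costs only ~exp(-40).
def integral_against_mu(n, grid=200_000, upper=40.0):
    h = upper / grid
    return sum(phi_n(i * h, n) * math.exp(-i * h) * h for i in range(grid))

# Monotone convergence in action: the values increase toward E[X] = 1.
values = [integral_against_mu(n) for n in (1, 2, 4, 8)]
print(values)
```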
Corollary: In the previous situation, if $E[X]$ exists, then $$E[X]:=\int_{\Omega} X\,dP=\int_{\mathbb{R}} x\,d\mu.$$ In particular, if $\mu$ is absolutely continuous with respect to Lebesgue measure on $\mathbb{R}$ with density $f$, then $E[X]=\int_{\mathbb{R}} xf(x)\,dx$ (by a straightforward consequence of the Radon–Nikodym theorem).
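A quick Monte Carlo sanity check of the theorem, sketched for the illustrative choice $\phi(x)=x^{2}$ and $X\sim\mathrm{Uniform}(0,1)$ (not from the answer itself), where both integrals equal $1/3$:

```python
import random

random.seed(1)

# phi(x) = x^2 and X ~ Uniform(0, 1), so both sides should equal 1/3.
phi = lambda x: x * x

# Left side: sample mean of phi(X), approximating the integral over Omega.
n = 200_000
lhs = sum(phi(random.random()) for _ in range(n)) / n

# Right side: integral of phi against the distribution mu of X, here the
# midpoint-rule integral of x^2 over [0, 1].
m = 100_000
h = 1.0 / m
rhs = sum(phi((i + 0.5) * h) * h for i in range(m))

print(lhs, rhs)  # both close to 1/3
```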
Comments:
– Thoth (May 29 '13 at 19:54): It took me a while to fully digest this but I now see that it is exactly the answer I was looking for, thanks.
– Marco Vergura (May 29 '13 at 21:20): You're welcome!
– kahen (May 30 '13 at 8:45): "Let $\phi\colon \mathbb{R}\to \mathbb{R}$ be a borelian function [...]" Doesn't "borelian" mean Arctic? As in aurora borealis?
– Marco Vergura (May 30 '13 at 9:07): @kahen Quite a deep comment, I'd say... However, one talks about borelian sets, so I don't see why you can't just use the same attribute for a function: borelian $\leftrightarrow$ Borel measurable. Besides this, I might also call it an ice-cream function if I define what such a function is: in mathematics, at least, it is not so true that nomina nuda tenemus.
– kahen (May 30 '13 at 9:15): @MarcoVergura In English mathematical parlance one speaks of "Borel functions" and "Borel sets". Names of mathematicians are very commonly used as adjectives.
$begingroup$
If $Xgeqslant0$ almost surely, Fubini yields
$$
E[X]=int_Omega Xmathrm dP=int_Omegaint_0^inftymathbf 1_{tleqslant X}mathrm dtmathrm dP=int_0^inftyint_Omegamathbf 1_{tleqslant X}mathrm dPmathrm dt=int_0^infty P[Xgeqslant t]mathrm dt.
$$
Now, for each $t$,
$$
P[Xgeqslant t]=int_0^infty mathbf 1_{xgeqslant t}f(x)mathrm dx,
$$
hence Fubini again yields
$$
E[X]=int_0^inftyint_0^infty mathbf 1_{xgeqslant t}f(x)mathrm dxmathrm dt=int_0^inftyint_0^infty mathbf 1_{xgeqslant t}mathrm dtf(x)mathrm dx=int_0^infty xf(x)mathrm dx.
$$
If $X$ is real valued with $P[Xgt0]cdot P[Xlt0]ne0$, use the identity $E[X]=aE[Y]-bE[Z]$ where $a=P[Xgt0]$, $b=P[Xlt0]$, $Y$ has the distribution of $X$ conditioned on $Xgt0$ and $Z$ has the distribution of $-X$ conditioned on $Xlt0$, that is, $a=1-F(0)$, $b=F(0)$,
$$
f_Y(y)=frac{f(y)}{a}mathbf 1_{ygt0},qquad f_Z(z)=frac{f(-z)}{b}mathbf 1_{zgt0}.
$$
This yields
$$
E[X]=aint_0^infty yfrac{f(y)}{a}mathrm dy-bint_0^infty zfrac{f(-z)}{b}mathrm dz=int_{-infty}^{+infty}xf(x)mathrm dx.
$$
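The first display, the layer-cake formula $E[X]=\int_0^\infty P[X\geqslant t]\,\mathrm dt$, can be sanity-checked numerically; a sketch (my own, assuming $X\sim\mathrm{Exp}(1)$, where $P[X\geqslant t]=e^{-t}$):

```python
import math
import random

random.seed(2)

# For X ~ Exp(1), P[X >= t] = exp(-t), so the layer-cake integral equals 1.
m = 200_000
upper = 40.0          # truncating the tail costs only ~exp(-40)
h = upper / m
layer_cake = sum(math.exp(-(i + 0.5) * h) * h for i in range(m))  # midpoint rule

# Direct sample mean of X for comparison.
n = 200_000
sample_mean = sum(random.expovariate(1.0) for _ in range(n)) / n

print(layer_cake, sample_mean)  # both close to 1
```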
Comments:
– Thoth (May 26 '13 at 7:30): Could you explain what $\int_0^{\infty}\mathbf 1_{t\leqslant X}\,\mathrm dt$ means?
– Did (May 26 '13 at 7:33): The random variable $U=\mathbf 1_{t\leqslant X}$ is defined by $U(\omega)=1$ if $t\leqslant X(\omega)$ and $U(\omega)=0$ otherwise.
– Thoth (May 26 '13 at 7:36): But what does it mean when you're integrating with respect to $t$? I can't figure out how that equals $X$.
– Did (May 26 '13 at 7:41): For every $s\geqslant0$, $\int_0^\infty\mathbf 1_{t\leqslant s}\,\mathrm dt=\int_0^s\mathrm dt=s$. This applies to $s$ a real number and, pointwise, to $s$ random.
– Thoth (May 26 '13 at 7:43): Oh ok, I see, thanks.
$begingroup$
For non-negative $X$, argue as in @Did's proof the well-known result $E[X]=int_0^infty P[Xge t],dt$. For general integrable $X$ we have by definition $E[X]=E[X^+]-E[X^-]$. Since $X^+$ is non-negative, its expectation equals
$$
begin{align}
int_0^infty P[X^+ge t],dt&=int_0^infty P[Xge t],dt\
&=int_{t=0}^inftyint_{x=0}^infty I[xge t]f(x),dx,dtstackrel{rm Fubini}=int_{x=0}^inftyunderbrace{int_{t=0}^infty I[tle x]dt}_x, f(x)dx
end{align}
$$
and similarly the expectation of $X^-$ equals
$$
begin{align}
int_0^infty P[X^-ge t],dt&=int_0^infty P[Xle -t],dt\
&=int_{t=0}^inftyint_{x=-infty}^0 I[xle -t]f(x),dx,dt
stackrel{rm Fubini}=int_{x=-infty}^0underbrace{int_{t=0}^infty I[tle -x]dt}_{-x}, f(x)dx.
end{align}
$$
Adding these together gives $E[X]=int_{-infty}^0 xf(x)dx +int_0^infty xf(x)dx$.
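The positive/negative-part decomposition can be checked numerically as well; a sketch with the illustrative choice $X = U - 0.3$, $U\sim\mathrm{Uniform}(0,1)$ (so $E[X]=0.2$ and $X$ takes both signs):

```python
import random

random.seed(3)

# X = U - 0.3 with U ~ Uniform(0, 1), so E[X] = 0.2.
n = 400_000
xs = [random.random() - 0.3 for _ in range(n)]

e_pos = sum(max(x, 0.0) for x in xs) / n   # estimate of E[X+]
e_neg = sum(max(-x, 0.0) for x in xs) / n  # estimate of E[X-]
e_direct = sum(xs) / n                     # estimate of E[X] directly

print(e_pos - e_neg, e_direct)  # both close to 0.2
```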
edited May 30 '13 at 8:12
answered May 26 '13 at 8:34
Marco VerguraMarco Vergura
3,1011930
3,1011930
1
$begingroup$
It took me a while to fully digest this but I now see that it is exactly the answer I was looking for, thanks.
$endgroup$
– Thoth
May 29 '13 at 19:54
1
$begingroup$
You're welcome!
$endgroup$
– Marco Vergura
May 29 '13 at 21:20
1
$begingroup$
"Let $phi: mathbb Rto mathbb R$ be a borelian function […]" Doesn't "borelian" mean Arctic? As in aurora borealis?
$endgroup$
– kahen
May 30 '13 at 8:45
1
$begingroup$
@kahen Quite a deep comment, I'd say...However, one talks about borelian sets, so I don't see why you can't just use the same attribute for a function: borelian $leftrightarrow$ Borel measurable. Besides this, I might also call it ice-cream function if I define what such a function is: in mathematics, at least, it is not so true that nomina nuda tenemus.
$endgroup$
– Marco Vergura
May 30 '13 at 9:07
2
$begingroup$
@MarcoVergura in English mathematical parlance one speaks of "Borel functions" and "Borel sets". Names of mathematicians are very commonly used as adjectives.
$endgroup$
– kahen
May 30 '13 at 9:15
|
show 2 more comments
If $X\geqslant0$ almost surely, Fubini yields
$$
E[X]=\int_\Omega X\,\mathrm dP=\int_\Omega\int_0^\infty\mathbf 1_{t\leqslant X}\,\mathrm dt\,\mathrm dP=\int_0^\infty\int_\Omega\mathbf 1_{t\leqslant X}\,\mathrm dP\,\mathrm dt=\int_0^\infty P[X\geqslant t]\,\mathrm dt.
$$
Now, for each $t$,
$$
P[X\geqslant t]=\int_0^\infty \mathbf 1_{x\geqslant t}f(x)\,\mathrm dx,
$$
hence Fubini again yields
$$
E[X]=\int_0^\infty\int_0^\infty \mathbf 1_{x\geqslant t}f(x)\,\mathrm dx\,\mathrm dt=\int_0^\infty\int_0^\infty \mathbf 1_{x\geqslant t}\,\mathrm dt\,f(x)\,\mathrm dx=\int_0^\infty xf(x)\,\mathrm dx.
$$
If $X$ is real valued with $P[X\gt0]\cdot P[X\lt0]\ne0$, use the identity $E[X]=aE[Y]-bE[Z]$, where $a=P[X\gt0]$, $b=P[X\lt0]$, $Y$ has the distribution of $X$ conditioned on $X\gt0$ and $Z$ has the distribution of $-X$ conditioned on $X\lt0$; that is, $a=1-F(0)$, $b=F(0)$,
$$
f_Y(y)=\frac{f(y)}{a}\mathbf 1_{y\gt0},\qquad f_Z(z)=\frac{f(-z)}{b}\mathbf 1_{z\gt0}.
$$
This yields
$$
E[X]=a\int_0^\infty y\frac{f(y)}{a}\,\mathrm dy-b\int_0^\infty z\frac{f(-z)}{b}\,\mathrm dz=\int_{-\infty}^{+\infty}xf(x)\,\mathrm dx.
$$
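As a numerical sanity check (not part of the original answer), the layer-cake identity $E[X]=\int_0^\infty P[X\geqslant t]\,\mathrm dt$ used above can be compared against the plain sample mean; a minimal sketch, choosing $X\sim\mathrm{Exp}(1)$ purely for illustration (so $E[X]=1$):

```python
import numpy as np

# X ~ Exp(1), so E[X] = 1; sort once so empirical tails are cheap.
rng = np.random.default_rng(0)
samples = np.sort(rng.exponential(scale=1.0, size=200_000))

# Direct Monte-Carlo estimate of the abstract integral ∫_Ω X dP.
direct = samples.mean()

# Empirical tail P[X >= t] on a grid, then a trapezoidal quadrature of
# ∫_0^∞ P[X >= t] dt (truncated at t = 20, where the tail is negligible).
ts = np.linspace(0.0, 20.0, 2001)
tail = 1.0 - np.searchsorted(samples, ts, side="left") / samples.size
layer_cake = float(np.sum((tail[:-1] + tail[1:]) / 2.0 * np.diff(ts)))

print(direct, layer_cake)  # both ≈ 1
```

Both estimates agree up to Monte-Carlo and quadrature error, as the first Fubini step in the answer predicts.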
could you explain what $\int_0^{\infty}\mathbf 1_{t\leq X}\,dt$ means? – Thoth, May 26 '13 at 7:30

The random variable $U=\mathbf 1_{t\leqslant X}$ is defined by $U(\omega)=1$ if $t\leqslant X(\omega)$ and $U(\omega)=0$ otherwise. – Did, May 26 '13 at 7:33

but what does it mean when you're integrating with respect to $t$? I can't figure out how that equals $X$ – Thoth, May 26 '13 at 7:36

For every $s\geqslant0$, $\int\limits_0^\infty\mathbf 1_{t\leqslant s}\,\mathrm dt=\int\limits_0^s\mathrm dt=s$. This applies to $s$ a real number and, pointwise, to $s$ random. – Did, May 26 '13 at 7:41

oh ok I see, thanks – Thoth, May 26 '13 at 7:43
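The identity in Did's last comment, $\int_0^\infty\mathbf 1_{t\leqslant s}\,\mathrm dt=s$, can also be seen by a crude quadrature; a minimal sketch with one arbitrary value of $s$:

```python
import numpy as np

# ∫_0^∞ 1_{t <= s} dt = s: the indicator equals 1 exactly on [0, s],
# so the integral is just the length of that interval.
s = 2.75                                # arbitrary non-negative value
ts = np.linspace(0.0, 10.0, 1_000_001)  # grid covering [0, 10] ⊇ [0, s]
indicator = (ts <= s).astype(float)
integral = float(np.sum((indicator[:-1] + indicator[1:]) / 2.0 * np.diff(ts)))

print(integral)  # ≈ 2.75
```

Applied pointwise with $s=X(\omega)$, this is exactly why $\int_0^\infty\mathbf 1_{t\leqslant X}\,\mathrm dt=X$.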
answered May 26 '13 at 6:45 by Did
For non-negative $X$, argue as in @Did's proof to obtain the well-known result $E[X]=\int_0^\infty P[X\ge t]\,dt$. For general integrable $X$ we have by definition $E[X]=E[X^+]-E[X^-]$. Since $X^+$ is non-negative, its expectation equals
$$
\begin{align}
\int_0^\infty P[X^+\ge t]\,dt&=\int_0^\infty P[X\ge t]\,dt\\
&=\int_{t=0}^\infty\int_{x=0}^\infty I[x\ge t]f(x)\,dx\,dt\stackrel{\rm Fubini}=\int_{x=0}^\infty\underbrace{\int_{t=0}^\infty I[t\le x]\,dt}_{x}\,f(x)\,dx
\end{align}
$$
and similarly the expectation of $X^-$ equals
$$
\begin{align}
\int_0^\infty P[X^-\ge t]\,dt&=\int_0^\infty P[X\le -t]\,dt\\
&=\int_{t=0}^\infty\int_{x=-\infty}^0 I[x\le -t]f(x)\,dx\,dt
\stackrel{\rm Fubini}=\int_{x=-\infty}^0\underbrace{\int_{t=0}^\infty I[t\le -x]\,dt}_{-x}\,f(x)\,dx.
\end{align}
$$
Subtracting the second from the first gives $E[X]=\int_{-\infty}^0 xf(x)\,dx+\int_0^\infty xf(x)\,dx=\int_{-\infty}^{\infty}xf(x)\,dx$.
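The decomposition $E[X]=E[X^+]-E[X^-]$, and the tail formula $E[X^-]=\int_0^\infty P[X\le -t]\,dt$ used for the negative part, can be sanity-checked numerically; a minimal sketch, taking $X\sim N(0.5,1)$ purely for illustration (so $E[X]=0.5$):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.5, scale=1.0, size=400_000)  # X ~ N(0.5, 1), E[X] = 0.5

pos = np.maximum(x, 0.0)   # X^+ = max(X, 0)
neg = np.maximum(-x, 0.0)  # X^- = max(-X, 0), so X = X^+ - X^-

# E[X^-] via its lower-tail integral ∫_0^∞ P[X <= -t] dt (truncated at 8).
sorted_x = np.sort(x)
ts = np.linspace(0.0, 8.0, 801)
low = np.searchsorted(sorted_x, -ts, side="right") / x.size  # P[X <= -t]
ex_minus = float(np.sum((low[:-1] + low[1:]) / 2.0 * np.diff(ts)))

print(pos.mean() - neg.mean(), x.mean())  # both ≈ 0.5
print(ex_minus, neg.mean())               # two estimates of E[X^-], ≈ equal
```

The difference of the two part-wise means recovers the direct sample mean, and the tail integral reproduces the sample mean of $X^-$, mirroring the two Fubini computations above.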
answered Dec 6 '18 at 23:01 by grand_chat