Simple Markov Chain Walk - Expected position after $k$ steps
I am given the following Markov chain and would like to estimate my expected position after $k$ iterations.
The chain lives in $\mathbb{Z}$. I start at position $1$; with probability $q_1$ I stay at the same position, and with probability $1-q_1$ I move to position $2$.
In general, when I am at location $i$, I either stay with probability $q_i$ or move to location $i+1$ with probability $1-q_i$, for $i = 1, 2, \ldots, k-1$.
It is important to note that we can never move back.
Hypothesis: We assume that $0 \leq q_1 \leq q_2 \leq \ldots \leq q_{k-1} \leq 1$. This means that the further we move along the chain, the likelier we are to stay at the same position.
We start at position $1$ and perform $k-1$ steps, so the maximum possible position at the end is $k$. We say that a location $j$ is unvisited if, after $k-1$ steps, we stop at a location $i$ with $i < j$.
I would like to upper bound the expected number of unvisited positions and show the following:
$$ \mathbb{E}\left[ |\textrm{unvisited}| \right] = \sum_{i = 1}^{k-1} \mathbb{P}\left( \textrm{be at $i$ after $k-1$ steps} \right)\cdot (k-i) \leq \sum_{i = 1}^{k-1} q_i. $$
I have verified that this holds for $k = 2, 3, 4$, but could not manage to prove the general case. It is important to use that $0 \leq q_1 \leq q_2 \leq \ldots \leq q_{k-1} \leq 1$.
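As a numerical sanity check, the chain is easy to simulate; the sketch below estimates the expected number of unvisited positions by Monte Carlo and compares it with the claimed bound. The specific values in `qs` are arbitrary illustrative choices (not from the problem), picked only to satisfy the monotonicity hypothesis.

```python
import random

def mean_unvisited(qs, trials=100_000, seed=0):
    """Simulate the chain with stay-probabilities qs = [q_1, ..., q_{k-1}],
    starting at position 1 and taking k-1 steps; return the empirical
    mean of the number of unvisited positions, k - X_k."""
    rng = random.Random(seed)
    k = len(qs) + 1
    total = 0
    for _ in range(trials):
        pos = 1
        for _ in range(k - 1):
            # at position pos, stay with probability q_pos, else move to pos+1
            if rng.random() >= qs[pos - 1]:
                pos += 1
        total += k - pos
    return total / trials

qs = [0.1, 0.2, 0.3, 0.4]   # nondecreasing, as the hypothesis requires
print(mean_unvisited(qs), "<=", sum(qs))
```

For these values the empirical mean stays below $\sum_i q_i = 1$, consistent with the conjectured bound.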
probability markov-chains random-walk expected-value
– Did (Jan 9 at 8:29): Got something from the answer below?
– Theo (Jan 9 at 11:15): @Did: Yes, you solved my problem. I was trying to solve it in a more analytic way, which I was unable to finish. Your proof is much more elegant :) Thanks a lot.
– Did (Jan 9 at 15:27): Cool.
asked Dec 30 '18 at 10:00 by Theo, edited Dec 30 '18 at 10:06
1 Answer
Let $X_k$ denote the position after $k$ steps. Then, for every $k\geqslant1$, the one-step Markov property of the process yields $$E(X_{k+1}\mid X_k=x)=x+1-q_x$$
Since $X_k\leqslant k$ almost surely and $q_x\leqslant q_k$ for every $x\leqslant k$, $$E(X_{k+1}\mid X_k=x)\geqslant x+1-q_k$$ hence $$E(X_{k+1}\mid X_k)\geqslant X_k+1-q_k$$ Integrating both sides of this inequality, one sees that $$E(X_{k+1})\geqslant E(X_k)+1-q_k$$ Since $X_1=1$ almost surely, this yields $$E(X_k)\geqslant1+\sum_{i=1}^{k-1}(1-q_i)=k-\sum_{i=1}^{k-1}q_i$$ Finally, the number $U_k$ of vertices less than $k$ unvisited at time $k$ is $U_k=k-X_k$, hence, as required, for every $k\geqslant1$, $$E(U_k)\leqslant\sum_{i=1}^{k-1}q_i$$
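The argument can also be checked exactly: since the chain only stays or moves one step right, the distribution of $X_j$ can be propagated step by step and $E(U_k)=k-E(X_k)$ computed in closed form. A minimal sketch (the values in `qs` are illustrative assumptions, not from the post):

```python
def expected_unvisited(qs):
    """Exact E[U_k] = k - E[X_k], computed by propagating the
    distribution of the position X_j one step at a time,
    where qs = [q_1, ..., q_{k-1}]."""
    k = len(qs) + 1
    dist = {1: 1.0}                      # X_1 = 1 almost surely
    for _ in range(k - 1):
        nxt = {}
        for pos, p in dist.items():
            q = qs[pos - 1]              # stay probability at position pos
            nxt[pos] = nxt.get(pos, 0.0) + p * q
            nxt[pos + 1] = nxt.get(pos + 1, 0.0) + p * (1 - q)
        dist = nxt
    mean_pos = sum(pos * p for pos, p in dist.items())
    return k - mean_pos

qs = [0.1, 0.2, 0.3, 0.4]
print(expected_unvisited(qs), "<=", sum(qs))   # verifies E(U_k) <= sum q_i
```

For any nondecreasing `qs` this exact value stays below $\sum_i q_i$, matching the inequality derived above.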
– Theo (Jan 2 at 19:53): Thanks a lot for your answer. It really helped me.
– Satish Ramanathan (Jan 5 at 4:18): A further explanation of the first step: $$E(X_{k+1}\mid X_k=x)=q_x\cdot x+(1-q_x)\cdot(x+1) \implies E(X_{k+1}\mid X_k=x)=x+1-q_x$$
– Did (Jan 5 at 10:40): @SatishRamanathan Useful comment?
answered Dec 31 '18 at 20:10 by Did