Find the maximum value without computing the matrix
I have an optimization problem: find the $h^*$ such that
$$h^* = \arg\max_h \left(h'\alpha - \frac{\kappa}{2}\, h'\Sigma h\right)$$
where $\alpha$ is an $(n \times 1)$ vector and $\Sigma$ is an $(n \times n)$ matrix.
The matrix $\Sigma$ has the form
$$\Sigma = XFX' + D$$
where $X$ is an $(n \times k)$ matrix, $F$ is a covariance matrix of factor loadings, and $D = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$ is a diagonal matrix.
How can I write a function to compute $h^*$ without forming the $(n \times n)$ matrix?
linear-algebra optimization numerical-optimization
What kind of matrix is $\Sigma$? Is it perchance diagonal, symmetric, or Hermitian?
– I like Serena, Dec 10 '18 at 20:18
@IlikeSerena I have modified the question.
– Qing, Dec 10 '18 at 20:48
BFGS only requires function evaluations and gradients, which can be computed efficiently for your problem.
– LinAlg, Dec 10 '18 at 21:18
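To unpack this comment: the gradient is $\alpha - \kappa\Sigma h$, and the product $\Sigma h = X(F(X'h)) + Dh$ can be evaluated right-to-left in $O(nk)$ without ever forming $\Sigma$. A minimal sketch of this idea, assuming NumPy/SciPy (all function and variable names are illustrative, not from the thread):

```python
import numpy as np
from scipy.optimize import minimize

def sigma_matvec(h, X, F, sigma2):
    """Sigma @ h = X F X' h + D h, computed right-to-left in O(nk)."""
    return X @ (F @ (X.T @ h)) + sigma2 * h

def neg_obj_and_grad(h, alpha, X, F, sigma2, kappa):
    Sh = sigma_matvec(h, X, F, sigma2)
    f = h @ alpha - 0.5 * kappa * (h @ Sh)   # the objective to maximize
    return -f, -(alpha - kappa * Sh)         # minimize the negative; jac=True below

# Usage sketch:
# res = minimize(neg_obj_and_grad, np.zeros(len(alpha)), jac=True,
#                args=(alpha, X, F, sigma2, kappa), method="BFGS")
# h_star = res.x
```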
@LinAlg Hi, can you explain in more detail? Thank you very much!
– Qing, Dec 10 '18 at 21:23
What's this $h^*$? If it's the $\text{argmax}$ of the given expression, shouldn't it just be $h$?
– I like Serena, Dec 10 '18 at 22:00
1 Answer
A covariance matrix is symmetric, so $F' = F$, and then $XFX'$ is symmetric for any $X$. The diagonal matrix $D$ is symmetric as well, so their sum is too. In particular, $\Sigma' = \Sigma$.
When $h$ is optimal, each partial derivative of the expression must be $0$. Starting with the first:
$$\frac{\partial}{\partial h_1}\left(h'\alpha - \frac{\kappa}{2}h'\Sigma h\right)
= e_1'\alpha - \frac{\kappa}{2}e_1'\Sigma h - \frac{\kappa}{2}h'\Sigma e_1
= e_1'\alpha - \kappa\, e_1'\Sigma h
= \alpha_1 - \kappa\, \sigma_1 h = 0
$$
where $\sigma_1$ is the first row vector of $\Sigma$ (the two middle terms merge because $\Sigma$ is symmetric).
Extending to the full gradient:
$$\nabla\left(h'\alpha - \frac{\kappa}{2}h'\Sigma h\right) = \alpha - \kappa\Sigma h = 0$$
We can now find $h^*$ by solving
$$\Sigma h^* = \frac{1}{\kappa}\alpha \implies h^* = \frac{1}{\kappa}\Sigma^{-1}\alpha.$$
This can be done with, for instance, Gaussian elimination, or with a more robust, advanced method.
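As a concrete illustration of this last step, here is a minimal NumPy/SciPy sketch (names are illustrative). It solves the symmetric positive-definite system with a Cholesky-based solver rather than plain Gaussian elimination; note that it does form the $n \times n$ matrix, which the comments below address.

```python
import numpy as np
from scipy.linalg import solve

def h_star_direct(alpha, X, F, sigma2, kappa):
    """Solve Sigma h* = alpha / kappa with Sigma = X F X' + diag(sigma2).

    Forms Sigma explicitly: O(n^2) memory and O(n^3) time."""
    Sigma = X @ F @ X.T + np.diag(sigma2)
    # assume_a='pos' selects a Cholesky factorization (Sigma is SPD)
    return solve(Sigma, alpha / kappa, assume_a='pos')
```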
– answered by I like Serena, Dec 11 '18 at 7:52
Thank you so much!
– Qing, Dec 11 '18 at 12:55
@Qing how does this answer your question if you do not want to compute $\Sigma$? Consider using the generalized Sherman-Morrison-Woodbury formula.
– LinAlg, Dec 11 '18 at 17:45
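For reference, the Sherman-Morrison-Woodbury identity gives
$$(XFX' + D)^{-1} = D^{-1} - D^{-1}X\left(F^{-1} + X'D^{-1}X\right)^{-1}X'D^{-1},$$
which replaces the $n \times n$ solve with a $k \times k$ one. A minimal NumPy sketch along these lines (function and variable names are my own; $F$ is assumed invertible):

```python
import numpy as np

def h_star_woodbury(alpha, X, F, sigma2, kappa):
    """Compute h* = Sigma^{-1} alpha / kappa without forming the n x n Sigma.

    Cost is O(n k^2 + k^3) instead of O(n^3)."""
    d_inv = 1.0 / sigma2                                 # D^{-1}, stored as a vector
    u = d_inv * alpha                                    # D^{-1} alpha
    C = np.linalg.inv(F) + X.T @ (d_inv[:, None] * X)    # k x k capacitance matrix
    w = d_inv * (X @ np.linalg.solve(C, X.T @ u))        # Woodbury correction term
    return (u - w) / kappa
```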
@LinAlg, we don't actually need to compute $\Sigma$. Since $F$ is a covariance matrix, we're talking about a probability cloud around a mean. Statistically, the most likely solution comes from substituting $D$ for $\Sigma$. I only realized this just now, though. My answer was just to solve the optimization problem with a solution that is as simple as possible.
– I like Serena, Dec 11 '18 at 18:45
@IlikeSerena there are many ways to cope with uncertainty; just inserting the most likely parameter value is not always the best strategy.
– LinAlg, Dec 11 '18 at 18:56
@LinAlg I think you are right; it's not solving the main issue with $\Sigma$ here, though.
– Qing, Dec 11 '18 at 21:36