Intuition behind logloss function
I have difficulty understanding the intuition behind the log-loss function, since it seems to completely ignore negative examples where y = 0. The images below visualize my question to some extent. Your advice will be appreciated!
[image: log-loss]
asked Nov 27 at 8:40 by user8270077
I don't know what you are trying to do; however, read en.wikipedia.org/wiki/… on cross-entropy for logistic regression. Your loss is not specified correctly, if this is what you intended to do.
– Cowboy Trader, Nov 27 at 8:48
1 Answer
The formula you used seems to be
$$
H(X) = -P(X)\log P(X),
$$
the definition of entropy. You seem to be asking about cross-entropy loss, also known as log-loss, which is defined as
$$
L(y, \hat y) = \underbrace{-y \log(\hat y)}_{\text{when } y=1} \; \underbrace{-(1-y) \log(1-\hat y)}_{\text{when } y=0}
$$
where $y \in \{0, 1\}$ is the label and $\hat y$ is the predicted probability of the label. When $y = 0$, only the second term is active, so negative examples are not ignored: their contribution is $-\log(1-\hat y)$. The loss is zero for perfect classifications, $y = \hat y = 1$ or $y = \hat y = 0$, and grows without bound as the predicted probability moves toward the wrong label.
answered Nov 27 at 9:18, edited Nov 27 at 9:27 – Tim♦
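To see concretely that negative examples are not ignored, here is a minimal Python sketch of the per-observation log-loss formula above (the labels and predicted probabilities in the example calls are made up for illustration):

```python
import math

def log_loss(y, y_hat, eps=1e-15):
    """Per-observation log-loss (binary cross-entropy).

    y      -- true label, 0 or 1
    y_hat  -- predicted probability that y = 1
    eps    -- clip predictions away from 0 and 1 to avoid log(0)
    """
    y_hat = min(max(y_hat, eps), 1 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# A confident, correct prediction on a negative example (y = 0) gives a small loss ...
print(log_loss(0, 0.1))   # ~0.105, from -log(1 - 0.1)
# ... while a confident, wrong prediction on the same negative example is punished heavily.
print(log_loss(0, 0.9))   # ~2.303, from -log(1 - 0.9)
# Positive examples behave symmetrically.
print(log_loss(1, 0.9))   # ~0.105, from -log(0.9)
```

The `eps` clipping is there only to keep the loss finite when a prediction is exactly 0 or 1, similar in spirit to what library implementations such as scikit-learn's `log_loss` do.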