Help Understanding Difference in P-Value & Critical Value Results
I'd appreciate help in understanding how changing the significance level affects the results of the t-test.
I have conducted an experiment where a group of 15 participants took a test, played a game, and took the original test again. The data set follows:
Round 1 (Before Game) Scores: 6, 4, 7, 8, 12, 6, 7, 5, 11, 4, 7, 1, 6, 10, 4
Round 2 (After Game) Scores: 2, 3, 7, 11, 11, 9, 7, 12, 5, 15, 11, 11, 7, 4, 7
mean test score before game play: 6.53
mean test score after game play: 8.13
Accordingly, I formulated a null hypothesis that game play does not affect test scores and an alternative hypothesis that game play increases scores (see below). Using the data and R, I calculated the t-statistic, critical value, and p-value:
$H_0: \mu_0 = 6.53$ and $H_1: \mu_1 > 6.53$
$\alpha = 0.05,\ \mu_0 = 6.53,\ \overline{x} = 8.13,\ \sigma = 3.70,\ n = 15$
$$ t = \frac{8.13 - 6.53}{\frac{3.70}{\sqrt{15}}} = 1.67 $$
Critical value = 1.76 and p-value = 0.94
T-value < critical value $\to$ $1.67 < 1.76 \therefore$ accept $H_0$
$p\text{-value} > \alpha \to 0.94 > 0.05 \therefore$ accept $H_0$
But when I re-calculate with an $\alpha$ of 0.1, the critical value changes to 1.35, while the p-value stays the same at 0.94. At this point, the accept/reject decision diverges depending on which comparison is used. Did I make a mistake in the calculation, or am I misunderstanding some other factor(s)? Thanks.
statistics hypothesis-testing
Your $p$-value should be $0.06$, not $0.94$. The $p$-value for a one-sided $t$-test with alternative hypothesis that $\mu$ is greater than the hypothesized population mean is the probability that a random sample mean (from a normal distribution of sample means, with the sample standard deviation as an estimate of the population standard deviation, and under the null hypothesis) is greater than your sample mean. $p=0.94$ is the probability that a random sample mean is less than your sample mean. $p=0.06$ is significant if $\alpha=0.1$, but not if $\alpha=0.05$.
– Steve Kass
Apr 8 '16 at 1:28
@SteveKass: Thanks for pointing out the error. I had mistakenly set the R function pt's argument "lower.tail" to "TRUE."
– Ari
Apr 8 '16 at 14:33
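For reference, a minimal R sketch of the corrected calculation (a reconstruction, not the original post's exact calls, assuming the one-sample setup above with $t = 1.67$ and $df = n - 1 = 14$):

t_stat <- 1.67
df     <- 14
pt(t_stat, df, lower.tail = FALSE)   # one-sided p-value, about 0.06
qt(0.95, df)                         # critical value at alpha = 0.05, about 1.76
qt(0.90, df)                         # critical value at alpha = 0.10, about 1.35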
1 Answer
You have a paired design: it is the same $n = 15$ students taking the test both times. Let's call the first score for the $i$th subject $X_i$ and the second score $Y_i.$ You want to do a one-sample z-test of the differences $D_i = Y_i - X_i$ (second score minus first, so a positive difference means improvement).
From the averages, $\bar D = \bar Y - \bar X = 8.13 - 6.53 = 1.60.$
The null hypothesis is $H_0: \mu_D = 0$ (no difference after playing the game) and $H_a: \mu_D > 0$ (better scores after playing the game).
The test statistic is $$Z = \frac{\bar D - 0}{\sigma/\sqrt{n}} = \frac{1.60}{3.70/\sqrt{15}} = 1.67.$$ The critical value at the 5% level is the value $c = 1.645$ that cuts 5% from the upper tail of the standard normal curve.
Because $Z = 1.67 > c = 1.645,$ you reject the null hypothesis and conclude that the game might have enabled the students to get better scores on the second test. (Or maybe they learned something from taking the first test!)
However, $Z$ exceeds $c$ by only a little, and the evidence is not 'strong'. If you subject the findings to a more stringent standard and test at the 1% level, then the new critical value is $c' = 2.326$, which cuts 1% from the upper tail of the standard normal distribution. By this more stringent standard, you do not reject the null hypothesis.
The P-value is the probability to the right of $Z = 1.67$ under the standard normal curve. That probability is 0.047.
With the p-value, we can test at any desired level of significance. In particular, at the 5% level, we reject because $0.047 < 0.05$. However, at the 1% level, we do not reject because $0.047 > 0.01.$
In case it is useful, I pasted output below (somewhat abridged) from doing this
test in Minitab statistical software:
One-Sample Z
Test of mu = 0 vs > 0
The assumed standard deviation = 3.7
N Mean SE Mean Z P
15 1.600 0.955 1.67 0.047
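The same one-sample z computation can be reproduced in R; a sketch using the summary numbers above, with $\sigma = 3.70$ treated as known:

dbar  <- 8.13 - 6.53                 # mean difference, 1.60
sigma <- 3.70                        # treated as the known SD
n     <- 15
z <- (dbar - 0) / (sigma / sqrt(n))  # test statistic, about 1.67
qnorm(0.95)                          # 5% critical value, about 1.645
qnorm(0.99)                          # 1% critical value, about 2.326
pnorm(z, lower.tail = FALSE)         # one-sided p-value, about 0.047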
Thanks for the insightful answer; it was very helpful. I'd appreciate it if you could clarify the following: 1. Why did you use the z-statistic as opposed to the t-statistic? 2. In your answer, you used a standard deviation of 3.7, I believe you used this since I had not provided the raw scores. Had I, the correct standard deviation would have been the square root of the sum of the variances of the pre and post test scores, right?
– Ari
Apr 8 '16 at 14:08
@Ari: (1) z instead of t because the population SD $\sigma = 3.70$ is taken as known. (2) If $\sigma$ needed to be estimated, one would use $S_D$, the sample SD of the differences $D_i.$ Then under $H_0$, the test statistic $T = \bar D/(S_D/\sqrt{n})$ would have Student's t distribution with $df = n - 1 = 14$, and the critical value for a test at the 5% level would be 1.761 (one-sided alternative).
– BruceET
Apr 9 '16 at 4:27
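Following that comment, here is a sketch of how the paired $t$-test could be run in R on the raw scores given in the question. It estimates the SD of the differences from the data rather than using 3.70, so the resulting statistic need not match the z-calculation above:

before <- c(6, 4, 7, 8, 12, 6, 7, 5, 11, 4, 7, 1, 6, 10, 4)
after  <- c(2, 3, 7, 11, 11, 9, 7, 12, 5, 15, 11, 11, 7, 4, 7)
# Paired, one-sided test of H0: mu_D = 0 vs Ha: mu_D > 0, where D = after - before
t.test(after, before, paired = TRUE, alternative = "greater")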