Confidence intervals are usually computed around the sample statistic (because the true population statistic is unknown). This can be seen in the two-sided interval: observe how the interval is symmetric around the sample mean 53.7, not around \(\mu_0 = 50\).
For one-sided inference, the direction (less or greater) must be chosen a priori from theory or study design (before observing the sample). Therefore, we report the two-sided interval by default and only use a one-sided interval when that direction was pre-specified.
The left-sided confidence interval contains the Null value \(\mu_0 = 50\). Therefore we fail to reject the Null Hypothesis which states that \(\mu \leq \mu_0 = 50\) (this assumes that we are testing a one-sided hypothesis).
The two-sided confidence interval also contains the Null value \(\mu_0 = 50\). Therefore we fail to reject the Null Hypothesis which states that \(\mu = \mu_0 = 50\) (assuming we are using a two-sided hypothesis test).
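This duality between the two-sided confidence interval and the two-sided test can be verified directly in R; a minimal sketch, assuming the ten observations used in this chapter's example:

```r
# The ten observations used in this chapter's example
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)

res <- t.test(x, mu = 50, alternative = "two.sided", conf.level = 0.95)

# The Null value lies inside the 95% confidence interval ...
inside_ci <- res$conf.int[1] <= 50 && 50 <= res$conf.int[2]

# ... exactly when the p-value exceeds alpha = 0.05
inside_ci == (res$p.value > 0.05)  # TRUE
```

Whenever the interval excludes \(\mu_0\), the corresponding two-sided test rejects the Null Hypothesis at the same \(\alpha\), and vice versa.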
Note: the local script shown later in this section uses a small illustrative vector. The embedded app example above uses a stored dataset and can produce different numeric output.
This R module on the public website contains the following fields:
Data: a univariate dataset which represents quantitative data
Alternative: parameter which defines the type of Hypothesis Test to be computed. This parameter can be set to the following values:
two.sided
less
greater
Confidence: this is \(1 - \alpha\) (i.e. 1 minus the chosen type I error)
Null Hypothesis: this is the value of \(\mu_0\) against which the hypothesis is tested (in this case \(\mu_0 = 50\))
114.3.3 Output
The p-value for the two-sided hypothesis is larger than the chosen type I error, i.e. \(p = 0.1085 > 0.05\). As a consequence we cannot reject the Null Hypothesis.
For reporting, also provide an effect size for the mean shift (Cohen 2013), e.g.
\[
d = \frac{\bar{x} - \mu_0}{s}.
\]
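This effect size can be computed directly from the sample; a minimal R sketch using the example data:

```r
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)
mu0 <- 50

# Cohen's d for a one-sample mean shift: (x-bar - mu0) / s
d <- (mean(x) - mu0) / sd(x)
d  # approximately 0.56
```

Equivalently, \(d = t/\sqrt{N}\): with \(t = 1.7818\) and \(N = 10\) this also gives roughly 0.56, conventionally interpreted as a medium effect.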
When we specify Alternative = "greater" then the Alternative Hypothesis \(\text{H}_A: \mu > \mu_0\) is used.
This test corresponds to the test with the “left-sided” confidence interval that was discussed in the previous example. Since the p-value is larger than the chosen type I error, we cannot reject the Null Hypothesis. In the previous example we reached the same conclusion because the Null value was contained in the confidence interval.
The type I error \(\alpha\) must be fixed before the analysis. Changing \(\alpha\) after looking at the p-value is not valid inference.
Choosing Alternative = "less" corresponds to testing \(\text{H}_A: \mu < \mu_0\). This direction must be justified a priori (before looking at the sample). If the direction is selected post hoc, type I error control is lost.
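Both one-sided variants can be run on the example data by switching the alternative argument; a minimal sketch (since \(t > 0\) here, the "greater" p-value is half the two-sided one):

```r
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)

# H_A: mu > 50
p_greater <- t.test(x, mu = 50, alternative = "greater")$p.value
# H_A: mu < 50
p_less    <- t.test(x, mu = 50, alternative = "less")$p.value

c(p_greater, p_less)  # ~0.054 and ~0.946
```

Since \(p = 0.054 > 0.05\) even in the pre-specified "greater" direction, neither one-sided test rejects the Null Hypothesis here.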
To compute the One Sample t-Test on your local machine, the following script can be used in the R console:
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)
par1 <- 'two.sided'  # type of test
par2 <- 0.95         # Confidence
par3 <- 50           # Null Hypothesis

t.test(x, mu = par3, alternative = par1, conf.level = par2)
One Sample t-test
data: x
t = 1.7818, df = 9, p-value = 0.1085
alternative hypothesis: true mean is not equal to 50
95 percent confidence interval:
49.00243 58.39757
sample estimates:
mean of x
53.7
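The numbers in this printout can also be accessed programmatically: t.test() returns an object of class "htest" whose components hold the individual values. A minimal sketch:

```r
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)
res <- t.test(x, mu = 50, alternative = "two.sided", conf.level = 0.95)

res$statistic  # t = 1.7818
res$parameter  # df = 9
res$p.value    # 0.1085
res$conf.int   # 49.00243 58.39757
res$estimate   # mean of x = 53.7
```

This is convenient when the test result feeds into further computation or reporting rather than being read off the console.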
114.4 Assumptions
The 95% confidence interval contains the true population mean in 95% of simple random samples that are (independently) drawn from the population. If we assume that the samples are genuinely random, we know that the sample mean is approximately normally distributed once the number of observations is large enough; this holds regardless of the population distribution because of the Central Limit Theorem. If we are not sure whether the sample is a genuine simple random sample, we have to explicitly assume that the underlying population distribution is normal.
The number of sample observations is very small (\(N = 10\)). This implies that we have to assume normality in the population and that we have to use the t-Distribution. When the sample is large enough, the distribution of the sample mean converges to normality. This, however, does not mean that we have to switch to another test (the so-called Z-Test). The reason is simple: the t-Test also gives the correct answer in the large-sample case, because the t-Distribution converges to the Normal Distribution as the Degrees of Freedom increase.
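This convergence is easy to illustrate by comparing critical values; a minimal R sketch:

```r
# Two-sided critical values at alpha = 0.05 (97.5% quantiles)
qt(0.975, df = 9)      # 2.262 -- small sample: wider than the normal value
qt(0.975, df = 100)    # 1.984
qt(0.975, df = 10000)  # 1.960
qnorm(0.975)           # 1.960 -- the limiting Z-Test critical value
```

With only 9 Degrees of Freedom, the t critical value is noticeably larger than 1.96, which is exactly the extra caution the t-Test builds in for small samples.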
114.5 Alternatives
There are several alternatives for the One Sample t-Test:
The Wilcoxon signed-rank test
Notched Boxplots
Bayesian tests
The Bootstrap Plot for Central Tendency
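For instance, the Wilcoxon signed-rank test checks the same kind of location hypothesis without assuming a normal population; a minimal sketch on the example data (one observation equals the Null value and there are tied differences, so R falls back to a normal approximation and prints warnings):

```r
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)

# Nonparametric counterpart of the One Sample t-Test
res_w <- wilcox.test(x, mu = 50, alternative = "two.sided")
res_w$p.value  # also > 0.05 here, so the conclusion agrees with the t-Test
```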
The confidence intervals for the mean are used in the following methods:
Trimmed Mean
Winsorized Mean
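These robust location estimates are easy to compute in base R; a minimal sketch on the example data (the Winsorized mean is written by hand here, clamping one observation at each end):

```r
x <- c(50, 48, 44, 56, 61, 52, 53, 55, 67, 51)

# 10% trimmed mean: drop the smallest and the largest observation
mean(x, trim = 0.1)  # 53.25

# 10% Winsorized mean: clamp the extremes to their nearest neighbours
x_sorted <- sort(x)
x_wins <- pmin(pmax(x, x_sorted[2]), x_sorted[length(x) - 1])
mean(x_wins)  # 53.5
```

Both estimates downweight the outlying value 67 and therefore sit slightly below the ordinary sample mean of 53.7.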
Cohen, Jacob. 2013. Statistical Power Analysis for the Behavioral Sciences. Academic Press.