Our alternate hypothesis would be HA: resting heart rate_Z ≠ resting heart rate_control. These hypotheses are set up so that exactly one of them must be true and the other must be false; they cannot logically both be true or both be false. We expect to disprove the null hypothesis, in which case the alternate hypothesis must be true. We could use a statistical test such as a t-test to find out whether the difference between the resting heart rates is significant. If there is a significant difference, and if we have designed and carried out our experiment carefully so that the only difference between our two groups of patients is that one group was given the drug Z, then we can conclude that resting heart rates have changed and that drug Z is the only possible cause. We must remember that this does not "prove" the alternate hypothesis; it only strongly supports it.

Two-tailed Null and Alternate Hypotheses
The Null and Alternate Hypotheses discussed above are two-tailed. That is because the alternate hypothesis could be true if resting heart rate_Z is less than resting heart rate_control or if resting heart rate_Z is greater than resting heart rate_control. Therefore the tail of the distribution of "Z" heart rates could overlap the upper or the lower tail of the distribution of control heart rates and still be significantly different in either case. These two-tailed hypotheses are appropriate if you are trying to find out whether the two treatments are different, but have no expectation that the control values will be less than or greater than the experimental values. Given the two-tailed hypotheses above, if you do a t-test to compare the means you must do a two-tailed t-test.
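As a sketch of this procedure, here is a two-tailed independent-samples t-test in Python. The heart-rate numbers are invented purely for illustration, and scipy is assumed to be available:

```python
from scipy import stats

# Hypothetical resting heart rates (bpm); the group names mirror the
# drug-Z example above, but the numbers are made up for this sketch.
control = [72, 68, 75, 71, 69, 74, 70, 73, 67, 71]
drug_z  = [65, 63, 68, 70, 62, 66, 64, 67, 69, 61]

# Two-tailed independent-samples t-test:
# H0: the group means are equal; HA: they differ in either direction.
t_stat, p_value = stats.ttest_ind(drug_z, control)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the means differ significantly.")
else:
    print("Fail to reject H0.")
```

Note that `ttest_ind` returns a two-sided p-value by default, which matches the two-tailed hypotheses above.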
For example, when the sample size is 20 (19 degrees of freedom), the .05 critical t-value for a two-tailed test is ±2.093, but the critical t-value for a one-tailed test is +1.729 (when the alternative hypothesis predicts the sample mean is greater than the population mean) or -1.729 (when the alternative hypothesis predicts that the sample mean is less than the population mean).
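These critical values can be reproduced from the t-distribution's quantile function; a minimal sketch using scipy, with df = 19 for a sample of 20:

```python
from scipy import stats

df = 19  # sample size 20 -> 19 degrees of freedom

# A two-tailed test at alpha = .05 splits the rejection region,
# putting .025 in each tail.
two_tailed = stats.t.ppf(1 - 0.05 / 2, df)   # ~2.093

# A one-tailed test puts the whole .05 in a single tail.
one_tailed = stats.t.ppf(1 - 0.05, df)       # ~1.729

print(f"two-tailed critical t: +/-{two_tailed:.3f}")
print(f"one-tailed critical t: +{one_tailed:.3f} (or -{one_tailed:.3f})")
```

This is why a one-tailed test is more easily significant in its predicted direction: its critical value is smaller in magnitude.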
We cannot prove a null hypothesis; we can only fail to reject it.
We reject H0 because the test statistic exceeds the critical value. We have statistically significant evidence at α = 0.05 to show that H0 is false, i.e., that treatment and outcome are not independent (they are dependent, or related). This is the same conclusion we reached when we conducted the test using the Z test above. With a dichotomous outcome and two independent comparison groups, Z² = χ²! Again, in statistics there are often several approaches that can be used to test hypotheses.
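The identity Z² = χ² can be checked numerically. A sketch with an invented 2×2 table, using scipy's chi-square test without Yates' continuity correction so that it matches the uncorrected two-proportion z-test:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = treatment/control, cols = success/failure.
table = np.array([[30, 20],
                  [18, 32]])

# Chi-square test WITHOUT continuity correction, to match the z-test.
chi2, p_chi2, dof, _ = stats.chi2_contingency(table, correction=False)

# Two-proportion z-test using the pooled proportion under H0.
n1, n2 = table.sum(axis=1)
p1, p2 = table[0, 0] / n1, table[1, 0] / n2
p_pool = table[:, 0].sum() / (n1 + n2)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"z^2 = {z**2:.4f}, chi^2 = {chi2:.4f}")  # the two values coincide
```

The two statistics agree because a 1-df chi-square variable is exactly the square of a standard normal variable.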
Technically, no, a null hypothesis cannot be proven. For any fixed, finite sample size, there will always be some small but nonzero effect size for which your statistical test has virtually no power. More practically, though, you can prove that you're within some small epsilon of the null hypothesis, such that deviations less than this epsilon are not practically significant.
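This "within some small epsilon of the null" idea is the logic of equivalence testing, often done as TOST (two one-sided tests). A minimal one-sample sketch with simulated data; the equivalence margin of ±0.4 is an arbitrary choice for illustration:

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, epsilon, alpha=0.05):
    """Two one-sided tests (TOST): conclude |mean| < epsilon when BOTH
    one-sided tests reject, i.e. when the larger of the two p-values
    is below alpha."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    # Test 1 -- H0: mean <= -epsilon  vs  HA: mean > -epsilon
    t_lower = (x.mean() + epsilon) / se
    p_lower = stats.t.sf(t_lower, n - 1)
    # Test 2 -- H0: mean >= +epsilon  vs  HA: mean < +epsilon
    t_upper = (x.mean() - epsilon) / se
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper)

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # true effect is zero
p = tost_one_sample(data, epsilon=0.4)
print(f"TOST p = {p:.4f}")  # small p: mean is within +/-0.4 of zero
```

Note the reversal: in TOST the "effect is at least epsilon" claims play the role of the null, so rejecting them is positive evidence of practical equivalence, not merely a failure to find a difference.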
Yes, it is possible to prove the null--in exactly the same sense that it is possible to prove any alternative to the null. In a Bayesian analysis, it is perfectly possible for the odds in favor of the null versus any of the proposed alternatives to become arbitrarily large. Moreover, it is false to assert, as some of the answers above do, that one can only prove the null if the alternatives to it are disjoint (do not overlap with the null).

In a Bayesian analysis every hypothesis has a prior probability distribution, which spreads a unit mass of prior probability out over the proposed alternatives. The null hypothesis puts all of the prior probability on a single alternative. In principle, alternatives to the null may put all of the prior probability on some non-null alternative (on another "point"), but this is rare. In general, alternatives hedge: they spread the same mass of prior probability out over other alternatives, either to the exclusion of the null alternative or, more commonly, including it.

The question then becomes which hypothesis puts the most prior probability where the experimental data actually fall. If the data fall tightly around where the null says they should fall, then it will be the odds-on favorite (among the proposed hypotheses) EVEN THOUGH IT IS INCLUDED IN (NESTED IN, NOT MUTUALLY EXCLUSIVE WITH) THE ALTERNATIVES TO IT. The belief that it is not possible for a nested alternative to be more likely than the set in which it is nested reflects a failure to distinguish between probability and likelihood. While it is impossible for a component of a set to be less probable than the entire set, it is perfectly possible for the posterior likelihood of a component of a set of hypotheses to be greater than the posterior likelihood of the set as a whole.
The posterior likelihood of an hypothesis is the product of the likelihood function and the prior probability distribution that the hypothesis posits. If an hypothesis puts all of the prior probability in the right place (e.g., on the null), then it will have a higher posterior likelihood than an hypothesis that puts some of the prior probability in the wrong place (not on the null).
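A toy Bayesian calculation illustrates this. For a coin-flip experiment, compare a point null that puts all prior mass on θ = 0.5 against an alternative that hedges by spreading prior mass uniformly over (0, 1). The counts (52 heads in 100 flips) are invented for illustration:

```python
from math import comb

n, k = 100, 52  # hypothetical data: 52 heads in 100 flips

# H0 puts all prior probability on theta = 0.5, so its marginal
# likelihood is just the binomial probability of the data at 0.5.
marginal_h0 = comb(n, k) * 0.5**n

# H1 spreads prior probability uniformly over theta in (0, 1); its
# marginal likelihood integrates the binomial likelihood against that
# prior, which works out to 1 / (n + 1).
marginal_h1 = 1 / (n + 1)

bayes_factor = marginal_h0 / marginal_h1
print(f"BF(H0:H1) = {bayes_factor:.2f}")  # > 1, so the data favor H0
```

Because the data fall close to where the null says they should, the point null beats the diffuse alternative even though θ = 0.5 is nested inside it--exactly the probability-versus-likelihood point made above.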
If our statistical analysis shows that the p-value is below the cut-off value we have set (e.g., either 0.05 or 0.01), we reject the null hypothesis and accept the alternative hypothesis. Alternatively, if the p-value is above the cut-off value, we fail to reject the null hypothesis and cannot accept the alternative hypothesis. You should note that you cannot accept the null hypothesis; you can only find evidence against it.