not rejecting the null hypothesis when the alternative is true.

Futility designs have been proposed and used by constructing classical (non-Bayesian) hypothesis tests such that the decision of therapeutic interest is to accept the null hypothesis. A consequence is that the probability of accepting (failing to reject) the null when the null is false is unknown. Reversal of the conventional null and alternative hypotheses is not required either to demonstrate futility/nonsuperiority or to align Type I and Type II errors with their consequences. Conventional methods that test whether the response to the investigational agent is superior to a comparative control (a superiority trial) are preferable, and in the case that the null hypothesis is rejected, the associated Type I error is known.

Type I error (False positive): The null hypothesis is rejected when it is true.
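This definition can be seen directly in simulation. The sketch below is a minimal illustration (the coin-flip setup, n = 100, and the 5% cutoff are assumptions chosen for the example, not from the text): when the null hypothesis is actually true, a test at the 0.05 level commits a Type I error in roughly 5% of experiments.

```python
import math
import random

random.seed(7)

def rejects_null(k, n, p0=0.5, z_crit=1.96):
    # Two-sided normal-approximation test of H0: p = p0 at alpha ~= 0.05.
    se = math.sqrt(p0 * (1 - p0) / n)
    return abs(k / n - p0) > z_crit * se

# Simulate experiments where the null is TRUE (a fair 50/50 coin),
# so every rejection is a false positive.
trials, n = 2000, 100
false_positives = sum(
    rejects_null(sum(random.random() < 0.5 for _ in range(n)), n)
    for _ in range(trials)
)
rate = false_positives / trials
print("Type I error rate:", rate)  # close to the chosen alpha of 0.05
```

The observed false-positive rate hovers near the significance level by construction: alpha is exactly the Type I error rate the test is designed to have.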

Another way your data can fool you is when you don't reject the null hypothesis, even though it's not true. If the true proportion of female chicks is 51%, the null hypothesis of a 50% proportion is not true, but you're unlikely to get a significant difference from the null hypothesis unless you have a huge sample size. Failing to reject the null hypothesis, even though it's not true, is a "false negative" or "Type II error." This is why we never say that our data shows the null hypothesis to be true; all we can say is that we haven't rejected the null hypothesis.
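The sample-size intuition above can be checked by simulation. The sketch below is a rough illustration (the sample sizes, the 51% true proportion, and the normal-approximation test are assumptions for the example): with a true proportion of 51%, a modest sample almost never rejects the 50% null, while even a huge sample rejects it only about half the time.

```python
import math
import random

random.seed(42)

def rejects_null(k, n, p0=0.5, z_crit=1.96):
    """Two-sided normal-approximation test of H0: p = p0 at alpha ~= 0.05."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return abs(k / n - p0) > z_crit * se

def power(n, p_true=0.51, trials=500):
    """Fraction of simulated experiments that reject H0 when p_true is the truth."""
    hits = sum(
        rejects_null(sum(random.random() < p_true for _ in range(n)), n)
        for _ in range(trials)
    )
    return hits / trials

p_small = power(100)    # barely above alpha: Type II errors dominate
p_large = power(10000)  # still detects the 1% difference only about half the time
print("power at n=100:   ", p_small)
print("power at n=10000: ", p_large)
```

Every non-rejection in these simulations is a Type II error, since the null really is false; the simulation just makes visible how common that error is at small sample sizes.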

The failure to reject does not imply the null hypothesis is true.

Type II error (False negative): The null hypothesis is not rejected when it is false.

It is often a statement of “no difference” or “no effect.”

The null hypothesis is not rejected unless there is convincing sample evidence that it is false.

Alternative Hypothesis
Type I & Type II
If we reject Ho when it is true, this is a Type I error
If we do not reject Ho when it is false, this is a Type II error

The alternative, or research, hypothesis, denoted Ha, is a statement that will be accepted only if there is convincing sample evidence that it is true.

How to remember?
Mid Air Collision
Recall that it is hoped that the mean alert time, μ, using the new display panel is less than eight seconds.

Formulate the null hypothesis Ho and the alternative hypothesis Ha that would be used to attempt to provide evidence that μ is less than eight seconds.
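One standard formulation (the usual one-sided setup for a claim of this kind) is:

Ho: μ ≥ 8 seconds (the new display panel is no faster)
Ha: μ < 8 seconds (the mean alert time is less than eight seconds)

The claim the researchers hope to support goes in the alternative, so rejecting Ho provides evidence, at a known Type I error rate, that the mean alert time is under eight seconds.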

Does failure to reject the null hypothesis support it?

A Bayesian would insist that you put in numbers just how likely you think the null hypothesis and various values of the alternative hypothesis are, before you do the experiment, and I'm not sure how that is supposed to work in practice for most experimental biology. But the general concept is a valuable one: as Carl Sagan summarized it, "Extraordinary claims require extraordinary evidence."


A related criticism is that a significant rejection of a null hypothesis might not be biologically meaningful, if the difference is too small to matter. For example, in the chicken-sex experiment, having a treatment that produced 49.9% male chicks might be significantly different from 50%, but it wouldn't be enough to make farmers want to buy your treatment. These critics say you should estimate the effect size and put a confidence interval on it, not estimate a P value. So the goal of your chicken-sex experiment should not be to say "Chocolate gives a proportion of males that is significantly less than 50% (P=0.015)" but to say "Chocolate produced 36.1% males with a 95% confidence interval of 25.9 to 47.4%." For the chicken-feet experiment, you would say something like "The difference between males and females in mean foot size is 2.45 mm, with a confidence interval on the difference of ±1.98 mm."
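A confidence interval like the one quoted above can be computed directly. The sketch below implements the Wilson score interval for a proportion; the counts (26 males out of 72 chicks) are hypothetical numbers chosen to roughly match the 36.1% figure, not data from the text, so the resulting interval will only approximate the one quoted.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p_hat = k / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts: 26 males out of 72 chicks (~36.1% males).
lo, hi = wilson_ci(26, 72)
print(f"36.1% males, 95% CI: {lo:.1%} to {hi:.1%}")
```

The Wilson interval is one common choice here; Clopper-Pearson (exact) intervals are a more conservative alternative and give slightly wider bounds for the same counts.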


In the olden days, when people looked up P values in printed tables, they would report the results of a statistical test as "P<0.05", "P<0.01", "P>0.10", etc. Nowadays, almost all computer statistics programs give the exact P value resulting from a statistical test, such as P=0.029, and that's what you should report in your publications. You will conclude that the results are either significant or they're not significant; they either reject the null hypothesis (if P is below your pre-determined significance level) or don't reject the null hypothesis (if P is above your significance level). But other people will want to know if your results are "strongly" significant (P much less than 0.05), which will give them more confidence in your results than if they were "barely" significant (P=0.043, for example). In addition, other researchers will need the exact P value if they want to combine your results with others into a meta-analysis.
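An exact P value of the kind described above can be computed without printed tables. The sketch below implements a two-sided exact binomial test by hand (the counts, 26 successes in 72 trials, are hypothetical; in practice a library routine such as scipy.stats.binomtest would do this for you).

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success probability p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def exact_binom_p(k, n, p0=0.5):
    """Two-sided exact binomial P value: sum the probabilities of all outcomes
    no more likely than the observed count k under H0: p = p0."""
    observed = binom_pmf(k, n, p0)
    return sum(
        binom_pmf(i, n, p0)
        for i in range(n + 1)
        if binom_pmf(i, n, p0) <= observed * (1 + 1e-9)
    )

# Hypothetical experiment: 26 "successes" in 72 trials under a 50% null.
p = exact_binom_p(26, 72)
print(f"P = {p:.3f}")  # report the exact value, not just "P < 0.05"
```

Reporting the exact value (say, P=0.025) rather than "P<0.05" lets readers judge the strength of the evidence and lets later researchers fold the result into a meta-analysis.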