In science, the success of an experiment is most often judged by a measure known as statistical significance.

For a result to be considered significant, the difference observed in an experiment (between groups of people or plants, for example) must be quite unlikely to arise if no real difference exists.

The usual cutoff for "very unlikely" is seeing a difference as big or bigger no more than five percent of the time when no real difference exists.

Statistical significance has thus come to draw a bright line between success and failure in experiments. When a result achieves statistical significance, it is considered a finding worthy of publication in a scientific journal.

**What Does Statistical Significance Have To Offer?**

Most scientific studies today are structured around statistical hypothesis testing. In such a test, the scientist compares the results of an experiment against the null hypothesis that there is no difference between the tested group and a control group.

If a scientist is testing whether a drug can reduce depression, the goal is not to prove directly that the drug works. Rather, it is to gather enough data to reject the hypothesis that it does not.

The scientist makes this comparison with a statistical analysis that yields a P-value: a number between 0 and 1, where P stands for probability.

The value signifies the likelihood that repeating the experiment would yield a difference as big as, or bigger than, the one the scientist actually observed, assuming the drug does not decrease depression. The smaller the P-value, the less likely it is that such a large difference would appear if no real difference exists.
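This definition can be made concrete with a permutation test, a simple way to compute a P-value. The sketch below uses made-up depression-score improvements (illustrative numbers, not real trial data): if the drug truly makes no difference, the group labels are arbitrary, so we shuffle them many times and count how often chance alone produces a gap as big as the one observed.

```python
import random

random.seed(0)

# Hypothetical improvements in depression scores (illustrative only)
drug_group    = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8]
control_group = [3.2, 4.0, 2.9, 4.5, 3.6, 4.2, 3.1, 3.9]

observed = (sum(drug_group) / len(drug_group)
            - sum(control_group) / len(control_group))

# Under the null hypothesis the labels are meaningless, so shuffle them
# and see how often a difference this big (or bigger) appears by chance.
pooled = drug_group + control_group
n_drug = len(drug_group)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = (sum(pooled[:n_drug]) / n_drug
            - sum(pooled[n_drug:]) / (len(pooled) - n_drug))
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

With these numbers the observed gap is large relative to the spread, so very few shuffles match it and the P-value comes out small.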

In scientific parlance, a result is statistically significant if P is less than or equal to 0.05.

Interpreted correctly, P-values can be a useful gauge of how compatible a scientist's experimental results are with the null hypothesis.

**The Problem With P-value**

No matter how small a P-value is, it is simply a probability. It does not mean that the experiment worked, and it does not say whether the difference in the results is big or small.

In simpler terms, it does not say whether the difference has any meaningful impact. In the words of Blake McShane, the 0.05 cutoff has become shorthand for scientific quality.

But science and statistics have never been so simple that they lend themselves to comfortable cutoffs. There is no meaningful difference between a P-value of 0.049 and a P-value of 0.051.
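A tiny sketch makes the arbitrariness of the bright line plain: a hard cutoff turns two nearly identical pieces of evidence into opposite verdicts.

```python
def verdict(p, alpha=0.05):
    """Apply the conventional 'statistically significant' bright line."""
    return "significant" if p <= alpha else "not significant"

# Two experiments with essentially the same strength of evidence
# land on opposite sides of the cutoff.
print(verdict(0.049))  # significant
print(verdict(0.051))  # not significant
```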

**In Conclusion**

No one is trying to eliminate the P-value; it remains a useful statistical tool. Signatories of the Nature manifesto are simply against the convention of declaring a result statistically significant whenever P is less than or equal to 0.05.

Such a limit only gives a false sense of certainty about an experimental result. Statistics is often misunderstood as a way of eliminating uncertainty, when in fact it is about quantifying the degree of uncertainty.
