Earlier this year, the American Statistical Association published another set of position papers on p-values in the journal The American Statistician. They are worth the time it takes to read them, but the tl;dr is that they essentially say: stop using p-values as the sole, binary indicator of how meaningful a scientific result is. Instead of relying on a single, artificially dichotomised value, researchers should use a variety of measures and also consider the context of the data.
I find it rather important that this set of papers appeared, and that papers like these keep appearing, until the over-focus on p < .05 in many fields of science disappears. Currently, even if you try to communicate results with a broad set of measures, one of two things happens: if p-values are present, people focus on them; if they are absent, people ask for them. This creates a false and dangerous over-focus on a single value that lowers the overall quality of the scientific literature and the conclusions drawn from it.
There are numerous issues attached to the use of p < .05 that make alternatives necessary. Below are some of them, though the list is probably not exhaustive:

- The cut-off itself is arbitrary: p = .049 and p = .051 represent almost identical evidence, yet lead to opposite conclusions.
- A p-value depends heavily on sample size: with enough data, even trivially small effects fall below .05.
- A p-value says nothing about the size or practical relevance of an effect.
- A p-value is routinely misread as the probability that the null hypothesis is true, which it is not.
- Making the threshold the gatekeeper of publication invites p-hacking and selective reporting.
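The dependence of p on sample size is easy to demonstrate. The sketch below (plain Python, using a two-sided z-test as a simple approximation; the effect size of 0.1 and the sample sizes are my own illustrative choices) holds a tiny standardized effect constant and only grows the sample. The effect never becomes more meaningful, yet p eventually drops below .05:

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

effect = 0.1  # a tiny standardized effect, held constant throughout

for n in (100, 1_000, 10_000):
    z = effect * math.sqrt(n)  # z-statistic for a one-sample mean test
    p = two_sided_p(z)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n={n:>6}  p={p:.2e}  -> {verdict}")
```

With n = 100 the effect is "not significant" (p ≈ .32); with n = 10,000 the very same effect is "significant" at p ≈ 1.5e-23. Nothing about the data-generating process changed, only the amount of data.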
The issues with p-values have been communicated widely, and many people in science are aware of them. Why, then, is it still common practice to focus on p < .05 in so many disciplines?
I believe this is because the change toward a more complex and nuanced approach is genuinely uncomfortable. As of now, p < .05 suggests a comfortable clarity: something is either significant or not – there is a clear cut-off and a clear interpretation (or so it seems). Asking researchers to move away from that, toward approaches with less clear-cut interpretations, is of course met with resistance and a feeling of helplessness.
Journals should start encouraging submitters to use a variety of measures (and to lay out in the methods section why the chosen measures are appropriate), and reviewers should equally be made aware of the editorial shift away from p < .05. If the upstream requirements change quickly and in a large enough number of important journals, researchers will no longer be tempted to write that a difference between groups was statistically significant (p < .05). However, it should also be kept in mind that many researchers are not statisticians, may have had a poor statistical education, and might feel lost without the familiar threshold. To encourage the change away from p < .05, it is therefore essential that a good number of practical primers appear in journals, offering guidance on alternatives.
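As an illustration of what such a broader report could look like, here is a minimal sketch in plain Python. The example data and sample sizes are hypothetical, and the interval and p-value use a normal approximation for simplicity (with samples this small, a t distribution would be more appropriate in practice). The point is that the mean difference, its confidence interval and a standardized effect size appear alongside the p-value rather than being replaced by it:

```python
import math
import statistics

def summarize(group_a, group_b):
    """Report several complementary measures for a two-group comparison."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = statistics.fmean(group_a), statistics.fmean(group_b)
    v1, v2 = statistics.variance(group_a), statistics.variance(group_b)

    # Pooled standard deviation and standardized effect size (Cohen's d)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp

    # Standard error of the difference; 95% CI via a normal approximation
    diff = m1 - m2
    se = sp * math.sqrt(1 / n1 + 1 / n2)
    ci = (diff - 1.96 * se, diff + 1.96 * se)

    # Two-sided p-value, again via the normal approximation
    p = math.erfc(abs(diff / se) / math.sqrt(2))
    return {"diff": diff, "ci": ci, "d": d, "p": p}

# Hypothetical example data
a = [5.1, 4.9, 5.6, 5.3, 5.0, 5.4, 5.2, 4.8]
b = [4.6, 4.4, 5.0, 4.7, 4.5, 4.9, 4.3, 4.6]
print(summarize(a, b))
```

The particular numbers do not matter; the habit does. A reader sees the magnitude of the difference, its uncertainty and a standardized effect size, and can judge the p-value in that context instead of in isolation.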
When moving away from p-values as the dominant, and sometimes only, measure considered relevant, toward a more holistic approach, everyone involved should take care not to recreate the same situation with another measure. To some extent this has already happened with effect sizes, where people often apply cut-off values to characterise effects as mild, moderate or pronounced without considering the context of the data or the origin of those cut-off values.
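This dichotomisation problem can be made just as concrete as the one for p < .05. The sketch below mechanically applies the widely used conventional cut-offs for Cohen's d (0.2, 0.5 and 0.8, going back to Cohen); nothing beyond those conventional labels is assumed:

```python
def label_effect(d: float) -> str:
    """Mechanically label Cohen's d using the conventional cut-offs."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Two nearly identical effects end up in different categories:
print(label_effect(0.49))  # "small"
print(label_effect(0.51))  # "medium"
```

Whether d = 0.5 is a meaningful effect depends entirely on the field and the data; the labels by themselves say nothing about that, just as crossing p < .05 does not.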
Now, it seems easy to put the blame on researchers: those people in psychology, neuroscience and medicine (and probably many more fields) who don't know anything about statistics. And why is psychology its own field anyway? (This paragraph is obviously not to be taken seriously; I am a psychologist myself.)
One real problem that should not go unmentioned is that it is close to impossible to publish a paper without something at p < .05, which obviously drives practices like p-hacking, overstating the impact of results and not reporting more ambiguous findings. It is even worse if you try to publish null findings. You can have a reasonable hypothesis, well-executed methods and proper data analysis, but if at the end you report that you did not find any effects, good luck getting it published in a reputable journal.
It is understandable, then, that people try to get some p < .05 somewhere before years of work go to waste, with likely consequences for their careers as well. There is a lot of focus on the number of papers someone has published, and if publishing is much easier when you can report something with p < .05, the resulting vicious cycle is obvious. People should not start playing the blame game now, as everyone involved is more or less subject to these long-grown conventions. Instead, people in science should focus on making things better, starting today.
Science should move away from using p < .05 as a falsely dichotomous indicator of whether results are meaningful, as there are numerous issues with this approach. A top-down approach, with journals nudging researchers towards alternatives, will probably work best and fastest when it comes to adopting different ways of analysing data.