You noticed that the p-values in a paper are close to .05. Does this make you a bad person? Can ‘circling p-values’ ever be a useful heuristic? My two cents on an (apparently?) controversial topic.
Let’s talk about errors in research, using myself as an example.
Creating studies that are powered to detect the smallest effect of interest… without collecting more data than you need to detect bigger effects.
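The idea in the teaser above — powering for the smallest effect size of interest rather than the expected effect — can be sketched with a standard power calculation. This is only an illustration, assuming a two-sample t-test and using `statsmodels`; the value `d = 0.2` is a placeholder smallest effect of interest, not one from the post.

```python
# Minimal sketch: sample size needed to detect the smallest effect of
# interest (SESOI) with a two-sample t-test, using statsmodels.
from statsmodels.stats.power import TTestIndPower

smallest_effect = 0.2  # hypothetical SESOI (Cohen's d); a placeholder value

# Solve for the per-group n that gives 80% power at alpha = .05 (two-sided)
n_per_group = TTestIndPower().solve_power(
    effect_size=smallest_effect,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(round(n_per_group))  # participants needed per group
```

Any true effect larger than the SESOI is then detected with more than 80% power, which is the sense in which this design avoids collecting more data than bigger effects require.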
What happens when two researchers attempt to reproduce all the pre-registered studies with open data published in JCR?
How natural improvements in outcomes can lead us to form strong, yet incorrect, beliefs about how the world works.
If power analysis cannot be based on the expected effect size, what should it be based on?
Why power analysis, as traditionally performed, isn’t a good tool for choosing sample size.
Can p-values be close to .05 because researchers ran a careful power analysis and collected ‘just enough’ participants to detect their effect? In this blog post, I evaluate the merits of this argument.