Can p-values be close to .05 because researchers ran a careful power analysis and collected ‘just enough’ participants to detect their effect? In this blog post, I evaluate the merits of this argument.
A short blog post showing the immense benefits of pre-registration for false-positive rates, even when the pre-registration is underspecified.
What can we learn from effect sizes, and under which conditions?
How can researchers respectfully and constructively flag inadequate statistical evidence when they see it in papers? This post offers some personal reflections on this complex question.
A primer on evaluating statistical evidence, with a focus on p-curve analysis.
You have used the distBuilder library to collect data; now what? This post walks you through the basics of cleaning and analyzing distribution builder data in R.
A short tutorial on adding “totals” to distBuilder to keep track of how many balls are allocated to each bucket.
A two-part blog post on outlier exclusion procedures and their impact on false-positive rates.