Can p-values be close to .05 because researchers ran a careful power analysis and collected 'just enough' participants to detect their effect? In this blog post, I evaluate the merits of this argument.
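As a quick illustration of the question the post examines (a minimal simulation sketch, not taken from the post itself; the effect size and design are assumptions), here is what the p-value distribution looks like when a study is powered at exactly 80%:

```r
# Sketch: if researchers collect "just enough" participants for 80% power,
# how often do p-values actually land just below .05?
set.seed(42)
d <- 0.5                                               # assumed true effect (Cohen's d)
n <- ceiling(power.t.test(delta = d, power = 0.80)$n)  # per-group n for 80% power

p <- replicate(10000, {
  t.test(rnorm(n, mean = d), rnorm(n, mean = 0))$p.value
})

mean(p < .05)            # ~0.80, as planned
mean(p > .04 & p < .05)  # only a small share of p-values cluster just below .05
```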
How can researchers respectfully and constructively flag inadequate statistical evidence when they see it in papers? This post offers some personal reflections on this complex question.
You have used the distBuilder library to collect data; now what? This post walks you through the basics of cleaning and analyzing distribution builder data in R.
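To give a flavor of the kind of cleaning the post covers, here is a minimal R sketch. It assumes each participant's allocation was exported as a comma-separated string of bucket counts in a column called `dist`; the column name, toy data, and export format are hypothetical stand-ins, not distBuilder's actual output.

```r
library(dplyr)
library(tidyr)

# Toy data standing in for a real export: one row per participant,
# bucket counts stored as a comma-separated string (assumed format)
raw <- tibble(
  id   = 1:2,
  dist = c("0,2,5,2,1", "1,1,4,3,1")
)

long <- raw %>%
  separate_rows(dist, sep = ",") %>%   # one row per bucket
  group_by(id) %>%
  mutate(bucket = row_number(),        # bucket position (1 = lowest value)
         count  = as.integer(dist)) %>%
  select(-dist)

# Recover each participant's implied mean by weighting bucket
# positions by the number of tokens allocated to them
long %>%
  summarise(est_mean = weighted.mean(bucket, count))
```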