You noticed that the p-values in a paper are close to .05. Does this make you a bad person? Can 'circling p-values' ever be a useful heuristic? My two cents on an (apparently?) controversial topic.
Can p-values be close to .05 because researchers ran a careful power analysis and collected 'just enough' participants to detect their effect? In this blog post, I evaluate the merits of this argument.
How can researchers respectfully and constructively flag inadequate statistical evidence when they see it in papers? This post offers some personal reflections on this complex question.