Mozgostroje

A social psychologist's defense of p values

21 Apr 2014


Only recently did I discover David Funder's blog, which has several interesting posts. One post from a year ago is Funder's polemic with a social psychologist who presents the following apology for the controversial use of $p$ values. Now, a defense of $p$ values is something really rare, at least in public. So let's hear it:

…the key to our research… [is not] to accurately estimate effect size. If I were testing an advertisement for a marketing research firm and wanted to be sure that the cost of the ad would produce enough sales to make it worthwhile, effect size would be crucial. But when I am testing a theory about whether, say, positive mood reduces information processing in comparison with negative mood, I am worried about the direction of the effect, not the size (indeed, I could likely change the size by using a different manipulation of mood, a different set of informational stimuli, a different contextual setting for the research — such as field versus lab). But if the results of such studies consistently produce a direction of effect where positive mood reduces processing in comparison with negative mood, I would not at all worry about whether the effect sizes are the same across studies or not, and I would not worry about the sheer size of the effects across studies. This is true in virtually all research settings in which I am engaged.

This argument is wrong in multiple ways. Funder's polemic revolves around the superiority of effect sizes, but I think that issue is only tangential. To magnify the mistakes, I restate the above argument as it would sound in a field where a focus on effect size actually paid off. Here is the alternative take on classical mechanics (the portions that I changed are highlighted in bold):

…the key to our research… [is not] to accurately estimate effect size. If I were testing an advertisement for a marketing research firm and wanted to be sure that the cost of the ad would produce enough sales to make it worthwhile, effect size would be crucial. But when I am testing a theory about whether, say, **a heavy object falls with a higher velocity than a light object**, I am worried about the direction of the effect, not the size (indeed, I could likely change the size by using a different **initial speed**, a different set of **objects**, a different contextual setting for the research — such as **liquid versus air**). But if the results of such studies consistently produce a direction of effect where **a heavy object falls faster than a light object**, I would not at all worry about whether the effect sizes are the same across studies or not, and I would not worry about the sheer size of the effects across studies. This is true in virtually all research settings in which I am engaged.

Fortunately, physicists did care about the effect size, which allowed them to discover gravitation and the equation that describes the velocity of free fall (recall $v(t) = v_0 - g \cdot t$).
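For reference, this formula is just Newton's second law integrated once under constant acceleration; a minimal derivation, which also shows why the mass cancels (a point I return to in the list below):

```latex
% Free fall under constant gravitational acceleration (upward taken as positive).
% The mass m cancels out of Newton's second law, so the free-fall velocity
% does not depend on the object's mass.
\begin{align*}
  m \frac{dv}{dt} = -m g \quad &\Longrightarrow \quad \frac{dv}{dt} = -g, \\
  \frac{dv}{dt} = -g \quad &\Longrightarrow \quad
    v(t) = v_0 - \int_0^{t} g \,\mathrm{d}t' = v_0 - g\,t.
\end{align*}
```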

The argument is built from the following sub-claims, which I consider problematic.

  1. "We can't interpret effect size because there are variables that moderate the effect size". The usual way to solve this is to measure the moderators and then to express the effect size conditional on the moderating variables. The simplest way to do so is linear regression.

  2. "Moderators influence the magnitude but not the direction of the effect". As a consequence we would expect that change in "manipulation, stimuli, contextual setting" might alter the effect size say from $0.8$ to $0.2$ but we would not expect a change from $0.1$ to $-0.1$. I don't consider this plausible. This goes against all that statistics tells us about sampling theory. More importantly it is also empirically implausible. Much of the current fuss about the replicability of social priming research is because the mean estimate of effect size jumps from negative to positive value and back across different replication attempts.

  3. The possibility that an effect size is zero seems to be implicitly out of the question in the above argument. In fact, a heavy object does not fall faster than a light object, nor does a light object fall faster than a heavy one. An object's free-fall velocity is independent of its mass.
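Here is the moderated regression referred to in point 1. It is only a minimal sketch with made-up variable names (mood, load, processing) and simulated data; the point is just that the effect of the manipulation can be reported conditional on a measured moderator instead of being averaged away:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: a binary mood manipulation, a continuous moderator
# (say, cognitive load), and a processing score as the outcome.
mood = rng.integers(0, 2, n)        # 0 = negative mood, 1 = positive mood
load = rng.normal(0.0, 1.0, n)      # moderator
processing = (0.5 - 0.3 * mood + 0.2 * load + 0.25 * mood * load
              + rng.normal(0.0, 1.0, n))

df = pd.DataFrame({"mood": mood, "load": load, "processing": processing})

# Ordinary least squares with an interaction term: the effect of mood is
# expressed conditional on the moderator rather than averaged over it.
model = smf.ols("processing ~ mood * load", data=df).fit()
print(model.summary())
```

The coefficient on mood is the effect at the moderator's zero point, and the mood:load coefficient describes how the effect changes as the moderator changes, so whether the effect ever crosses zero becomes an empirical question rather than an assumption.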
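And here is the toy simulation referred to in point 2. The true effect size and sample size are arbitrary illustrative choices, not estimates from any real priming study:

```python
import numpy as np

rng = np.random.default_rng(1)
true_d = 0.2          # modest true standardized effect
n_per_group = 20      # small-study sample size per group
n_replications = 10_000

# Simulate many two-group "studies" and record the estimated effect
# (mean difference; with unit variances this is on the standardized scale).
treatment = rng.normal(true_d, 1.0, size=(n_replications, n_per_group))
control = rng.normal(0.0, 1.0, size=(n_replications, n_per_group))
estimated_d = treatment.mean(axis=1) - control.mean(axis=1)

# Even though the true effect is positive, many studies estimate a negative
# effect purely because of sampling error.
print("share of studies with a negative estimate:",
      (estimated_d < 0).mean())
```

With these settings roughly a quarter of the simulated studies point in the "wrong" direction, even though every one of them was sampled from a population with a positive effect.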
