With the number of behavioral and new media studies proliferating every day, it's critical for us to sort findings based on our confidence in them. To help, we've created this comparison table. It helps us decide where interventions belong within the Prosocial Design Library.
*Should an intervention fail to be replicated, we will lower it to Tentative, Likely, or Convincing on a case-by-case basis.
How do I use the comparison table?
Each feature in the left column has a corresponding true-or-false answer. If the statement in the Features column is true, it yields the associated grade. An intervention's grade is subject to change as the available evidence changes.
Let's start at the upper left:
- Has precedent? Does the intervention exist in some form? If so, it receives at least a grade of Inference. If not, it's not meant for the library. However, if it also has intervention-specific documentation describing itself and its underlying rationale, then...
- Has quantitative documentation? If the research is entirely qualitative, the intervention receives a grade of Emergent. Note: qualitative data might be a document that includes user journeys, interviews, or depictions of the design, but has no set measurements or controls in place. An Emergent grade is also given if the data is alleged to be quantitative but is proprietary or unavailable. If the supporting materials are quantitative, however, then not only are we entering the realm of Tested Interventions (congratulations!) but also...
- Did the research use an experiment with randomized control groups? Was it an experiment on the intervention itself? If not, as may happen with a cohort study or other quasi-experimental research, the intervention receives a grade of Tentative. If, however, it was an experiment with randomized controls, then...
- Was the experiment conducted without any vested interests? If there's a conflict of interest, or if the research is of corporate origin, it still receives the grade of Tentative. If, however, the experiment was independently conducted, then...
- Were the experiment's findings peer reviewed? Were they published in a journal? If not, then by process of elimination it is a preprint, and it receives the grade of Likely. If, however, it is peer reviewed, then...
- Has someone independent of the first trial repeated the experiment? Did researchers unaffiliated with the original study repeat the experiment to see if it worked? If not, meaning the peer-reviewed document stands alone, it receives the penultimate grade of Convincing. If, however, it has been independently repeated, then...
- Were the results replicated? If the results failed to replicate, the grade is bumped down to Tentative. But should the results be replicated, the intervention receives the highest grade we can award: Validated. As of February 2021, no intervention has received the Validated grade.
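The decision ladder above can be sketched as a small function. This is a hypothetical illustration only: the function name, parameter names, and the idea of returning `None` for interventions outside the library are our assumptions, not part of the Prosocial Design Library itself.

```python
from typing import Optional

def grade_intervention(
    has_precedent: bool,
    has_documentation: bool,
    quantitative: bool,
    randomized_experiment: bool,
    no_vested_interests: bool,
    peer_reviewed: bool,
    independently_repeated: bool,
    replicated: bool,
) -> Optional[str]:
    """Walk the comparison table from top-left down to Validated.

    Illustrative sketch; names and structure are assumptions.
    """
    if not has_precedent:
        return None  # not meant for the library
    if not has_documentation:
        return "Inference"
    if not quantitative:
        return "Emergent"  # qualitative, proprietary, or unavailable data
    if not randomized_experiment:
        return "Tentative"  # cohort or other quasi-experimental research
    if not no_vested_interests:
        return "Tentative"  # conflict of interest or corporate origin
    if not peer_reviewed:
        return "Likely"  # preprint, by process of elimination
    if not independently_repeated:
        return "Convincing"  # peer-reviewed but stands alone
    # Independently repeated: replication decides the final grade.
    return "Validated" if replicated else "Tentative"
```

For example, a peer-reviewed randomized experiment with no conflicts of interest that has never been independently repeated would come out as Convincing; a failed independent replication would drop it to Tentative.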
What happens when findings conflict?
Let's say a Validated entry later fails to replicate. If the new research is found to be well executed, that Validated intervention would be bumped down to Tentative. Then, if the once-Validated entry is replicated again, and we conclude that the new findings were produced with even tighter controls and with higher accuracy and precision than the prior research, we would likely restore the intervention to Validated status.
If dozens upon dozens of studies were run on an intervention with no clear pattern of replication arising, we would likely mark said intervention as Tentative until further notice.