This was a Question of the Week on the MunichROCS mailing list. Here is what the members of the list came up with:
Theory: negative results should be published in the same journals as positive ones.
- "Negative results" offer insight
- Especially important in young fields like ours, or in fields that struggle with conventional wisdom and folklore
- Publication bias is a big problem: if only positive findings get published, the literature overstates effects (see the simulation sketch at the end of this note)
Practice: papers reporting negative results often get rejected
- Typical reviewer comments: "missing novelty", "not surprising"
Current solutions:
- If you review, keep the value of negative results in mind
- If you are an editor, push for an update of journal policies (the same goes for conference organisers)
- Publish in specialized journals such as:
- Or publish in good journals that encourage publishing negative results
Important points raised:
- Scientific evidence should be judged by methodological rigour (e.g., the quality of the study design), not by study outcome. One way to get there is via registered reports (https://cos.io/rr/), where you submit the first half of a paper to a journal before data collection. Reviewers assess the rigour of the proposed methods, the theoretical foundation, the statistical power, etc. The journal then gives an "in-principle acceptance" (IPA), which says: we will publish your paper regardless of the results (as long as you stick to the proposed plan). This way authors have a guarantee of publication even in the case of null results, and publication bias is avoided entirely.
- Keep raising the point that "negative results" are important and valuable.
- Shouldn't we avoid the term "negative results" altogether? Isn't what people mean by "negative results" really a result indicating that there is no correlation between two (or more) phenomena?
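To make the publication-bias point above concrete, here is a minimal simulation sketch (not from the discussion itself; Python with numpy and scipy assumed, all parameter values illustrative). It simulates many small studies of the same true effect, "publishes" only those with p < 0.05, and compares the published record to the truth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.2   # small true standardized effect (illustrative value)
N_PER_GROUP = 30    # sample size per group in each simulated study
N_STUDIES = 5000    # number of simulated studies

all_effects = []        # observed effect in every study
published_effects = []  # observed effect in studies that "pass" p < 0.05

for _ in range(N_STUDIES):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    t, p = stats.ttest_ind(treatment, control)
    effect = treatment.mean() - control.mean()  # observed effect (in SD units)
    all_effects.append(effect)
    if p < 0.05:  # "publish" only significant results
        published_effects.append(effect)

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean effect, all studies:      {np.mean(all_effects):.2f}")
print(f"mean effect, 'published' only: {np.mean(published_effects):.2f}")
print(f"fraction 'published':          {len(published_effects)/N_STUDIES:.0%}")
```

The "published" mean comes out far above the true effect: underpowered studies only reach significance when they happen to overestimate the effect, so a literature filtered by significance is systematically inflated. This is exactly the distortion that publishing null results, and mechanisms like registered reports, are meant to correct.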