
When theory overrides data

7 January 2026 Tags: peer review, experimental data, theoretical objections, editorial standards, research integrity, data interpretation, editorial oversight

Science rests on a fundamental principle: empirical observations must be taken seriously. When experimental data conflicts with theoretical expectations, the appropriate response is careful investigation, not dismissal. Yet a recent experience with Molecular Microbiology has highlighted how easily this principle can be compromised in the peer review process.

The scenario is straightforward. We submitted a manuscript presenting experimental data on antibiotic effects, specifically examining the combined effect of rifampicin and cephalexin on bacterial cells. The study comprised six multi-panel figures showing microscopy data and other measurements. In the final figure, Figure 6, we visualized cell morphology following treatment with both antibiotics in a replication run-out experiment. We observed some minor filamentation of bacterial cells following the dual treatment.

Rifampicin was used exclusively in Figure 6, not in any other figure. For this figure, one of the reviewers commented: "This [the filamentation] is an artefact: filamentation requires protein synthesis and as explained above protein synthesis cease within the first two minutes after RIF addition."

The handling editor, in the decision letter, stated: "Referee 1 [...] described how your major findings are the result of an artifact."

This progression from a specific criticism to wholesale dismissal reveals two significant problems with how this manuscript was handled.

Problem 1: editorial mischaracterization

The most immediately apparent issue is one of scope. The reviewer's comment applied to one figure out of six. Five other figures, presenting different experimental approaches and utilizing different antibiotic treatments, remained entirely unaffected by this specific criticism. In fact, even if the criticism were correct, it would not have changed the main message of the manuscript. Figure 6 is a control that reproduces conditions commonly used in replication run-out experiments, nothing more.

How, then, does an editor conclude that "major findings are the result of an artifact"? This characterization can only arise from a failure to engage carefully with the manuscript itself. Had the editor examined the study with appropriate attention, it would have been immediately apparent that the criticism, whatever its merit, applied narrowly to one experimental condition, not to the body of work as a whole.

This represents a significant editorial oversight. It is worth noting that the editor also indicated that molecular details of the study were not sufficiently developed as would be expected for Molecular Microbiology – an entirely fair assessment and one that would have sufficed as grounds for rejection. The mischaracterization of the reviewer's criticism was both unnecessary and misleading.

Problem 2: theory overriding observation

The more troubling issue lies in how experimental data was weighed against theoretical objection. The reviewer's reasoning has a certain logic: rifampicin inhibits transcription, protein synthesis should cease rapidly, and filamentation requires ongoing protein synthesis. On the face of it, this forms a coherent argument.

But here is the critical point: we present controlled, replicated experimental observations. These are not anecdotal results or single observations. They represent systematic, repeated experiments conducted under defined conditions. How can it be that actual data are completely dismissed by nothing more than an argument?

When faced with the reviewer's criticism, we did not simply dismiss it. That would, of course, be poor scientific practice. Instead, we took the concern seriously and conducted additional experiments. We obtained fresh rifampicin, prepared new stock solutions, and repeated the experiments. Twice. In each case, we observed precisely the same result as before, unsurprisingly. The filamentation was reproducible and consistent. The reviewer was simply mistaken.

This is not a real surprise: we already know from the work of Tokio Kogoma and other labs that processes such as DNA replication can, under the right experimental conditions, continue for hours in the presence of drugs such as chloramphenicol or rifampicin, simply utilizing the pool of proteins already present at the time the drugs are added (see here for a review). Processes requiring threshold levels of proteins, such as the initiation of DNA replication, are blocked, because no new protein molecules can be synthesized. But the existing molecules will easily be capable of generating the modest amount of filamentation observed.

This raises a fundamental question about peer review: at what point did theoretical objections gain automatic precedence over actual measurements? The reviewer presented a logical framework for why our observations should not occur, but offered no empirical evidence that they did not occur. We presented data showing that they did occur. The editor accepted the former and dismissed the latter.

This inversion of evidential priority is deeply problematic. Reviewers vary in their experience and expertise. Some may have extensive hands-on familiarity with the specific experimental systems under discussion; others may be reasoning from general principles without direct observation. In this case, I suspect the reviewer had never himself/herself directly observed cells after rifampicin treatment under the conditions we employed. The theoretical framework seemed sound, but it did not align with experimental reality. And, as pointed out, other theoretical frameworks exist and can easily explain the observations we have made.

A note on peer review

I want to be clear about my perspective on peer review. Over many years of publishing research, I have found the peer review process, even though often cumbersome and tedious, important and valuable. It has improved every single paper I have published. Some reviewer comments have been harsh, some have seemed beside the point initially, but engaging seriously with any criticism has consistently strengthened the final work.

This case troubles me precisely because it departs from that standard. Good peer review involves a genuine engagement with the work presented, a careful weighing of evidence, and a willingness to consider that unexpected observations might be valid. What happened here falls short of that ideal.

How should this work?

The reviewer's role in raising this concern was probably appropriate. When results seem to contradict established understanding, it warrants attention. The issue lies in how the concern was framed.

Rather than stating categorically "This is an artefact," a more constructive approach might have been: "This unexpected result seems to contradict our understanding of rifampicin's rapid inhibition of protein synthesis. Could the authors address whether alternative explanations exist, or provide additional controls to rule out artifacts?"

This phrasing acknowledges the tension between observation and theory while leaving open the possibility that the observation is valid. It invites dialogue rather than imposing a conclusion. And we would now have an answer to a question, rather than a smug "The reviewer was wrong!"

The editor's role is equally critical. Editors must distinguish between "the reviewer questions this result" and "this result invalidates the study." They must weigh how a specific criticism relates to the broader body of work. When a manuscript presents multiple lines of evidence, and a reviewer raises concerns about one element, the editor's responsibility is to assess the scope and impact of that concern accurately.

In this case, the editor amplified a narrow criticism into a wholesale rejection of the study's validity. This represents a rather troubling failure of editorial judgment.

The broader context

This experience is not unique to Molecular Microbiology or to this particular editor. These are systemic issues that affect peer review across journals and disciplines.

Editors face significant time pressures and must rely on reviewer expertise. Training in how to evaluate reviews, particularly when they conflict with presented data, varies considerably. Reviewers may not always recognize the distinction between "I don't understand how this is possible" and "this is definitively wrong." Authors working on specialized experimental systems may struggle to communicate their methods and observations clearly to a broader audience.

These challenges are real, but they do not excuse the kind of handling described here. The risk of prioritizing theoretical objections over empirical observations is substantial: we may inadvertently discourage the publication of results that do not fit current frameworks, precisely the observations that often lead to new understanding.

Scientific progress has repeatedly come from taking seriously what seemed impossible. When Helicobacter pylori was proposed as a cause of peptic ulcers, the prevailing theory held that bacteria couldn't survive in the acidic stomach environment. The theory was strong; the observations were stronger.

What this means going forward

For authors, this case underscores the importance of thorough documentation and a willingness to defend valid observations. When faced with theoretical objections to experimental results, the appropriate response is additional experimentation and clear communication, not capitulation.

For reviewers, it highlights the need to distinguish carefully between questioning results and dismissing them. Unexpected observations deserve scrutiny, certainly, but they also deserve the possibility of being correct. Framing criticism as questions rather than declarations leaves room for this possibility.

How often even experienced scientists slip into what seems like nagging, or even a grouchy attitude towards a new study. It would be so much more pleasant if reviewers could adhere more to the attitude of Michelangelo: "I saw the angel in the marble and carved until I set him free" – asking what in fact needs to be done to a manuscript to make it better, clarify its meaning, and make new findings stand out as much as possible. It would be a refreshingly constructive approach. Not only does it appear incredibly difficult for reviewers to adopt this approach, but the continuous bombardment with whiny comments also changes the attitude of researchers who would like to take a better approach: why should I make life easy for authors when my own life is so often made a misery?

For editors, the lesson is one of careful characterization and scope assessment. Reviews must be read critically, not just accepted at face value. The relationship between specific criticisms and overall validity requires careful thought. Editorial decisions carry weight; they should be based on accurate representations of both the work and the reviews. In this particular case, the lack of depth in the molecular mechanisms explaining the data was the key argument. It highlights that Mol Micro is simply the wrong journal for this paper. Case closed, without any hard feelings.

For the field more broadly, this serves as a reminder that observations sometimes reveal gaps in our understanding. The fact that a result seems theoretically unlikely does not make it artifactual. Science advances when we take our measurements seriously, even, and perhaps especially, when they surprise us.

Closing thoughts

This manuscript will find an appropriate home elsewhere. The work is sound, the observations are reproducible, and the story it tells deserves to be part of the scientific record.

But the handling by Molecular Microbiology raises questions that extend beyond a single paper or a single journal. The peer review system, for all its flaws, remains essential for quality control in scientific publishing. It works when engaged with in good faith, with careful attention to detail, and with genuine respect for the empirical foundation of our discipline.

This case fell short of that standard. We can, and should, do better.