Should we trust "science" less?

I realize I may be stirring the hornet’s nest here, but I wanted to ask for some advice or thoughts on what is causing me to be a bit cynical about research as a whole. As a researcher, I care a lot about “science,” but I’m becoming more skeptical that I can trust the results published in any journal (including mainstream journals with good historical reputations).

On the surface, the scientific method seems like (and in my view, is) a reliable and trustworthy way of choosing between different theories and models. I often imagine different hypotheses as different buildings, each supported by their own “floors” of assumptions and previous work. The scientific method works by “shaking” each building (hypothesis) and exposing it to stress and possible weak points. Buildings that fall apart are obviously not as strong, while those that remain standing after repeated trials are more trustworthy. As David Hume said, “A wise man proportions his belief to the evidence,” and similarly, scientific theories that have stood up to repeated attempts at falsification deserve more trust.

As great as this methodology is on paper, it is not always practiced, even in respected scientific circles and journals. Funding for repeating experiments is often difficult to obtain (and researchers have a greater incentive to do “original” work than to reproduce someone else’s), so subjecting a theory to tests may not happen as often as it should. Furthermore, even when replication is attempted, it is often not successful, calling into question the credibility of theories built on results that cannot or have not been replicated (the reproducibility crisis). In some fields, repeating an experiment, or even auditing one, requires access to additional data or code (often unpublished), expensive equipment, and materials that are not easily obtained.

Double-blind peer review, once again another great idea in theory, can also be a mess in practice. In 2021, NeurIPS ran an experiment in which 10% of all submissions were sent to a second, independent set of reviewers. About 57% of the papers accepted by one set of reviewers were rejected by the other (and vice versa). Fortunately (or unfortunately), this result reproduces a 2014 study. This implies that the double-blind review process deciding which papers get accepted into arguably the top machine learning conference is largely arbitrary (this was the conclusion of both studies).
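To get some intuition for what “arbitrary” means here, consider a toy simulation (my own sketch, not the analysis NeurIPS actually ran; the acceptance rate, paper count, and noise model below are illustrative assumptions). Two committees score the same papers, where each score mixes a paper’s latent quality with independent reviewer noise. If decisions were pure noise, the expected disagreement among accepted papers would be 1 − acceptance rate (75% at a 25% rate), so an observed ~57% sits between perfectly consistent and fully random:

```python
import random

def disagreement_rate(quality_weight, n_papers=100_000, accept_rate=0.25, seed=0):
    """Simulate two independent committees reviewing the same papers.

    Each paper has a latent 'quality'; a committee's score is that quality
    (scaled by quality_weight) plus independent reviewer noise, and the top
    accept_rate fraction is accepted. quality_weight=0 means decisions are
    pure noise; larger values mean decisions track quality more closely.
    """
    rng = random.Random(seed)
    papers = [rng.gauss(0, 1) for _ in range(n_papers)]

    def committee_accepts():
        scores = [quality_weight * q + rng.gauss(0, 1) for q in papers]
        cutoff = sorted(scores, reverse=True)[int(accept_rate * n_papers)]
        return {i for i, s in enumerate(scores) if s > cutoff}

    a = committee_accepts()
    b = committee_accepts()
    # Fraction of committee A's accepted papers that committee B rejected.
    return 1 - len(a & b) / len(a)
```

With `quality_weight=0` the disagreement comes out near 0.75, and it falls toward zero as scores track quality more closely; a real-world figure in the 50s suggests reviews carry some signal but a great deal of noise.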

If we agree that the scientific method and the double-blind review process are useful ways of discovering truth about the world, we should ask whether current scientific institutions are actually upholding these principles in a meaningful way. Taken together, the issues in the two paragraphs above shouldn’t necessarily decrease our trust in the scientific method itself, but perhaps they should cause us to second-guess any theory pushed forward as “science” or published in a mainstream scientific journal until these issues are resolved. If results are not independently verified and reviewers cannot methodically distinguish “good” work from “bad,” the scientific community cannot claim its results adhere to the very standards of science it claims to uphold.


I think you bring up some really important points, even if it stirs up a few hornets.

I do think we should remain skeptical of a theory that is supported by just one paper. A well-supported theory should be backed by multiple papers and research groups, and by multiple independent lines of evidence.

And I fully agree with the problems that pragmatism and practicality pose for scientific research. It is understandable that we would want to spend money on learning new things instead of repeating what we have already done. Part of the solution is requiring enough repeats of the experiment in the original paper, but in a perfect world we would still like to see the same data from multiple groups in multiple labs. Unfortunately, we don’t live in a perfect world with unlimited resources. Would be nice if we did, though.

One workaround is others using previous work as a foundation for further research. If the findings in a paper do make an impact on the field, then chances are others will want to build off of it, and in doing so they will inevitably retest many of the findings in the original paper. This has at least been my experience. New research will tend to approach the same question from a different angle, so you start to build up those independent lines of evidence.

But I fully agree that reproducibility is a big deal and is worth bringing up as often as possible.

In my experience, it is rare for a paper to be outright rejected for publication, at least in the mid-tier journals in my field, but it is also extremely rare for papers to be accepted as-is on the first submission. Some reviewers will want adjustments to language or figures, and some will want additional experiments. However, the vast majority of the time the journal still wants to see a resubmission once the reviewers’ criticisms have been addressed. I will agree that there can be a huge lack of consistency between reviews and reviewers. What one scientist thinks is adequate, another will think is inadequate, but that is kind of how the entire scientific community operates. It’s a human group effort that is fallible and imperfect, but it’s the best we can do with what we have. To borrow from Churchill, peer review is the worst system for getting science published, except for all of the others.

One interesting idea I have come across is for authors to publish their paper with minimal initial peer review and then allow for any scientist to post criticisms and opinions of the paper on a continual basis.

The most important thing to remember is that publications aren’t scriptures. At least in my eyes, a publication is just the start of an idea. Scientists do need to communicate their findings to the scientific community, and I think there shouldn’t be unnecessary hurdles that scientists have to clear. Of course, the question will always focus on what is unnecessary.


I think the best way to look at questions such as this is in terms of how mature a scientific finding is.

The most mature and reliable scientific theories either have practical and commercial applications that depend on them being correct in order to work, or else have other scientific theories that depend on them.

Cutting edge theories, or those which don’t have much in the way of practical application, are likely to be less robust.

I discussed this in a bit more detail here:


I think that it depends significantly on the field: double-blind is all but irrelevant to “Five new species of ____ from ____” (and most anything else that lacks any plausible conflict of interest). I agree that more checking of past studies would be good, especially in fields with more subjective outputs (e.g. neuroscience or drug testing). Systematics (and probably other fields that I haven’t worked with as directly) essentially has a continuous checking of past studies by looking at “Does this placement still make sense?” “Are these taxa blurring together with new data?” “Are there unrecognized cryptic taxa here?”, etc.

Novel phylogenetic studies (among the more cutting-edge papers I work with) are assumed to be tentative (by anyone sensible, at least), and generally aren’t used for phylogeny (or are used, but are acknowledged to be tentative) unless they are really good ones (lots of data, good support, etc.), or until several papers all give the same result.


Science IS the methodology. So what you are really talking about is trusting people, not trusting science.

The methodology works. It does not give proof, only what is reasonable to believe. So the faith that scientists put in the methodology is not something that should be lessened. And this is not about trusting conclusions – on the contrary, the method has us testing things over and over again without end. On the other hand, it is not a methodology that works for everything. There are plenty of things in life where science is of no help whatsoever. So people claiming that science is their guide for life itself are frankly delusional – just trying to steal the science stamp of approval for their own subjective conclusions and beliefs.

And as for people… whether they are in science or otherwise… people are not very trustworthy. Science is not immune to the basic human reality that people will sell out for a paycheck. Scientific journals make an effort to check whether things honestly adhere to scientific methods, but there are any number of things that can go wrong. So trusting science does not mean trusting anything you find in a scientific journal. It is only really trustworthy if it is something that has been tested repeatedly by different people all over the world. And even then, as I said before, it is not proof, but only what is reasonable to believe.


Great comments and insights, thanks. There are a few things that people are trying to do differently that I think are worth mentioning. First, one journal I am submitting to is doing rolling acceptances and reviews (sort of like what you mention in the latter section). In a similar way, rather than papers being “rejected” or “accepted,” it takes a more workshop-like approach where the reviewers help guide the paper towards “acceptance,” perhaps at a later time. In other words, rather than a hard accept or reject, the paper is improved with enough data and feedback until it becomes “publishable.”

I do think most scientists and those of us in the research community understand these nuances, so it is only when science communication becomes poor or click-baity that it really gets out of hand (like when an article is based only on the title/abstract of a paper and doesn’t communicate its important particularities).


Makes me think of the coffee headlines where someone says, “Well, this scientist said coffee is good and this scientist said it’s bad,” but one scientist is referring to 4 ounces of black coffee and the other is talking about 20 ounces of a syrupy Starbucks drink.

Ultimately, there is not a better way, though. The scientific process is the best process. Once something is published, there is the chance for others to counter it and share their own results, or to test it and find similar support. Or something like an entomologist who tests whether bumblebees rely more on vision or on calculations based on distance and angle of light to find a specific food source, then tests those results with honey bees and gets the same, and so on.

As a family physician, I did not submit papers and seldom read primary research papers, being overwhelmed more by application type studies, but the issue still trickled down to the level of practice. When a new drug came to market, some doctors were early adopters, some gave it a little while before prescribing, and some were late adopters, waiting until results were well established before using. I tended to be in the mid to late adopter group, as there was a little distrust of the process, and I thought it good to wait to see if adverse side effects showed up that were not found in the initial studies. That in turn might be due to bias and pressure on the part of the companies funding the research.

That too has changed a bit over the years, particularly because of direct marketing by drug companies, leading consumers to ask for and seek out drugs that are new to the market. We are really seeing that with such things as the weight-loss drugs originally developed for diabetes, where clinics pop up and pharmacies compound and sell them for weight loss, with little track record as to long-term effects or efficacy. Economics seem to rule in our society, with little regard for reason and ethics.


I like that approach. It does require more time from the reviewers, which may be a sticking point.

Agreed. The results are mixed (to put it mildly) when peer reviewed papers are filtered through the media. There are some good science journalists out there (e.g. Carl Zimmer), but they are sadly few and far between.


Science can be fabricated, like anything else in this world.

So I suggest watching out and being sceptical of everything, even if it is backed by well-grounded research.

My 10mg-a-day creatine dose almost caused me kidney issues this past summer.

The supplement was supposedly rated “good” by the EOF (the FDA, but in Greek).

So I’m more sceptical of everything than I was.


Skepticism is good. I have trouble with it lapsing into cynicism at times, which is sort of skepticism devoid of hope.


One of my favorite George Carlin quotes:

“Tell people there’s an invisible man in the sky who created the universe, and the vast majority will believe you. Tell them the paint is wet, and they have to touch it to be sure.”
― George Carlin


I’m sure Carlin could do one better by telling people they cannot choose to act, and then wondering what it means when they agree with you.


Ha! Good one. :slightly_smiling_face:


I agree that the review system has weaknesses because of the humans doing it, and the interesting publications (or the popular versions of the story) often try to claim too much importance for their finding because they want to promote their research. Although this is common, there are great differences depending on your branch of science and the journals where you send your manuscript.

Some journals publish fewer than 15% of the manuscripts they receive. In these journals, there is an attempt to speed up the editorial process by having editors reject any manuscript that does not seem to have a good chance of acceptance (editorial rejection), and reviewers are asked to rate the content of the manuscript relative to other papers. If the reviewers do not rate the manuscript within the top 10%, that is enough to warrant rejection, unless the editor has another opinion. To avoid rejection, the authors try to market their manuscript by claiming that the results are very significant and show spectacular novelties, even when that is not true. If the authors are skillful with words, some busy reviewers may buy their claims and think that the manuscript is better than it is. Expert reviewers are busy and do not have much time to evaluate a single manuscript, so they may easily miss its hidden weaknesses.

If the manuscript includes interesting results that challenge previous consensus, it has a good chance of becoming accepted. Even if the significant result is just a statistical anomaly, the results attract attention and citations in papers that want to refute the claims. Good for the author and the journal, so who cares whether it is true or not. The following papers then show whether there was some truth in the paper or not. Almost 5% of statistical analyses give a statistical confidence of p<0.05 just by chance, so a lucky researcher may claim to have found significant results, even if that is just a chance result.
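The 5% figure is easy to check with a quick simulation (a sketch of my own, not from the post above; the group sizes and number of tests are arbitrary illustrative choices). We run many two-sample t-tests where the null hypothesis is true by construction, and count how often a “significant” result appears anyway:

```python
import math
import random
import statistics

def false_positive_rate(n_tests=2000, n_per_group=50, seed=1):
    """Run many two-sample t-tests where the null hypothesis is TRUE
    (both groups drawn from the same N(0, 1) distribution) and count
    how often the result is 'significant' at the 5% level anyway."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        # Two-sample t statistic with per-group sample variances.
        se = math.sqrt(statistics.variance(a) / n_per_group
                       + statistics.variance(b) / n_per_group)
        t = (statistics.mean(a) - statistics.mean(b)) / se
        # Two-sided critical value for ~98 degrees of freedom at alpha = 0.05.
        if abs(t) > 1.984:
            hits += 1
    return hits / n_tests
```

Even though the null hypothesis is true in every single test, roughly 5% of them still come out “significant,” which is exactly the lucky-researcher effect described above.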

So yes, a single test or paper is not enough to tell what is true or not. There is a need to get cumulative evidence for a model to be trustworthy.

Edit: I used the word ‘model’ to avoid the pitfall of using ‘theory’ or a similar kind of word. Thanks, @Christy for the suggestion to use the word ‘model’, in another thread.


Oh, wow, the urgency is very true! Our system forbids all drug rep visits (thank goodness; though I don’t think they meant ill), so we have not had any since our practice joined the large group, about 7 years ago. We get requests for off label use of Ozempic, etc, all the time; and that is really a struggle, as we want to help people who want to lose weight. The Vioxx problem was a good lesson for me, when they withdrew the drug from the market after an increasing risk of heart attacks was attributed to it. That also helped me realize the heart attack risk of other meds, like ibuprofen, Celebrex, etc–something I should keep in mind more. As my dad, a surgeon, used to say, “every medicine is a poison with a desired side effect.” I could have responded, “Well, Dad, isn’t every scalpel a weapon with a desired side effect?”

The potential for thyroid cancer, pancreatitis, and the fact that the appetite comes right back when stopping the med are good arguments against these.


PS: In “Evidence-Based Medicine,” which is basically the attempt to put medicine through the scientific method instead of relying on hearsay, review papers are considered stronger evidence. There are sites, like, which assess the level of evidence better. Some of the best conclusions include that “we need more studies and data.”

A nice synopsis of the 5 levels of evidence that most here are talking about is here, though I am sure that there are better descriptions out there:
Levels of evidence in research | Elsevier Author Services


In my experience, journals suffer from a lack of reviewers with expert knowledge. Sometimes you need to explain facts that are crystal clear to the experts. Of course, writing quality is essential, and non-native speakers are at a disadvantage. It’s almost impossible to write as well in a foreign language as in your own. That’s not a complaint, simply how it is.


(On the other hand, many non-native speakers write far better English than native speakers do, scientists explicitly included. :slightly_smiling_face:)


I saw this about confusion on theory strength
