Guest post by S. Stanley Young and Warren Kindzierski
A 2015 meta-analysis published in the journal PLOS One claimed that “…exposure to potentially anti-estrogenic and immunotoxic, dioxin-like congeners & phenobarbital, CYP1A and CYP2B inducers might contribute to the risk of breast cancer”. We used p-value plots to gauge the statistical reliability of this claim. One of us (Young) had email correspondence with PLOS One editorial staff over four years. Our p-value plots show that the claimed PCB−breast cancer risk is false. PLOS One editorial staff indicated the statistical approach and methods used in the meta-analysis are considered acceptable.
The business model of many journals is that the author pays a publishing fee. Further, the incentives of the author and publisher can work against sound science and the interest of the public. Part of the problem with PLOS One editors showing no real concern for publishing a false PCB−breast cancer risk claim (or any other original claim) based on meta-analysis may be that the publishing process is lucrative for them. Overall, this experience informs us that it is time for the public to view any meta-analysis published in the journal PLOS One as untrustworthy until proven otherwise.
False findings in medical research are far too common. John Ioannidis and others noted back in 2011 that “…traditional areas of epidemiologic research more closely reflect the performance settings and practices of early human genome epidemiology …showing at least 20 false-positive results for every one true-positive result”. We recently reported that published estimates of irreproducible medical research range anywhere from 51–100% depending upon the discipline.
A meta-analysis is a method used to analyze evidence that answers a specific research question, such as whether a particular risk factor causes a disease. It combines test statistics from multiple individual studies found in the literature that all asked the same question. Meta-analysis is considered by many, perhaps mistakenly, to be the cream of the crop in methodologies for synthesizing evidence in published research (e.g., see here, here).
However, we have noted elsewhere – really, anywhere we care to look – that findings of meta-analysis studies in the environmental epidemiology field are without sound statistical evidence, mostly false. For example, see here, here, here. Why is this? Well, among other things, it is due to routine use of questionable research practices (such as analysis manipulation, p-hacking, HARKing, etc.).
Here we describe our experience of how journals and their editors work to preserve false findings in the meta-analysis studies they publish. We show this using a 2015 meta-analysis published in the journal PLOS One.
PLOS One meta-analysis
Back in 2018, one of us, Young, looked at a meta-analysis published in the journal PLOS One… “Environmental polychlorinated biphenyl exposure and breast cancer risk: A meta-analysis of observational studies” (Zhang et al. 2015). The meta-analysis claimed that “…exposure to potentially anti-estrogenic and immunotoxic, dioxin-like congeners & phenobarbital, CYP1A and CYP2B inducers might contribute to the risk of breast cancer”. This claim seemed rather incredible given that it was based on environmental epidemiology observational studies.
We have reported on how to independently evaluate the statistical reliability of meta-analysis studies using p-value plots (see here). A p-value plot is easy to construct and it is interpreted in the following way… if p-values roughly fall on a 45-degree line in the plot, they support randomness (no real effect). If the p-values are mostly smaller than 0.05, they support a real effect. A bilinear, hockey-stick-shaped p-value plot indicates ambivalence (uncertainty) in an effect.
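The mechanics of such a plot can be sketched in a few lines of code. This is a minimal illustration under our reading of the method – the p-values below are made up for demonstration and are not the Zhang et al. data: sort the study p-values, pair each with its rank, and see whether the points track a straight line.

```python
import numpy as np

def p_value_plot_points(p_values):
    """Return (rank, sorted p-value) pairs for a p-value plot."""
    p_sorted = np.sort(np.asarray(p_values, dtype=float))
    ranks = np.arange(1, len(p_sorted) + 1)
    return ranks, p_sorted

# Made-up p-values spread roughly uniformly on [0, 1], the pattern
# expected when there is no real effect (illustrative only).
p_vals = [0.62, 0.08, 0.35, 0.91, 0.17, 0.49, 0.73, 0.26, 0.84, 0.55]
ranks, p_sorted = p_value_plot_points(p_vals)

# Plot ranks (x) against p_sorted (y), e.g. with matplotlib. Under
# randomness the k-th smallest of n p-values sits near k / (n + 1),
# so the points climb steadily along a straight line toward 1.
for k, p in zip(ranks, p_sorted):
    print(k, p)
```

If instead most of the sorted p-values hug the bottom of the plot (below 0.05), the same construction supports a real effect.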
Young emailed a Zhang et al. co-author from China and cc’d a PLOS One editor from the US asking for further information about their Figure 2 (Forest plot describing the association between total PCB exposure and breast cancer risk). Young constructed a p-value plot from their Figure 2 data. It is shown below. The plot clearly shows a near 45-degree line, or no real effect, between total PCB exposure and breast cancer risk!
P-value plot for base studies describing the association between total PCB exposure and breast cancer risk (Zhang et al. Figure 2):
Young then emailed a publications assistant at PLOS One, attached the p-value plot, and stated that the Zhang et al. PCB−breast cancer claim was not supported by the p-value plot. By that point PLOS One had opened a case file on the issue. The publications assistant passed along Young’s concern to the Academic Editor who had originally handled the manuscript.
Now fast forward four years, to April of this year. A PLOS One staff editor finally emailed Young back. The staff editor indicated that the PLOS One Editorial Board with expertise in meta-analysis had looked further into Young’s concern. The staff editor stated… “Based on this assessment, we consider that the authors do not imply an effect of total PCB exposure on breast cancer, based on the data in Figure 2. In light of this, we will not be pursuing this case further at this time.”
Young then emailed the staff editor back and explained that the multiple testing that Zhang et al. did increases the chances of them getting a statistically significant, but false, finding among their results. Several days later a different staff editor responded to Young by email and stated… “PLOS ONE abides by guidelines set forth by the Committee on Publication Ethics (COPE), of which this journal is a member. We have followed up on these additional concerns in consultation with the Editorial Board, which assessed that the statistical approach and meta-analytical methods used are considered acceptable. Therefore, no editorial action will be taken on the published article.”
Four years, a couple of vague follow-up emails from two PLOS One staff editors, and no correspondence from anyone with actual statistical knowledge of the problems associated with multiple testing. Now we really wanted to know how deep the statistical problems went in the Zhang et al. study.
We constructed a p-value plot from their Figure 4 (Forest plot describing the association between potentially antiestrogenic and immunotoxic, dioxin-like PCBs exposure and breast cancer risk). We also did a plot from their Figure 5 (Forest plot describing the association between phenobarbital, CYP1A and CYP2B inducers, biologically persistent PCBs exposure and breast cancer risk). Figures 4 and 5 represent the key evidence used by Zhang et al. to make their claim. Our p-value plots are shown below.
P-value plot for base studies describing the association between phenobarbital, CYP1A and CYP2B inducers, biologically persistent PCBs exposure and breast cancer risk (Zhang et al. Figure 5):
Both of these plots show near 45-degree lines, or no real effects! We just independently demonstrated that the Zhang et al. PCB−breast cancer risk claim is false. So much for the PLOS One Editorial Board with expertise in meta-analysis being able to recognize this. Perhaps their Board expertise is thin in the area of statistics, or perhaps they do not want to admit that these problems exist in their published meta-analysis studies?
Statistics are an important contributor to false (irreproducible) research. Douglas Altman – one of the most highly cited researchers in any scientific discipline (see here) and a long-time chief statistical adviser for the British Medical Journal – noted way back in 1998 that… “The main reason for the plethora of statistical errors [in research] is that the majority of statistical analyses are performed by people with an inadequate understanding of statistical methods” and “…they are then peer reviewed by people who are generally no more knowledgeable”. It appears not much has changed.
Big money business of publishing meta-analysis studies
We know most published research is false; but just how motivated are journals (and their editors) to fix this? Let’s look at the case of meta-analysis. We used the Advanced Search Builder capabilities of the freely available PubMed to estimate the number of systematic review and meta-analysis studies published in the journal PLOS One from 2012 to present (3 May 2022). We used the exact terms (“PLOS One”[Journal]) AND ((systematic review[Title/Abstract]) AND (meta-analysis[Title/Abstract])).
Our search returned 2,484 articles (240 articles per year; 20 per month). PLOS One currently levies a fee of $1,805 USD to publish a meta-analysis original research article. This amounts to ~$36K per month (~$430K annually) of revenue from publishing meta-analysis studies going forward – a cash cow!
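The back-of-the-envelope arithmetic behind these figures is easy to check. This is a sketch: the article count and fee come from the search above, while the 10.33-year window (January 2012 through early May 2022) is our assumption.

```python
articles = 2484      # PubMed hits for PLOS One, 2012 to 3 May 2022
years = 10.33        # assumed window: ~Jan 2012 through early May 2022
fee_usd = 1805       # PLOS One publication fee per article

per_year = articles / years           # articles published per year
per_month = per_year / 12             # articles published per month
monthly_revenue = per_month * fee_usd
annual_revenue = per_year * fee_usd

print(round(per_year))                       # 240 articles per year
print(round(per_month))                      # 20 articles per month
print(f"${monthly_revenue:,.0f} per month")  # roughly $36K
print(f"${annual_revenue:,.0f} per year")    # roughly $430K
```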
We know the journal peer review process is broken and there is little incentive to fix it. We know the business model of journals depends upon publishing, preferably numerous studies as cheaply as possible. We also know that journals seek novelty in research partly because of the competition for impact factor and prestige. In fact, editors are often rewarded for actions that increase the prestige of their journal.
Given all this, what journal would want to mess with ~$430K of annual revenue from publishing meta-analyses with claims that are mostly false? The answer is obvious… journals and their editors will do what is needed to maintain the status quo (even if it means repeatedly publishing false research claims). It is far too lucrative a game to want to change.
Richard Smith, a long-time editor of the British Medical Journal, was a cofounder of the Committee on Publication Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office. Last year he best summarized how we should treat medical research… “It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary”.
The business model of many journals promotes incentives for the author and publisher to work against sound science and the interest of the public. Our position is that it is time for the public to view any meta-analysis published in the journal PLOS One as untrustworthy until proven otherwise.
S. Stanley Young is with CGStat in Raleigh, North Carolina and is the Director of the National Association of Scholars’ Shifting Sands Project. Warren Kindzierski is a retired professor in St Albert, Alberta.