"Long before it's in the papers"
January 27, 2015


Pressure to publish may bias scientists

April 24, 2010
Courtesy of Public Library of Science
and World Science staff

The quality of scientific research may be suffering because scholars are under pressure to get their work published in scientific journals, a new analysis suggests.

The study found that the fraction of U.S.-published research papers claiming “positive” results—those that may indicate an actual discovery—is markedly higher when the authors are from states whose academics publish more often. Depending on the state, the share of positive results ranged from less than half to over 95 percent.

The quality of scientific research may be suffering because scholars are under pressure to get their work published in scientific journals, a new analysis suggests. (Image courtesy L. Livermore Nat'l Lab)

The findings were reported in the online research journal PLoS One on April 21 by Daniele Fanelli of the University of Edinburgh in Scotland.

“Publish or perish,” an aphorism widely known in academia, expresses the very real fact that scientists must publish their work continuously to secure jobs and funding, Fanelli noted. Careers are judged by the sheer number of papers someone has published and by how many times these are cited in later papers—though citation counts are a hotly debated measure of scientific quality.

But papers are more or less likely to be accepted by journals, and to be cited, depending on the results they report. Like a hit song, more interesting results tend to make further headway. Thus scientists are “torn between the need to be accurate and objective and the need to keep their careers alive,” Fanelli said.

Fanelli analysed over 1,300 papers that claimed to have tested a hypothesis, spanning disciplines from physics to sociology, all with U.S.-based main authors. Using data from the National Science Foundation, he then checked whether the papers’ conclusions were linked to their states’ productivity, measured by the average number of papers published per academic.

Results were more likely to “support” the hypothesis under investigation, Fanelli found, when the paper was from a “productive” state. That suggests, he said, that scientists working in more competitive and productive environments are more likely to make their results look positive. It’s unclear whether they do this by writing the papers differently or by tweaking the underlying data, Fanelli said.

“The outcome of an experiment depends on many factors, but the productivity of the U.S. state of the researcher should not, in theory, be one of them,” explained Fanelli. “We cannot exclude that researchers in the more productive states are smarter and better equipped, and thus more successful, but this is unlikely to fully explain the marked trend observed.” The study results were independent of funding availability, he said.

Positive results were less than half the total in Nevada, North Dakota and Mississippi. At the other extreme, states including Michigan, Ohio, the District of Columbia and Nebraska had between 95 and 100 percent positive results, a rate that seems unrealistic even for the most outstanding institutions, Fanelli said.

These conclusions could apply to all scientifically advanced countries, he added. “Academic competition for funding and positions is increasing everywhere,” said Fanelli. “Policies that rely too much on cold measures of productivity might be lowering the quality of science itself.”

* * *

