"Long before it's in the papers"
November 09, 2015

RETURN TO THE WORLD SCIENCE HOME PAGE


Scientists could aid discovery by placing bets, study suggests

Nov. 4, 2015
Courtesy of Harvard University
and World Science staff

Scientists could help steer their fields in the right direction if large groups of them get together and bet on which new results seem most believable, a study suggests.

Just one scientific result doesn’t mean much. To know whether it’s valid, the experiment needs to be repeated many times with the same result. But people are people; mistakes, flukes and even fraud happen; and there is no time or money to re-run every single study. As a result, irreproducible research regularly finds its way into even respected scientific journals.

This is especially problematic for drug trials and other clinical research. A recent estimate put the costs associated with irreproducible pre-clinical research at $28 billion a year in the United States.

In the new study, the researchers sought a way to identify the published findings of greatest concern, those most in need of re-testing.

The researchers, Yiling Chen, a computer scientist at Harvard University, and colleagues, turned to prediction markets: investment platforms that reward traders for correctly predicting future events. The team chose 44 studies published in prestigious journals that were in the process of being re-tested or whose re-test results weren’t yet known. The markets correctly predicted replicability in 71 percent of the cases studied.

“This research shows for the first time that prediction markets can help us estimate the likelihood of whether or not the results of a given experiment are true,” said Chen. “This could save institutions and companies time and millions of dollars in costly replication trials and help identify which experiments are a priority to re-test.”

Sixty-one percent of the replications used in this study did not reproduce the original results.

“Top psychology journals seem to focus on publishing surprising results rather than true results,” said Anna Dreber, of the Stockholm School of Economics and a co-author of the paper. “Surprising results do not always hold up under re-testing. There are different stages at which an hypothesis can be evaluated and given a probability that it is true. The prediction market helps us get at these probabilities.”

The research was published in the Proceedings of the National Academy of Sciences.

Prediction markets are gaining popularity in a number of realms beyond economics, especially in politics. In prediction markets, investors predict future events by buying shares in the outcomes of those events. Greater demand for a particular outcome drives up its price, so the price indicates what the crowd thinks the probability of the event is.
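
How a share price translates into a crowd probability is simple arithmetic. Below is a minimal sketch, in Python, of that mapping for a binary market, assuming the standard convention that each share pays 100 cents if the study replicates; the payout rule and the 28-cent example are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: binary prediction-market price -> implied crowd probability.
# Assumes the standard convention that each "reproducible" share pays
# 100 cents if the study replicates and 0 cents if it does not.

def implied_probability(price_cents: float) -> float:
    """A share trading at p cents implies a crowd probability of about p/100."""
    if not 0 < price_cents < 100:
        raise ValueError("binary share prices lie strictly between 0 and 100 cents")
    return price_cents / 100.0

# Example: a share trading at 28 cents suggests the crowd gives the
# finding roughly a 28 percent chance of replicating.
print(implied_probability(28))  # 0.28
```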

Pollsters and pundits are relying more and more on prediction markets to forecast elections and other events, because such markets draw on the average answer of a group of well-informed participants, otherwise known as the wisdom of the crowd.

The researchers set up a market for each study and provided their pool of traders, all psychologists, with $100 to invest. Armed with information about each market, including the original publication and their knowledge of the field, the participants invested anywhere between 1 and 99 cents on the outcome of the event: in this case, whether or not the research could be reproduced.

If the price of “reproducible” shares is low when the market closes, that means most people in the field don’t believe the experiment can be replicated.
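
To make the settlement logic concrete, here is a hedged sketch of the payoff arithmetic, assuming the usual binary-market rule that a share bought at p cents pays 100 cents if the study replicates and nothing otherwise; the rule and the prices shown are assumptions for illustration, not details taken from the study’s platform.

```python
# Sketch of settlement for one "reproducible" share, under the assumed
# rule: pays 100 cents if the study replicates, 0 cents otherwise.

def net_payoff(price_cents: float, replicated: bool) -> float:
    """Net gain or loss, in cents, from buying one share at price_cents
    and holding it until the replication attempt settles the market."""
    payout = 100.0 if replicated else 0.0
    return payout - price_cents

# Buying at 30 cents: gain 70 cents if the study replicates,
# lose the 30-cent stake if it does not.
print(net_payoff(30, replicated=True))   # 70.0
print(net_payoff(30, replicated=False))  # -30.0
```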

“One of the advantages of the market is that participants can pick the most attractive investment opportunities,” said Thomas Pfeiffer, a co-author and professor of computational biology at the New Zealand Institute for Advanced Study. “If the price is wrong and I’m confident I have better information than anyone else, I have a strong incentive to correct the price so I can make more money. It’s all about who has the best information.”
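
Pfeiffer’s incentive argument can be expressed as a one-line expected-value calculation. The sketch below, with made-up numbers rather than figures from the study, shows why a trader whose probability estimate differs from the market price profits, on average, by trading, which is what pulls the price toward the better information.

```python
# Sketch of the incentive to correct a mispriced market. The 40-cent
# price and 65 percent belief are illustrative assumptions.

def expected_profit(price_cents: float, my_probability: float) -> float:
    """Expected net profit, in cents, of buying one share that pays
    100 cents if the event occurs, given my own probability estimate."""
    return my_probability * 100.0 - price_cents

# Market price is 40 cents (an implied 40 percent chance), but I
# believe the true chance is 65 percent: buying has an expected
# profit of 25 cents per share, and my buying pushes the price up.
print(expected_profit(40, 0.65))  # 25.0
```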

“Our research showed that there is some ‘wisdom of the crowd’ among psychology researchers,” said Brian Nosek, co-author and professor of psychology at the University of Virginia. “Prediction accuracy of 70 percent offers an opportunity for the research community to identify areas to focus reproducibility efforts to improve confidence and credibility of all findings.”
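
The roughly 70 percent accuracy figure can be thought of as a simple classification score: a market counts as correct when its closing price falls on the right side of 50 cents. The sketch below uses made-up closing prices and outcomes, not the study’s data, to show the calculation.

```python
# Sketch of the accuracy calculation: a market is "correct" when its
# closing price is above 50 cents for a study that replicated, or below
# 50 cents for one that did not. All numbers here are placeholders.

markets = [
    # (closing price of "reproducible" share in cents, did it replicate?)
    (72, True), (35, False), (60, False), (18, False), (81, True),
]

correct = sum((price > 50) == replicated for price, replicated in markets)
print(f"accuracy: {correct / len(markets):.0%}")  # 80% for these placeholders
```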

The next step in the research, the investigators said, is to test whether this might work in other fields, such as economics and cell biology.


