

Researchers wanted: Humans need not apply?

April 7, 2009
Courtesy National Science Foundation
and World Science staff

The unintended consequences of handing tasks too hard or dangerous for humans over to computers and robots are a popular science fiction plot line.

Yet scientists are increasingly doing just that, creating automated systems and devices that can not only help collect, organize and analyze scientific data, but that are also able to draw up new hypotheses and approaches to research based on the data they receive.

In an article in the April 7 issue of the research journal Science, David Waltz of the Center for Computational Learning Systems at Columbia University and Bruce G. Buchanan of the computer science department at the University of Pittsburgh discuss this new world of scientific research and its implications for the way science is conducted.

They see this all as a promising trend, but caution that researchers need to consider which tasks are best suited for automation and which should be left to the human mind.

Waltz and Buchanan note that computer-aided automation has been a part of research for decades, from simple programs that plotted ballistic arcs to databases that held and organized scientific data. All of these systems, however, required a “human in the loop” to shape the research, examine the results and determine how to apply the outcome to future endeavors.

Now the frontiers of automation can make the human scientist seem obsolete. Waltz and Buchanan write that “it is possible for one computer program ... to conduct a continuously looping procedure that starts with a question, carries out experiments to answer the question, evaluates the results, and reformulates new questions.”
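The closed loop the authors describe can be sketched as a toy program. The hidden-threshold “experiment,” the function names and the numbers below are purely our own illustration of the question–experiment–evaluation–reformulation cycle, not anything from the Science article:

```python
def run_experiment(dose, true_threshold=0.73):
    """Stand-in for a lab measurement: does this dose trigger a response?

    In a real automated system this would drive an instrument; here the
    'law of nature' is just a hidden threshold the program does not know.
    """
    return dose >= true_threshold


def automated_study(lo=0.0, hi=1.0, rounds=20):
    """One looping procedure: question -> experiment -> evaluate -> new question."""
    for _ in range(rounds):
        question = (lo + hi) / 2           # "Does dose X trigger a response?"
        result = run_experiment(question)  # carry out the experiment
        if result:                         # evaluate the outcome...
            hi = question                  # ...threshold is at or below X
        else:
            lo = question                  # ...threshold is above X
        # ...and the next iteration asks a refined question.
    return (lo + hi) / 2                   # best current estimate


estimate = automated_study()
```

After twenty rounds of this loop, the program has narrowed its estimate of the unknown threshold to within about a millionth of its true value, without a human ever choosing the next experiment.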

The authors argue that these new systems are arriving just when they are needed most. As sensors and other instruments get more capable and complex, the scientific world is drowning in data, and having computer-based assistants that can actively sift through the data may be the only way to make sense of it all.

According to Waltz and Buchanan, the prospect of automating science also raises a number of questions that need to be considered as these new technologies become widely used, such as how we determine what to automate, what should be left to human intervention, and how this newly automated research will affect the results and the scientific process. It is also possible, Waltz and Buchanan suggest, that these new tools will generate even more data to be considered, and will therefore contribute to one of the problems they are meant to solve.

Moving forward, the authors suggest that the best approach is to think of these tools as intelligent assistants that can do different types of tasks associated with scientific research. Scientists can then determine which assistants are the best choice for different aspects of their research.

So, does employing these automated assistants mean that students studying science should consider another major? The authors say no, indicating that for all of their capabilities, automated science systems will not do to researchers what robots have done to autoworkers, though they will change how scientists do their jobs. “Regardless of specialty,” the authors write, “scientists may need to add knowledge and skills in artificial intelligence, machine learning, and knowledge representation.”

