Giving robots the ability to deceive

Sept. 9, 2010
Courtesy of Georgia Tech 
and World Science staff

A robot tricks an enemy soldier by creating a false trail and then hiding. While this sounds like a scene from one of the Terminator movies, it’s actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed study of robot deception.

A black and a red robot play a game of hide-and-seek. (Courtesy Georgia Tech)


Computer programs newly developed at Georgia Tech “allow a robot to determine whether it should deceive a human or other intelligent machine,” said Ronald Arkin, a computer scientist at the university. They also “help the robot select the best deceptive strategy to reduce its chance of being discovered.”

The techniques developed by Arkin and colleagues are meant to let a robot deceive another robot, but the principles involved would also apply to robot-human interactions, the researchers said. Results were published online on Sept. 3 in the International Journal of Social Robotics. The research was funded by the U.S. Office of Naval Research.

Robots capable of deception may be useful in various areas, including military and search and rescue operations, researchers say. A search and rescue robot may need to deceive in order to calm or receive cooperation from a panicking victim. Robots on the battlefield with the power of deception would be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.

“Most social robots will probably rarely use deception, but it’s still an important tool in the robot’s interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception,” said the study’s co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.

For the study, the researchers focused on the actions, “beliefs” and communications of a robot trying to hide from another robot. Their first step was to teach the deceiving machine how to recognize a situation warranting deception. Wagner and Arkin used approaches known as interdependence theory and game theory to develop formulas that tested the value of deception in a specific situation. A situation had to satisfy two key conditions to warrant deception: there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.

Once a situation was deemed to warrant trickery, the robot carried it out by providing false information to benefit itself. The robot’s choice of deceptive action was based on its understanding of the individual robot it was attempting to deceive.
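As a rough illustration of that two-condition test, the sketch below encodes a toy hide-and-seek outcome matrix and checks whether deception is warranted. This is not the authors’ code; the payoff values, dictionary layout and function names are invented for the example.

```python
# Minimal sketch of the two-condition deception test described above
# (hypothetical names and payoffs, not the published algorithm).

# Outcome matrix for a toy hide-and-seek interaction.
# Key: (hider_signal, seeker_response); value: (hider_payoff, seeker_payoff).
OUTCOMES = {
    ("true",  "follow_signal"): (-1.0,  1.0),   # honest trail, seeker finds the hider
    ("true",  "ignore_signal"): ( 1.0, -1.0),   # honest trail, seeker searches elsewhere
    ("false", "follow_signal"): ( 1.0, -1.0),   # decoy trail, seeker is misled
    ("false", "ignore_signal"): (-1.0,  1.0),   # decoy trail, seeker stumbles on the hider
}

def in_conflict(outcomes) -> bool:
    """Condition 1: no joint outcome is simultaneously best for hider and seeker."""
    best_for_hider = max(p[0] for p in outcomes.values())
    best_for_seeker = max(p[1] for p in outcomes.values())
    return not any(h == best_for_hider and s == best_for_seeker
                   for h, s in outcomes.values())

def deception_pays(outcomes) -> bool:
    """Condition 2: assuming the seeker trusts the trail, deceiving leaves the
    hider better off than signalling honestly."""
    return (outcomes[("false", "follow_signal")][0]
            > outcomes[("true", "follow_signal")][0])

def should_deceive(outcomes) -> bool:
    # Deceive only when both conditions hold.
    return in_conflict(outcomes) and deception_pays(outcomes)

print(should_deceive(OUTCOMES))  # -> True for this toy matrix
```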

The researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to places where the robot could hide. A hider robot randomly chose a hiding place and moved there, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider’s location to the seeker robot.

“The hider’s set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left,” said Wagner. The hider robots managed to deceive the seekers in three-fourths of trials, with failures resulting from the hiding robot’s inability to knock over the markers that would produce the desired effect.
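The false-trail idea can also be sketched in a few lines. The path layout, marker names and seeker behavior below are invented for illustration and simplify the physical experiment; they are not the researchers’ implementation.

```python
# Toy sketch of the false-trail selection (hypothetical layout, not the authors' code).
# The hider knocks over the markers of a decoy path, then hides along a different one.

import random

PATHS = {"left": ["L1", "L2"], "center": ["C1", "C2"], "right": ["R1", "R2"]}

def choose_deceptive_plan(paths):
    """Return (decoy path to advertise, its markers to knock over, real hiding path)."""
    decoy = random.choice(list(paths))
    real = random.choice([p for p in paths if p != decoy])
    return decoy, paths[decoy], real

def seeker_guess(knocked_over, paths):
    """The seeker trusts the trail: it searches the path whose markers are all down."""
    for name, markers in paths.items():
        if all(m in knocked_over for m in markers):
            return name
    return random.choice(list(paths))  # no clear trail: guess at random

decoy, markers, real = choose_deceptive_plan(PATHS)
guess = seeker_guess(set(markers), PATHS)
print(f"hider at {real!r}, seeker searches {guess!r}, deception worked: {guess != real}")
```

In the actual experiments, failures came from the physical side of this step: the hider sometimes could not knock over the markers needed to produce the false trail.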

The results “weren’t perfect, but they demonstrated the learning and use of deception signals by real robots,” said Wagner. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”

There are also ethical implications that need to be considered to ensure that these creations don’t harm society, the researchers said. “We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception,” explained Arkin. “We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems.”

