

7 April 2012 | NewScientist

TECHNOLOGY

Catch me if you can

Crime in virtual worlds is boosting demand for ways to track antisocial avatars – a form of biometrics may help

Jacob Aron

A POLICE car rolls up to a house where the doors and windows are smashed in, rooms are ransacked and numerous high-value items are missing. Calming the home-owner, an officer begins to investigate: “Did you see the person who did it?” The shaken victim replies: “Yes, he had massive purple dreadlocks, green lips and was dressed like Michael Jackson.”

Such an unusual perpetrator would be easy to identify in the physical realm, but this break-in took place in a virtual world, where odd-looking avatars are the norm. It may sound like an odd crime, but Japanese police have previously arrested virtual muggers, and the FBI has investigated casinos based in the virtual world of Second Life.

Virtual crime will become more common as we venture further into these worlds, says computer scientist Roman Yampolskiy. To prevent this, multinational defence firm Raytheon, based in Waltham, Massachusetts, has a patent pending on fusing a person’s real biometrics with their 3D avatar, so you know for sure who you are speaking to in a digital world.

Yampolskiy and colleagues at the Cyber-Security Lab at the University of Louisville in Kentucky are going one step further: they are developing the field of artificial biometrics, or “artimetrics”. Like human biometrics, artimetrics would serve to authenticate and identify non-biological agents such as avatars, physical robots or even chatbots (see “Spot the bad bot”, below).

In Second Life, avatars are easily identified by their usernames, meaning police can simply ask San Francisco-based Linden Lab, which runs the virtual world, to look up a particular user. But what happens when virtual worlds start running on peer-to-peer networks, leaving no central authority to appeal to? Then there would be no way of linking an avatar username to a human user.

Yampolskiy and colleagues have developed facial recognition techniques specifically tailored to avatars, since current algorithms only work on humans. “Not all avatars are human looking, and even with those that are humanoid there is a huge diversity of colour,” Yampolskiy says, so his software uses those colours to improve avatar recognition.

The team also investigated matching a human face to an avatar generated from that face – previous studies show that avatars often resemble their owners. Combining their colour-based technique with existing facial recognition software produced the best results, suggesting it might be possible to track someone between the physical and virtual worlds. They will present the work at the Defense, Security, and Sensing conference in Baltimore, Maryland, this month.
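
Neither the article nor the conference listing spells out how the colour features are built or blended with a conventional matcher, so the following Python sketch is only an illustration of score-level fusion under stated assumptions: the RGB histogram, the histogram-intersection measure, the 50/50 weight and all function names are invented here, and the face embeddings are assumed to come from any off-the-shelf recognition model.

import numpy as np

def colour_histogram(face_pixels, bins=8):
    # 3D RGB histogram of a face crop (H x W x 3 array, values 0-255),
    # normalised so the bins sum to 1.
    hist, _ = np.histogramdd(
        face_pixels.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    # Overlap between two normalised histograms: 1.0 means identical palettes.
    return float(np.minimum(h1, h2).sum())

def fused_similarity(face_a, face_b, emb_a, emb_b, colour_weight=0.5):
    # Score-level fusion: blend colour similarity with the cosine similarity
    # of embeddings from any standard face recogniser. The 50/50 weight is an
    # arbitrary placeholder, not a published value.
    colour_sim = histogram_intersection(
        colour_histogram(face_a), colour_histogram(face_b)
    )
    cosine = np.dot(emb_a, emb_b) / (
        np.linalg.norm(emb_a) * np.linalg.norm(emb_b)
    )
    emb_sim = 0.5 * (cosine + 1.0)  # rescale from [-1, 1] to [0, 1]
    return colour_weight * colour_sim + (1 - colour_weight) * emb_sim

A colour histogram is a plausible stand-in because it is cheap, insensitive to pose and keys on exactly the trait Yampolskiy highlights: an avatar’s unusual palette rather than its facial geometry.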

Yampolskiy also plans to develop recognition algorithms for robots. He says that autonomous robots may one day – at least in Japan – become ubiquitous enough to require identification of their own, separate from humans.

Artimetrics probably have no immediately useful applications, says Tony Mansfield, a biometrics specialist at the UK’s National Physical Laboratory in London. He says it is unlikely that virtual worlds or robots will ever become completely decentralised, which means companies should still be able to provide unique identifiers, rendering Yampolskiy’s approach unnecessary. “This should be as easy as identifying a barcode, rather than as hard as biometrics,” he says.

[Image: Appearances can be deceptive (photo: Michael Barton)]


Spot the bad bot

Many businesses employ chatbots to handle customer queries, but criminals also use them to trick people into giving out account information. So how do you ensure you’re talking to a “good” bot?

Roman Yampolskiy and colleagues at the University of Louisville, Kentucky, fed text written by chatbots into software originally designed to identify human authors, and found they could often recognise the chatbot responsible. Identifying chatbots that change over time proved more difficult, however.

Yampolskiy is now looking for signature features that identify particular chatbots, suggesting that software could be developed to weed out the bad bots.
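
The article does not name the authorship-identification software the team repurposed, so here is a generic stand-in assembled from standard text-classification tools: character n-gram features feeding a linear classifier. The bot names, sample utterances and parameters are all invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: utterances labelled with the (invented) bot that wrote them.
# Real attribution would need far more text per chatbot.
texts = [
    "Hello! How may I assist you with your account today?",
    "I'm sorry, I didn't quite catch that. Could you rephrase?",
    "greetings friend, click here to verify your login details",
    "urgent!! confirm your password now or lose access",
]
labels = ["support_bot", "support_bot", "phish_bot", "phish_bot"]

# Character n-grams capture punctuation, casing and spelling habits --
# stylistic fingerprints that persist even when the topic changes.
attributor = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
attributor.fit(texts, labels)

print(attributor.predict(["please verify your password here"]))
# -> ['phish_bot'] on this toy data

A bot that is retrained or rewritten would drift away from the n-gram profile learned here, which is one plausible reading of why chatbots that change over time proved harder to identify.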
