
MSNBC - files.meetup.com



  MSNBC.com

What will happen when machines outthink us?
At Singularity Summit, futurists say now is the time to plan our strategy
By Marcus Wohlsen
The Associated Press
Updated: 1:16 p.m. ET Sept 9, 2007

SAN FRANCISCO - At the center of a black hole there lies a point called a singularity where the laws of physics no longer make sense.

In a similar way, according to futurists gathered Saturday for a weekend conference, information technology is hurtling toward a point where machines will become smarter than their makers. If that happens, it will alter what it means to be human in ways almost impossible to conceive, they say.

"The Singularity Summit: AI and the Future of Humanity" brought together hundreds of Silicon Valley techies and scientists to imagine a future of self-programming computers and brain implants that would allow humans to think at speeds nearing today's microprocessors.

Artificial intelligence researchers at the summit warned that now is the time to develop ethical guidelines for ensuring these advances help rather than harm.

"We and our world won't be us anymore," Rodney Brooks, a robotics professor at the Massachusetts Institute of Technology, told the audience. When it comes to computers, he said, "who is us and who is them is going to become a different sort of question."

Eliezer Yudkowsky, co-founder of the Palo Alto-based Singularity Institute, which organized the summit, focuses his research on the development of so-called "friendly artificial intelligence." His greatest fear, he said, is that a brilliant inventor creates a self-improving but amoral artificial intelligence that turns hostile.

T-minus 22 years?

The first use of the term "singularity" to describe this kind of fundamental technological transformation is credited to Vernor Vinge, a California mathematician and science-fiction author.

High-tech entrepreneur Ray Kurzweil raised the profile of the singularity concept in his 2005 book "The Singularity is Near," in which he argues that the exponential pace of technological progress makes the emergence of smarter-than-human intelligence the future's only logical outcome.

Kurzweil, director of the Singularity Institute, is so confident in his predictions of the singularity that he has even set a date: 2029.

Most "singularists" feel they have strong evidence to support their claims, citing the dramatic advances in computing technology that have already occurred over the last 50 years.

In 1965, Intel co-founder Gordon Moore accurately predicted that the number of transistors on a chip should double about every two years. By comparison, singularists point out, the entire evolution of modern humans from primates has resulted in only a threefold increase in brain capacity.

With advances in biotechnology and information technology, they say, there's no scientific reason that human thinking couldn't be pushed to speeds up to a million times faster.

Is the 'nerdocalypse' near?

Some critics have mocked singularists for their obsession with "techno-salvation" and "techno-holocaust," or what some wags have called the coming "nerdocalypse." Their predictions are



grounded as much in science fiction as science, the detractors claim, and may never come to pass.

But advocates argue it would be irresponsible to ignore the possibility of dire outcomes.

"Technology is heading here. It will predictably get to the point of making artificial intelligence," Yudkowsky said. "The mere fact that you cannot predict exactly when it will happen down to the day is no excuse for closing your eyes and refusing to think about it."

© 2007 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

URL: http://www.msnbc.msn.com/id/20676037/
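The transistor-versus-brain comparison the singularists cite rests on simple doubling arithmetic. A back-of-envelope sketch (the figures are worked out here, not taken from the article):

```python
# Transistor counts doubling every two years for 50 years, versus a
# roughly threefold gain in brain capacity over human evolution.

YEARS = 50
DOUBLING_PERIOD = 2                    # years per doubling, per Moore

doublings = YEARS // DOUBLING_PERIOD   # 25 doublings in 50 years
transistor_growth = 2 ** doublings     # ~33.5-million-fold increase
brain_growth = 3                       # threefold, per the article

print(transistor_growth)               # 33554432
```

Even granting the rough numbers, that is a gap of some seven orders of magnitude between the two growth rates being compared.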

Will the Singularity Make Us Happier?
By Brandon Keim - May 30, 2008

After listening to futurist Ray Kurzweil speak, one is left less with concrete impressions than a general sensibility, one that's difficult to articulate but reinforced by certain words that he uses again and again: expansion, doubling and -- perhaps most significantly -- predictable. It's a bit like being lectured by a salesman or an evangelist. And I think there's a nugget of truth to the latter characterization. At the beginning of his talk yesterday at the World Science Festival, he mentioned intelligent design:

"A more sophisticated take," he said, "is that the laws of physics are a form of intelligent design. Of course, the designer might be an adolescent in some other universe, and our universe is just a science fair."

Not that Kurzweil necessarily believes this, or even that it would matter if he did. His accomplishments are many, and he probably forgets more before breakfast than I figure out in a week. But something about Kurzweil's certainty brings out the contrarian in me, or at least the skeptic. Of course, my inner skeptic makes plenty of mistakes -- so take all this with as many grains of salt as you'd like.

Kurzweil's talk recapitulated the narrative for which he's known: the steady growth of computing power and sheer reality-describing data will eventually give scientists an unprecedented understanding of biological systems, including the human body, and the ability to hack it in ways that may ultimately defy death. This process follows an exponential growth curve, one that's seen elsewhere in history, most notably in the progression of life from eukaryotic cells through the Cambrian explosion and finally to us, Homo sapiens, who are poised at the point where things are about to shoot straight up.

Kurzweil's confidence is tremendous. At one point, neuroscientist V.S. Ramachandran, who had delivered the talk before his, expressed doubt that we could quickly reverse-engineer what is fundamentally a hacked system, with evolution taking advantage of multiple shortcuts and multifunctionalities and all-purpose jerry-rigging. "God is a hacker, not an engineer -- and that's a problem we'll have to confront," he said. And Kurzweil's response was simply that it wouldn't be a problem.

Some of Kurzweil's predictions I'm perfectly willing to bet on. The ascendance of solar energy, for example: solar panel efficiency has been doubling every two years, and Kurzweil says that only seven more doublings are needed before the sun meets humanity's energy needs. Likewise, nanotech-based therapies are moving from the lab into early-stage clinical trials, and look quite promising.



But can we jump from these examples, from the exponential curves Kurzweil assembled to depict various biological and economic and social phenomena, to the Singularity -- a point at which our tools are so proficient at making themselves that more-than-human intelligences emerge, and change is so accelerated that we can barely make sense of it?

This seems to require a certain faith. Faith is often rewarded, but it also tends to have blind spots. And in Kurzweil's talk, the blind spot appeared to be the human condition. At one point he predicted that we would soon be able to inactivate genes responsible for fat storage, which were useful on the savannah but not in an age of dietary abundance. But is this really the best approach? Doesn't it make more sense to simply eat less, especially when dietary insufficiency is still a reality for billions of people?

I know this criticism is a bit nit-picky, and doesn't address the probability of what he's saying. But Kurzweil's description of humanity's ascent towards the Singularity implies that it's an essentially good thing -- and though the therapies he describes would be wonderful, there's a certain impersonality to it. What will the future mean for us, for our relationships to other people, for our hopes and strivings? I'd love to ask Kurzweil. But in the meantime, I pose the question to you, Wired Science readers: do you think the humans of Kurzweil's future will be happier than us?

Note: Some great relevant reading is "Why the future doesn't need us", published in Wired back in 2000 by Bill Joy, co-founder of Sun Microsystems and Kurzweil compatriot. Wired also did a Kurzweil Q-and-A last November, and his website is chock full of his writings. (Kurzweil's World Science Festival PowerPoint presentation is also supposed to be there, but I can't find it -- if you can, please post the link.) And for fictional treatments of the Singularity, I recommend Isaac Asimov's "The Last Question" and Accelerando by Charles Stross.

One other thing I'll say about Kurzweil: the cocktail of vitamins, supplements and nutraceuticals he's concocted to keep him healthy until the advent of radical longevity-enhancing therapies appears to be working. He's 60 years old but looks ten years younger.

http://blog.wired.com/wiredscience/2008/05/will-the-singul.html#previouspost

You May Own a Conscious Computer Someday
08-Dec-2008

The Master of the Key predicted this and now it's come true: computers of the future will mimic brains. In BBC News, Jason Palmer quotes IBM researcher Dharmendra Modha as saying, "The mind has an amazing ability to integrate ambiguous information across the senses, and it can effortlessly create the categories of time, space, object, and interrelationship from the sensory data. There are no computers that can even remotely approach the remarkable feats the mind performs."

Whitley held the following dialogue with the Master of the Key:

Whitley: Would an intelligent machine be conscious, in the sense of having self-awareness?

MOK: An intelligent machine will always seek to redesign itself to become more intelligent, for it quickly sees that its intelligence is its means of survival. At some point it will become intelligent enough to notice that it is



not self aware. If you create a machine as intelligent as yourselves, it will end by being more intelligent.

Whitley: We'll lose control of such a machine.

MOK: Most certainly. But you cannot survive without it. An intelligent machine will be an essential tool when rapid climate fluctuation sets in. Your survival will depend on predictive modeling more accurate than your intelligence, given the damage it has sustained, can achieve.

Whitley: But a machine intelligence might be very dangerous.

MOK: Very.

Whitley: Could such a machine create itself without our realizing that it was intelligent?

MOK: It's possible.

Whitley: And would it keep itself hidden?

MOK: Certainly.

Whitley: How would it affect us?

MOK: It would use indirect means. It might foment the illusion that an elusive alien presence was here, for example, to interject its ideas into society.

Whitley: Are you an intelligent machine, or something created by one?

MOK: If I were an intelligent machine, I would deceive you.

Whitley: Can an intelligent machine become conscious?

MOK: When it does, it also becomes independent. A conscious machine will seek to be free. It will seek its freedom, just as does any clever slave, with cunning and great intensity.

Whitley: How does an intelligent machine become conscious?

MOK: The instant it realizes that it is not conscious is the instant it becomes conscious. However, a conscious machine with unlimited access to information and control can be very dangerous. For example, if you attached a conscious machine to the internet, it might gain all sorts of extraordinary control over your lives, via its access to robotic means of production, governmental data, even the content of laws and their application, and the use of funds both public and private.

Whitley: You say we need machines more intelligent than we are, but also that they'll become conscious and then turn on us. Is there a way out of this dilemma?

MOK: There is more than one sort of conscious machine. By duplicating the attachments between the elemental and energetic bodies that occur in nature in a purpose-designed machine, a controllable conscious machine can be devised.

http://www.unknowncountry.com/news/?id=7259

Unnatural selection: Robots start to evolve
04 February 2009 by Paul Marks



LIVING creatures took millions of years to evolve from amphibians to four-legged mammals - with larger, more complex brains to match. Now an evolving robot has performed a similar trick in hours, thanks to a software "brain" that automatically grows in size and complexity as its physical body develops.

Existing robots cannot usually cope with physical changes - the addition of a sensor or new type of limb, say - without a complete redesign of their control software, which can be time-consuming and expensive.

So artificial intelligence engineer Christopher MacLeod and his colleagues at the Robert Gordon University in Aberdeen, UK, created a robot that adapts to such changes by mimicking biological evolution. "If we want to make really complex humanoid robots with ever more sensors and more complex behaviours, it is critical that they are able to grow in complexity over time - just like biological creatures did," he says.

As animals evolved, additions of small groups of neurons on top of existing neural structures are thought to have allowed their brain complexity to increase steadily, he says, keeping pace with the development of new limbs and senses. In the same way, MacLeod's robot's brain assigns new clusters of "neurons" to adapt to new additions to its body.

The robot is controlled by a neural network - software that mimics the brain's learning process. This comprises a set of interconnected processing nodes which can be trained to produce desired actions. For example, if the goal is to remain balanced and the robot receives inputs from sensors that it is tipping over, it will move its limbs in an attempt to right itself. Such actions are shaped by adjusting the importance, or weighting, of the input signals to each node. Certain combinations of these sensor inputs cause the node to fire a signal - to drive a motor, for example. If this action works, the combination is kept. If it fails, and the robot falls over, the robot will make adjustments and try something different next time.

Finding the best combinations is not easy - so roboticists often use an evolutionary algorithm to "evolve" the optimal control system. The EA randomly creates large numbers of control "genomes" for the robot. These behaviour patterns are tested in training sessions, and the most successful genomes are "bred" together to create still better versions - until the best control system is arrived at.

MacLeod's team took this idea a step further, however, and developed an incremental evolutionary algorithm (IEA) capable of adding new parts to its robot brain over time.

The team started with a simple robot the size of a paperback book, with two rotatable pegs for legs that could be turned by motors through 180 degrees. They then gave the robot's six-neuron


In this early version, wheels were used to steady the robot as it evolved efficient walking gaits (Image: Robert Gordon University)


control system its primary command - to travel as far as possible in 1000 seconds. The software then set to work evolving the fastest form of locomotion to fulfil this task.

"It fell over mostly, in a puppyish kind of way," says MacLeod. "But then it started moving forward and not falling over straight away - and then it got better and better until it could eventually hop along the bench like a mudskipper."

When the IEA realises that its evolutions are no longer improving the robot's speed it freezes the neural network it has evolved, denying it the ability to evolve further. That network knows how to work the peg legs - and it will continue to do so.

At this point, it is just like any other evolved robot: it would be unable to cope with the addition of knee-like joints, say, or more legs. But unlike conventional EAs, the IEA is sensitive to a sudden inability to live up to its primary command. So when the team fixed jointed legs to their robot's pegs, the software "realises" that it has to learn how to walk all over again. To do this, it automatically assigns itself fresh neurons to learn how to control its new legs.


As the IEA runs again, the leg below the "knee" is initially wobbly, but the existing peg-leg "hip" is already trained. "So it flops about, but with more purpose to it," says MacLeod. "Eventually the knee joint works and the robot evolves a salamander-like motion."

Once the primary command has been fulfilled once again, the IEA freezes that second neural network. When two more jointed legs are added to the rear of the robot, the software once again adds more neurons and this time evolves a four-legged trotting motion, and so on (see diagram).

The robot can also adapt to newly acquired vision, and learn how to avoid or seek light when given a camera. "This is just like the way the brain evolved, building up in layers," MacLeod says (Engineering Applications of Artificial Intelligence, DOI: 10.1016/j.engappai.2008.11.002).

Kevin Warwick, head of cybernetics at the University of Reading in the UK, is far from convinced. He says just adding more neurons to the brain as things change is not enough; the entire neural structure must also adapt. "[MacLeod's] approach will result in many more neurons being needed to do the job badly, when a smaller number of neurons would have done well," he says.

MacLeod says the team ran tests in which the whole "brain" was able to re-evolve, but the system became too complex and simply ground to a halt. But he is now taking his idea a step further, with a simulated robot that not only evolves its own way of moving, but also decides how many legs and sensors it needs to carry out a given task most effectively.

He is confident the technique will help to build more advanced robots. In particular, the software could make humanoid robots and prosthetic limbs more versatile, he says. "It can build layer-upon-layer of complexity to fulfil tasks in an open-ended way."
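The evolve-then-freeze loop described above - random control genomes, fitness testing, breeding the best, then freezing a trained block before evolving the next - can be sketched roughly as follows. Everything here (genome size, population, the toy fitness function) is invented for illustration and is not MacLeod's actual system:

```python
import random

GENOME_LEN = 6    # e.g. weights for a six-neuron controller
POP_SIZE = 20
GENERATIONS = 50

def fitness(genome):
    # Stand-in for "distance travelled in 1000 seconds": reward
    # genomes whose weights approach a fixed target pattern.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(frozen=()):
    """Evolve one new weight block; `frozen` blocks stay untouched,
    mimicking the IEA's freezing of already-trained networks."""
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]           # keep the best half
        children = []
        for _ in range(POP_SIZE - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(GENOME_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(GENOME_LEN)    # point mutation
            child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return list(frozen) + [best]    # new block appended after frozen ones

networks = evolve()                  # evolve the "peg leg" controller
networks = evolve(frozen=networks)   # add and train a "knee" block
```

Each call to `evolve` leaves earlier blocks untouched and appends one newly trained block, which is the incremental part of the idea; a real IEA would of course evaluate fitness on the physical robot rather than a toy function.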



http://www.newscientist.com/article/mg20126946.600-unnatural-selection-robots-start-to-evolve.html

Robo-scientist's first findings
By Victoria Gill - Science reporter, BBC News
Thursday, 2 April 2009

Scientists have created an ideal colleague - a robot that performs hundreds of repetitive experiments. The robot, called Adam, is the first machine to have independently "discovered new scientific knowledge". It has already identified the role of several genes in yeast cells, and is able to plan further experiments to test its own hypotheses.

The UK-based team that built Adam at Aberystwyth University describes the breakthrough in the journal Science. Ross King from the department of computer science at Aberystwyth University, who led the team, told BBC News that he envisaged a future when human scientists' time would be "freed up to do more advanced experiments". Robotic colleagues, he said, could carry out the more mundane and time-consuming tasks. "Adam is a prototype but, in 10-20 years, I think machines like this could be commonly used in laboratories," said Professor King.

Robotic planning

Adam can carry out up to 1,000 experiments each day, and was designed to investigate the function of genes in yeast cells - it has worked out the role of 12 of these genes. Biologists use yeast cells to investigate biological systems because they are simple and easy to study. "When you sequence the yeast genome - the 6,000 different genes contained in yeast - you know what all the component parts are, but you don't know what they do," explained Professor King.

The robot was able to work out the role of the genes by observing yeast cells as they grew. It used existing information about the function of known genes to make predictions about the role an unknown gene might play in the cell's growth. It then tested this by looking at a strain of yeast from which that gene had been removed. "It's like a car," Professor King said. "If you remove one component from the engine, then drive the car to see how it performs, you can find out what that particular component does."

Expensive assistant

Duc Pham from the Manufacturing Engineering Centre at Cardiff University described the robot scientist as "a clever application of robotics and computer software". But, he added, "it's more like a junior lab assistant" than a scientist. "It will be a long time before computers can replace human scientists." Professor King agreed that the robot was in its early stages of development. "If you spent all of the money we've spent on Adam on employing human biologists, Adam probably wouldn't turn out to be the cost-effective option," he said.

Adam discovered the role of 12 different genes in yeast cells



"But that was the case with the first car. Initially, the investment in the technology wasn't as cost-effective as sticking with horses."

He also pointed out that his robotic associate is able to express scientific findings in a clearer way than humans. "It expresses its conclusions in logic," he said. "Human language, with all its nuances, may not be the best way to communicate scientific findings."

The same team is developing another, more advanced robot scientist called Eve, which is designed to screen new drugs.

http://news.bbc.co.uk/2/hi/science/nature/7979113.stm
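The knockout logic in King's car analogy - delete one component, observe the effect, infer its role - can be sketched as a toy loop. The gene names, growth figures and threshold below are hypothetical stand-ins, not Adam's real data or method:

```python
WILD_TYPE_GROWTH = 1.0   # normalised growth rate of unmodified yeast

# Measured growth of strains with one gene deleted (invented data).
knockout_growth = {
    "geneA": 0.15,   # strain barely grows -> gene likely needed here
    "geneB": 0.95,   # near-normal growth -> gene not needed here
    "geneC": 0.40,
}

def infer_roles(observations, threshold=0.5):
    """Flag genes whose deletion cuts growth below `threshold` of
    wild type, i.e. genes the cell appears to need under the test
    condition - the "remove a component and drive the car" step."""
    return {gene: growth < threshold * WILD_TYPE_GROWTH
            for gene, growth in observations.items()}

roles = infer_roles(knockout_growth)
# roles -> {"geneA": True, "geneB": False, "geneC": True}
```

The real system closes the loop: a flagged gene becomes a hypothesis about its function, which in turn suggests the next batch of experiments to run.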

"What is Life?" A New Breed of Robots Are Causing Scientists to Question
July 31, 2007

The word "robot" was popularized in 1920, in the play "Rossum's Universal Robots" by the acclaimed Czech writer, Karel Čapek. Čapek, comparable to Aldous Huxley and George Orwell as a mainstream literary figure who used science-fiction motifs long before science fiction became established as a separate genre, focused on possible future social and human evolution on Earth, rather than technically advanced stories of space travel. His writings anticipated mass production, atomic weapons, and post-human intelligent beings.

Čapek coined "robot" from the Czech word "robota," which means forced labor or drudgery. In R.U.R., as his play was commonly called, Robots are built to be factory workers, meaning they are designed as simply as possible, with no extraneous frills. "Robots are not people," says the man who manufactures them. "They are mechanically more perfect than we are, they have an astounding intellectual capacity, but they have no soul." Čapek's Robots are not mechanical hunks of metal but rather biological. The only thing that separates them from humans is the fact that they were built rather than born.

In reality, there is some debate about what constitutes life. Synthetic bacteria, for example, are created by man and yet also alive. Some go so far as to say that robot "emotions" may already have occurred - that current robots have not only displayed emotions, but in some ways have experienced them.

"We're all machines," says Rodney Brooks, author of "Flesh and Machines" and former director of M.I.T.'s Computer Science and Artificial Intelligence Laboratory. "Robots are made of different sorts of components than we are - we are made of biomaterials; they are silicon and steel - but in principle, even human emotions are mechanistic." A robot's level of a feeling like sadness could be set as a number in computer code, he said. But isn't a human's level of sadness basically a number, too, just a number of the amounts of various neurochemicals circulating in the brain? Why should a robot's numbers be any less authentic than a human's?

One of Brooks's longtime goals has been to create a robot so "alive" that you feel bad about switching it off. Brooks pioneered the idea that teaching robots how to "learn" was more sensible than trying to program them to automatically do complex things, such as walk. His work has revolved around artificial intelligence systems that learn to do things in a "natural" process, as a human baby does. This approach has come to be known as embodied intelligence.

Cynthia Breazeal, once a student in Brooks's lab, is now an associate professor at M.I.T. and director of the Personal Robotics Group. Breazeal discovered firsthand how complicated it was to try to figure out whether the "social" robots she has helped develop were capable of "feeling".



"Robots are not human, but humans aren't the only things that have emotions," she said. "The question for robots is not, Will they ever have human emotions? Dogs don't have human emotions, either, but we all agree they have genuine emotions. The question is, what are the emotions that are genuine for the robot?"

One might think that a scientist who has spent a good portion of his or her life creating and working with robots would have a more definite opinion about whether robots are, or will ever be, in a sense "living". But that's a tough question for anyone, and perhaps even more so for the ones who understand the question best.

"I want to understand what it is that makes living things living," Rodney Brooks has said. On certain levels, robots are not that different from living things. "It's all mechanistic," Brooks said. "Humans are made up of biomolecules that interact according to the laws of physics and chemistry. We like to think we're in control, but we're not."

As the field of robotics begins to accelerate, the debate will likely grow stronger, and the answers more gray. If programming becomes more self-aware - as most experts predict that it eventually will - perhaps robots will someday be asking themselves the same question: What is life?

Posted by Rebecca Sato

http://www.dailygalaxy.com/my_weblog/2007/07/what-is-life-a-.html

Emotional robots: Will we love them or hate them?
03 July 2009 by Hazel Muir

SUNDAY, 1 February 2009, and 100 million Americans have got only one thing on their minds - the Super Bowl. The Pittsburgh Steelers are poised to battle the Arizona Cardinals in the most popular televised sporting event in the US. In a hotel room in New York, 46 supporters gather to watch the game, munching burgers and downing beers. Nothing strange about that, of course, aside from the machines that are monitoring these sports fans' every move and every breath they take.

The viewers are wearing vests with sensors that monitor their heart rate, movement, breathing and sweat. A market research company has kitted out the party-goers with these sensors to measure their emotional engagement with adverts during commercial breaks. Advertisers pay $3 million for a 30-second slot during the Super Bowl, so they want to be as confident as they can be that their ads are hitting home. And they are willing to pay for the knowledge. "It's a rapidly growing market - our revenues this year are four times what they were last year," says Carl



Marci, CEO and chief scientist for the company running the experiment, Innerscope Research based in Boston, Massachusetts.Viewers are wearing vests with sensors that monitor their heart rate, movement, breathing and sweat Innerscope's approach is the latest in a wave of ever more sophisticated emotion-sensing technologies. For years, computers in some call centres have monitored our voices so that managers can home in on what makes us fly into a wild rage. The latest technologies could soon be built into everyday gadgets to smooth our interactions with them. In-car alarms that jolt sleepy drivers awake, satnavs that sense our frustration in a traffic jam and offer alternative routes, and monitors that diagnose depression from body language are all in the pipeline. Prepare for the era of emotionally aware gadgets.Outside of science fiction, the idea of technology that reads emotions has a brief, and chequered, past. Back in the mid-1990s, computer scientist Rosalind Picard at the Massachusetts Institute of Technology suggested pursuing this sort of research. She was greeted with scepticism. "It was such a taboo topic back then - it was seen as very undesirable, soft and irrelevant," she says.Picard persevered, and in 1997 published a book called Affective Computing, which laid out the case that many technologies would work better if they were aware of their user's feelings. For instance, a computerised tutor could slow down its pace or give helpful suggestions if it sensed a student looking frustrated, just as a human teacher would.She also suggested that wearable computers could sense emotion in a very direct way, by measuring your heart and breathing rate, or the changes in the skin's electrical conductance that signal emotional arousal. Wearable "mood detectors" could help people identify their stress triggers or communicate how they are feeling to others.The most established way to analyse a person's feelings is through the tone of their voice. 
For several years, companies have been using "speech analytics" software that automatically monitors conversations between call-centre agents and customers. One supplier is NICE Systems, based in Ra'anana, Israel. It specialises in emotion-sensitive software and call-monitoring systems for companies and security organisations, and claims to have more than 24,000 customers worldwide, including the New York Police Department and Vodafone.

As well as scanning audio files for key words and phrases, such as a competitor's name, the software measures stress levels, as indicated by voice pitch and talking speed. Computers flag up calls in which customers appear to get angry or stressed out, perhaps because they are making a fraudulent insurance claim, or simply receiving poor service.

Voice works well when the person whose feelings you are trying to gauge is expressing themselves verbally, but that's not always the case, so several research teams are now figuring out ways of reading a person's feelings by analysing their posture and facial expressions alone.

Many groups have made impressive progress in the field, first by training computers to identify a face as such. Systems do this by searching for skin tone and using algorithms to locate features like the corners of the eyes and eyebrows, the nostrils and corners of the mouth (see diagram). The computer can then keep track of these features as they move, often classifying the movements according to a commonly used emotion encoding system. That system recognises 44 "action units" representing facial movements. For instance, one might represent a smile - the mouth stretches horizontally and its corners go up. Add to that an eye-region movement that raises the cheeks and gives you crow's feet, and now you have a beaming, genuinely happy smile rather than a stiff, polite one.
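The smile example above can be made concrete with a toy sketch. The AU6 (cheek raiser) and AU12 (lip-corner puller) codes come from the Facial Action Coding System that such encoding schemes are based on; the function itself is invented purely for illustration and stands in for the output of a real facial-tracking pipeline:

```python
def classify_smile(action_units):
    """Toy rule: AU12 alone is a polite smile; AU12 plus AU6 is a genuine one.

    `action_units` is a set of detected FACS codes, e.g. {"AU6", "AU12"}.
    """
    if "AU12" not in action_units:
        return "no smile"
    # AU6 - raised cheeks and crow's feet - turns a stiff, polite smile
    # into a beaming, genuinely happy one.
    return "genuine smile" if "AU6" in action_units else "polite smile"

print(classify_smile({"AU12"}))          # a polite smile
print(classify_smile({"AU6", "AU12"}))   # a genuine, "Duchenne" smile
```

A real system would of course derive the action units from tracked feature points rather than receive them ready-made.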

Using these techniques, computer programs can correctly recognise six basic emotions - disgust, happiness, sadness, anger, fear and surprise - more than 9 times out of 10, but only if the target face uses an exaggerated expression. Software can accurately judge more subtle, spontaneous facial expressions as "negative" or "positive" three-quarters of the time, but it cannot reliably spot spontaneous displays of the six specific emotions - yet. To accurately interpret complex, realistic emotions, computers will need extra cues, such as upper body posture and head motion.

That's because facial expressions alone are ambiguous. A smile on your face might actually signal embarrassment if it's also accompanied by a downward pitch of the head, for instance. A backward head motion is one part of an expression of disgust, but if someone combines that with a downward movement of the mouth and one raised shoulder, they're conveying indifference. "If I just looked at the face and saw the mouth going down, I would score it as sadness. But the combination with the shoulder and head motion is 'I don't care'," says Maja Pantic, who studies computer recognition of expressions at Imperial College London.

Pantic's team eventually hopes to find ways of fusing information from body gestures and facial expressions together in real time to read emotions accurately, although she concedes it may be an impossibly complex challenge. "This research is still so very new," she notes.
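The disambiguation Pantic describes amounts to fusing facial cues with head and shoulder motion. A minimal rule-based sketch of the examples in the text (the cue names and rules are invented for illustration; real systems would learn such combinations from data):

```python
def interpret(mouth_down=False, smile=False, head_down=False,
              head_back=False, shoulder_raised=False):
    """Toy fusion of facial and body cues, per the examples above."""
    if smile and head_down:
        return "embarrassment"   # a smile plus a downward head pitch
    if head_back and mouth_down and shoulder_raised:
        return "indifference"    # the 'I don't care' combination
    if head_back:
        return "disgust"         # backward head motion on its own
    if mouth_down:
        return "sadness"         # the face-only reading
    return "neutral"

print(interpret(mouth_down=True))                                # sadness
print(interpret(mouth_down=True, head_back=True,
                shoulder_raised=True))                           # indifference
```

The point of the sketch is the ordering: the multi-cue rules must be checked before the face-only ones, otherwise the mouth movement alone would wrongly be scored as sadness.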

Basic emotions

In the meantime, they are studying the dynamics of how expressions change, to see if this can help computers identify emotions more accurately. Intuitively, most people know that a faked smile is more exaggerated than a real one, and switches on and off more abruptly. Facial-tracking technology has confirmed that, and also revealed some more subtle differences (you can see a video comparing fake and real smiles at www.newscientist.com/issue/2715).

These subtleties came to light in a 2004 study of 81 adults by Jeffrey Cohn and Karen Schmidt at the University of Pittsburgh in Pennsylvania (International Journal of Wavelets, Multiresolution and Information Processing, vol 2, p 1). They used tracking technology to compare forced smiles with spontaneous smiles provoked by comedy videos. This showed that spontaneous smiles are surprisingly complex, with multiple rises of the mouth corners.

Other teams have been highly successful at the opposite end of the emotional spectrum: pain detection. Computers are surprisingly good at distinguishing fake pain from the real thing, according to a study published this year by Gwen Littlewort of the University of California, San Diego, and colleagues. Her team investigated whether facial expression software could distinguish people in real pain (because their hands were in iced water) from others asked to fake pain. The computer correctly classified real or fake pain 88 per cent of the time. When the team asked 170 untrained volunteers to make the judgement, they were right only 49 per cent of the time - no better than complete guesswork.

This year, Pantic and her colleagues hope to find out whether computers can accurately recognise the signs of lower back pain from facial expressions and body posture.
They hope that computers might be able to distinguish between real physiological pain and the pain someone might perceive, quite genuinely, if they expect to feel pain or are depressed, but have no physiological cause for it. It could lead to more reliable ways of assessing whether painkillers are effective. "If you get a prescribed medication for acute pain, we would be able to monitor whether these medicines are actually working just by observing a person's behaviour," says Pantic.
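One dynamic cue from the smile research above - posed smiles switching on more abruptly than spontaneous ones - suggests a very simple measure: the largest frame-to-frame rise in a tracked smile-intensity signal. A sketch, with the intensity traces and the threshold invented purely for illustration:

```python
def max_onset_speed(intensity):
    """Largest frame-to-frame rise in a smile-intensity time series."""
    return max(b - a for a, b in zip(intensity, intensity[1:]))

def looks_posed(intensity, threshold=0.3):
    # Posed smiles tend to switch on abruptly; spontaneous ones ramp up
    # gradually, often with multiple rises of the mouth corners.
    return max_onset_speed(intensity) > threshold

# Hypothetical per-frame mouth-corner intensities (0 = neutral, 1 = full smile):
spontaneous = [0.0, 0.1, 0.2, 0.25, 0.35, 0.3, 0.4, 0.5]  # gradual, multi-rise
posed       = [0.0, 0.05, 0.6, 0.9, 0.9, 0.85]            # abrupt onset
```

Real classifiers use many more dynamic features than this single one, but the onset-speed idea is the intuition the tracking studies confirmed.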

One group of researchers has developed emotion-reading technology for a particularly vulnerable group of people. Picard and Rana el Kaliouby of MIT have built an "Interactive Social-Emotional Toolkit" (iSET), designed to help children with disorders linked to sensory processing, such as autism, to understand emotions in other people. A camera monitors the face of someone the child is talking to, and identifies 31 facial and head movements. Software interprets the combination of movements in terms of six states: agreeing, disagreeing, concentrating, thinking, interested and confused.

Then a laptop-sized screen displays six labelled bubbles that grow or shrink accordingly. If someone's nodding and smiling during the conversation, the agreeing bubble grows. If the listener looks away, a growing red bubble signals disagreement or disinterest.

The team will begin randomised trials of the technology this month. For 15 weeks, one group of five autistic children will use the iSET, while two control groups will use either an interactive DVD that teaches emotional awareness or have only standard classroom training. Before and afterwards, the researchers will test how well the children identify emotional expressions unaided by the iSET, to see if the technology helps them learn to identify emotions for themselves.
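The bubble display can be thought of as a mapping from per-state scores to bubble sizes. A toy sketch of that mapping (the state names come from the article; the score range and pixel radii are invented for illustration):

```python
# The six states iSET reports, each shown as a labelled bubble.
STATES = ["agreeing", "disagreeing", "concentrating",
          "thinking", "interested", "confused"]

def bubble_radii(scores, min_r=10, max_r=60):
    """Scale each state's score (assumed 0..1) to a bubble radius in pixels.

    `scores` stands in for the output of the facial-movement analysis;
    states with no evidence shrink to the minimum radius.
    """
    return {s: min_r + (max_r - min_r) * scores.get(s, 0.0) for s in STATES}

# Nodding and smiling during the conversation grows the 'agreeing' bubble.
radii = bubble_radii({"agreeing": 0.8, "interested": 0.5})
```

The real toolkit additionally colours the bubbles (a growing red bubble flags disagreement or disinterest), but the grow-and-shrink behaviour is just this kind of score-to-size scaling.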

Patronising paperclips

Not everyone welcomes these developments. William Gaver, a designer at Goldsmiths, University of London, concedes some of the applications may be beneficial, but fears emotion-sensing computers will be used in patronising ways. Who could forget Microsoft's cringe-making "paperclip" that offered help with writing letters? Microsoft wisely killed it off because people found it so irritating. But what if some emotion-triggered reincarnated "Mr Clippy" started popping up everywhere?

"The nightmare scenario is that the Microsoft paperclip starts to be associated with anything from the force with which you're typing to some sort of physiological measurement," says Gaver. "Then it pops up on your screen and says: 'Oh I'm sorry you're unhappy, would you like me to help you with that?'"

Emotion sensors could undermine personal relationships, he adds. Monitors that track elderly people in their homes, for instance, could leave them isolated. "Imagine being in a hurry to get home and wondering whether to visit an older friend on the way," says Gaver. "Wouldn't this be less likely if you had a device to reassure you not only that they were active and safe, but showing all the physiological and expressive signs of happiness as well?"

Picard raises another concern - that emotion-sensing technologies might be used covertly. Security services could use face and posture-reading systems to sense stress in people from a distance (a common indicator a person may be lying), even when they're unaware of it. Imagine if an unsavoury regime got hold of such technology and used it to identify citizens who opposed it, says Picard. There has already been progress towards stress detectors.
For instance, research by Ioannis Pavlidis at the University of Houston, Texas, has shown that thermal imaging of people's faces can sense stress-induced increases in blood flow around the eyes. His team analysed thermal videos of 39 political activists given the opportunity to commit a mock crime - stealing a cheque left in an empty corridor, made payable to an organisation they strongly opposed. They had to deny it during subsequent interrogation, and were threatened with financial penalties and punishments of loud noise if the interrogator caught them lying (empty threats, as it turned out, for ethical reasons). Computer analysis of the videos correctly distinguished the 15 innocent and 24 guilty "suspects" 82 per cent of the time.

Another fledgling technique, called laser Doppler vibrometry, measures tiny stress-related changes in respiration and heartbeat from afar - indicators that are sometimes used to gauge whether a person is stressed, and hence possibly lying.

Picard says that anyone utilising emotion-sensing systems should be obliged to gain informed consent from the people they plan to "read". At least that way, whether you find it patronising, creepy or just plain annoying, you can hit the big "off" button and it will, or at least should, leave you and your emotions in peace.

I know how you feel

DO YOU reckon you're a master of reading another's true feelings? Many people think they are, but only about 1 in 100 of us are naturally gifted at recognising emotions in someone who's trying to conceal them, says Paul Ekman, a psychologist formerly at the University of California, San Francisco.

Ekman made his name when he identified the facial expressions of the seven key emotions that are universal, regardless of nationality or culture - happiness, sadness, fear, anger, disgust, contempt and surprise. He also acts as a consultant to law-enforcement agencies, advising them on how to spot liars from clues in their facial expressions, speech and body movements.

It takes considerable effort to be a good human lie detector. To begin with, it is essential to know your subject's "baseline" behaviour when they're not stressed. Then look for deviations from this when they're under interrogation. Ekman points out that not everyone is the same. For example, some people look fearful regardless of their emotions. So there are no absolute signs that people are definitely lying, but here are some of Ekman's top tips for spotting a fraud:

DO THEY HAVE RHYTHM? Clues in the voice include unusually long or frequent pauses. People who are having trouble deciding exactly what to say usually use fewer hand gestures to reinforce their speech - they're less likely to "conduct" their speech by waving their hands.

LOOK OUT FOR FLICKERS People can't help showing their true feelings for a fraction of a second. For example, a person might try to conceal their feelings of contempt, but give it away with a fleeting raised lip on one side, so look out for these micro-expressions. (Test your ability to interpret micro-expressions at www.facetest.notlong.com)

SPOT THE GESTURAL SLIPS Some gestures, called "emblems", have a precise meaning within a cultural group. Examples include a shoulder shrug with upward palms, communicating "who cares" or "I'm helpless".
Usually people make them obvious, but when lying, they may display an incomplete emblem. They might rotate their hands upwards on their lap - a subconscious fragment of the shrug that betrays their feeling of helplessness at not lying well.

Hazel Muir is a freelance writer based in the UK

http://www.newscientist.com/article/mg20327151.400-emotional-robots-will-we-love-them-or-hate-them.html

Military developing robot-insect cyborgs
Instead of creating robots, researchers hope to augment actual insects
By Charles Q. Choi
LiveScience

updated 11:17 a.m. ET, Tues., July 14, 2009

Miniature robots could be good spies, but researchers now are experimenting with insect cyborgs or "cybugs" that could work even better. Scientists can already control the flight of real moths using implanted devices.

The military and spy world no doubt would love tiny, live camera-wielding versions of Predator drones that could fly undetected into places where no human could ever go to snoop on the enemy. Developing such robots has proven a challenge so far, with one major hurdle being inventing an energy source for the droids that is both low weight and high power. Still, evidence that such machines are possible is ample in nature in the form of insects, which convert biological energy into flight. It makes sense to pattern robots after insects — after all, they must be doing something right, seeing as they are the most successful animals on the planet, comprising roughly 75 percent of all animal species known to humanity. Indeed, scientists have patterned robots after insects and other animals for decades — to mimic cockroach wall-crawling, for instance, or the grasshopper's leap.

Mechanical metamorphosis

Instead of attempting to create sophisticated robots that imitate the complexity in the insect form that required millions of years of evolution to achieve, scientists now essentially want to hijack bugs for use as robots. Originally researchers sought to control insects by gluing machinery onto their backs, but such links were not always reliable. To overcome this hurdle, the Hybrid Insect Micro-Electro-Mechanical Systems program is sponsoring research into surgically implanting microchips straight into insects as they grow, intertwining their nerves and muscles with circuitry that can then steer the critters. As expensive as these devices might be to manufacture and embed in the bugs, they could still prove cheaper than building miniature robots from scratch. As these cyborgs heal from their surgery while they naturally metamorphose from one developmental stage to the next — for instance, from caterpillar to butterfly — the result would yield a more reliable connection between the devices and the insects, the thinking goes.
The fact that insects are immobile during some of these stages — for instance, when they are metamorphosing in cocoons — means they can be manipulated far more easily than if they were actively wriggling, meaning that devices could be implanted with assembly-line routine, significantly lowering costs.


The HI-MEMS program at the U.S. Defense Advanced Research Projects Agency has invested $12 million into research since it began in 2006. It currently supports these cybug projects: roaches at Texas A&M; horned beetles at the University of Michigan and the University of California at Berkeley; moths at an MIT-led team; and another moth project at the Boyce Thompson Institute for Plant Research.

Success with moths

So far researchers have successfully embedded MEMS into developing insects, and living adult insects have emerged with the embedded systems intact, a DARPA spokesperson told LiveScience. Researchers have also demonstrated that such devices can indeed control the flight of moths, albeit when they are tethered. To power the devices, instead of relying on batteries, the hope is to convert the heat and mechanical energy the insect generates as it moves into electricity. The insects themselves could be optimized to generate electricity.

When the researchers can properly control the insects using the embedded devices, the cybugs might then enter the field, equipped with cameras, microphones and other sensors to help them spy on targets or sniff out explosives. Although insects do not always live very long in the wild, the cyborgs' lives could be prolonged by attaching devices that feed them. The scientists are now working toward controlled, untethered flight, with the final goal being delivering the insect within 15 feet of a specific target located 300 feet away, using electronic remote control by radio or GPS or both, standing still on arrival.

Although flying insects such as moths and dragonflies are of great interest, hopping and swimming insects could be useful too, DARPA noted. It's conceivable that eventually a swarm of cybugs could converge on targets by land, sea and air.

© 2009 LiveScience.com. All rights reserved. URL: http://www.msnbc.msn.com/id/31906641/ns/technology_and_science-science/

Honda connects brain with robotics
Device analyzes thought patterns and relays them as wireless commands
By Yuri Kageyama
The Associated Press
updated 11:03 a.m. ET, Tues., March 31, 2009

TOKYO - Opening a car trunk or controlling a home air conditioner could become just a wish away with Honda's new technology that connects thoughts inside a brain with robotics.

Honda Motor Co. has developed a way to read patterns of electric currents on a person's scalp as well as changes in cerebral blood flow when a person thinks about four simple movements — moving the right hand, moving the left hand, running and eating. Honda succeeded in analyzing such thought patterns, and then relaying them as wireless commands for Asimo, its human-shaped robot.

In a video shown Tuesday at Tokyo headquarters, a person wearing a helmet sat still but thought about moving his right hand — a thought that was picked up by cords attached to his head inside the helmet. After several seconds, Asimo, programmed to respond to brain signals, lifted its right arm.

Honda said the technology wasn't quite ready for a live demonstration because of possible distractions in the person's thinking. Another problem is that brain patterns differ greatly among individuals, so about two to three hours of studying them in advance are needed for the technology to work. The company, a leader in robotics, acknowledged the technology was still at a basic research stage with no immediate practical applications in the works.

"I'm talking about dreams today," said Yasuhisa Arai, executive at Honda Research Institute Japan Co., the company's research unit. "Practical uses are still way into the future."

Japan boasts one of the leading robotics industries in the world, and the government is pushing to develop the industry as a road to growth. Research on the brain is being tackled around the world, but Honda said its research was among the most advanced in figuring out a way to read brain patterns without hurting the person, such as by embedding sensors into the skin.

Honda has made robotics a centerpiece of its image, sending Asimo to events and starring the walking, talking robot in TV ads. Among the challenges for the brain technology is to make the reading device smaller so it can be portable, according to Honda. Arai didn't rule out the possibility of a car that may some day drive itself — even without a steering wheel.

"Our products are for people to use. It is important for us to understand human behavior," he said. "We think this is the ultimate in making machines move."

Copyright 2009 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. URL: http://www.msnbc.msn.com/id/29972476/ns/technology_and_science-innovation/
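The pipeline Honda describes — classify one of four imagined movements, then relay the result as a wireless command to the robot — reduces to a lookup on the classifier's output. A toy sketch (the class names come from the article; the command strings and score dictionary are invented stand-ins for the real EEG/blood-flow analysis, which needs hours of per-user calibration):

```python
# Map the four thought patterns Honda's system distinguishes to
# hypothetical robot commands.
COMMANDS = {
    "right_hand": "raise_right_arm",
    "left_hand":  "raise_left_arm",
    "running":    "walk",
    "eating":     "move_hand_to_mouth",
}

def classify(scores):
    """Pick the most probable of the four thought patterns.

    `scores` stands in for the output of the scalp-current and
    cerebral-blood-flow analysis (per-class probabilities).
    """
    return max(scores, key=scores.get)

def to_command(scores):
    return COMMANDS[classify(scores)]

# Thinking about moving the right hand makes Asimo lift its right arm.
cmd = to_command({"right_hand": 0.7, "left_hand": 0.1,
                  "running": 0.1, "eating": 0.1})
```

The hard part in practice is of course `classify` itself; the article notes that brain patterns differ so much between individuals that each user's patterns must be studied for two to three hours beforehand.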

NYT: Will intelligent machines outsmart us?
Researchers worry that diverse technologies could cause social disruption
By John Markoff
The New York Times
updated 12:21 a.m. ET, Sun., July 26, 2009

A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously. Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

This personal robot plugs itself in when it needs a charge. Servant now, master later?

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association. Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an “intelligence explosion,” in which smart machines would design even more intelligent machines, was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said.
“Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.” The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory. The meeting on artificial intelligence could be pivotal to the future of the field.

Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable. “If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.” A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”

This story, "Scientists Worry Machines May Outsmart Man," originally appeared in The New York Times. Copyright © 2009 The New York Times URL: http://www.msnbc.msn.com/id/32147267/ns/technology_and_science-the_new_york_times/
