
INB 201
Global Business Technology
AI

Hype Cycle chatter or the real thing?

NYT March 27, 2016

Over the last decade, smartphones, social networks and cloud computing have moved from feeding the growth of companies like Facebook and Twitter, leapfrogging to Uber, Airbnb and others that have used the phones, personal rating systems and powerful remote computers in the cloud to create their own new businesses.

Believe it or not, that stuff may be heading for the rearview mirror already. The tech industry’s new architecture is based not just on the giant public computing clouds of Google, Microsoft and Amazon, but also on their A.I. capabilities. These clouds create more efficient and supple use of computing resources, available for rent. Smaller clouds used in corporate systems are designed to connect to them.

The A.I. resources Ms. Greene is opening up at Google are remarkable. Google’s autocomplete feature that most of us use when doing a search can instantaneously touch 500 computers in several locations as it guesses what we are looking for. Services like Maps and Photos have over a billion users, sorting places and faces by computer. Gmail sifts through 1.4 petabytes of data, or roughly two billion books’ worth of information, every day.
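As a rough sanity check on that comparison (a back-of-the-envelope sketch; the "two billion books" figure is the article's, and the bytes-per-book estimate below is an assumption):

# Back-of-the-envelope check: 1.4 petabytes vs. "two billion books".
petabyte = 10 ** 15                      # bytes
data_per_day = 1.4 * petabyte
books = 2_000_000_000

bytes_per_book = data_per_day / books
print(f"{bytes_per_book / 1000:.0f} KB per book")   # roughly 700 KB

# At roughly 2 KB of plain text per page, that is on the order of
# a few hundred pages -- consistent with a typical book.
pages = bytes_per_book / 2000
print(f"~{pages:.0f} pages per book")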

Handling all that, plus tasks like language translation and speech recognition, Google has amassed a wealth of analysis technology that it can offer to customers. Urs Hölzle, Ms. Greene’s chief of technical infrastructure, predicts that the business of renting out machines and software will eventually surpass Google advertising. In 2015, ad profits were $16.4 billion.

NYT March 25, 2016

The resounding win by a Google artificial intelligence program over a champion in the complex board game Go this month was a statement — not so much to professional game players as to Google’s competitors.

Many of the tech industry’s biggest companies, like Amazon, Google, IBM and Microsoft, are jockeying to become the go-to company for A.I. In the industry’s lingo, the companies are engaged in a “platform war.”

A platform, in technology, is essentially a piece of software that other companies build on and that consumers cannot do without. Become the platform and huge profits will follow. Microsoft dominated personal computers because its Windows software became the center of the consumer software world. Google has come to dominate the Internet through its ubiquitous search bar.

If true believers in A.I. are correct that this long-promised technology is ready for the mainstream, the company that controls A.I. could steer the tech industry for years to come.

“Whoever wins this race will dominate the next stage of the information age,” said Pedro Domingos, a machine learning specialist and the author of “The Master Algorithm,” a 2015 book that contends that A.I. and big-data technology will remake the world.

Fei-Fei Li, a Stanford University professor who is an expert in computer vision, said one of her Ph.D. candidates had an offer for a job paying more than $1 million a year, and that was only one of four offers from big and small companies.

What is a business opportunity?

What is AI? How does it work?

Nick Bostrom, Superintelligent Computers

http://www.computerworld.com/article/2975866/emerging-technology/10-ted-talks-for-techies.html?phint=newt%3Dcomputerworld_dailynews&phint=idg_eid%3D941a22643e0b40a439bb47c465068c71#slide11

****Jeremy Howard, Machine Learning

http://www.computerworld.com/article/2975866/emerging-technology/10-ted-talks-for-techies.html?phint=newt%3Dcomputerworld_dailynews&phint=idg_eid%3D941a22643e0b40a439bb47c465068c71#slide10

Soft AI? Strong AI?

How is Watson today different from the Watson that won on Jeopardy?

Cloud?

Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be immediately transferred to the others. And instead of one single program, it's an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.

All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service. According to quantitative analysis firm Quid, AI has attracted more than $17 billion in investments since 2009. Last year alone more than $2 billion was invested in 322 companies with AI-like technology. Facebook and Google have recruited researchers to join their in-house AI research teams. Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all purchased AI companies since last year. Private investment in the AI sector has been expanding 62 percent a year on average for the past four years, a rate that is expected to continue.

What is a Watson ecosystem? A Watson developer community? How is this related to business opportunities?

https://developer.ibm.com/watson/ use

Watson Ecosystem

http://www.ibm.com/smarterplanet/us/en/ibmwatson/ecosystem.html use

What is Watson Analytics? Business opportunity?

Doing your taxes?

What is an API? What does it let you do?

API, an abbreviation of application program interface, is a set of routines, protocols, and tools for building software applications. The API specifies how software components should interact; APIs are also used when programming graphical user interface (GUI) components.
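As a concrete (and hypothetical) illustration, the sketch below calls a made-up REST endpoint from Python; the URL and JSON fields are invented, but the request/response pattern is what any API-based product is built on.

# Minimal sketch of using a web API from Python. The endpoint and fields
# below are hypothetical, shown only to illustrate the request/response pattern.
import json
import urllib.request

def classify_text(text):
    """Send text to a (hypothetical) AI classification API and return its label."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        "https://api.example.com/v1/classify",   # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result.get("label")

# Usage (would only work against a real endpoint):
# print(classify_text("Where is my order?"))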

http://paidpost.nytimes.com/ca-technologies/apis-the-building-blocks-of-the-app-economy.html?_r=0

https://www.dsiglobal.com/go-digital-or-go-home-why-your-company-needs-a-digital-supply-chain/?utm_source=taboola&utm_medium=referral

What are some examples of AI in products today? Business opportunities for making them better?


Weather forecasts, email spam filtering, Google’s search predictions, and voice recognition, such as Apple’s Siri, are all examples. What these technologies have in common are machine-learning algorithms that enable them to react and respond in real time.

AI is already reshaping workflow management tools, trend predictions and the way brands purchase advertising. It can detect irregular patterns, such as spam filtering or payment fraud, and alert businesses in real time about suspicious activities. Businesses can “train” AI machines to handle incoming customer support calls, reducing costs. It can even be used to optimize the sales funnel by scanning the database and searching the Web for prospects that exhibit the same buying patterns as existing customers. AI can also analyze massive amounts of genomic data, leading to more accurate prevention and treatment of medical conditions on a personalized level.
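The spam- and fraud-detection uses above come down to anomaly detection: flag the records that do not look like the rest. A minimal sketch with scikit-learn's IsolationForest on synthetic transactions (assuming scikit-learn is available):

# Minimal anomaly-detection sketch: flag unusual transactions in real time.
# Requires scikit-learn; the "transactions" below are synthetic toy data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=10, size=(500, 2))     # typical amount/time pairs
suspicious = np.array([[500.0, 3.0], [2.0, 480.0]])      # two odd transactions
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(transactions)                      # -1 = anomaly, 1 = normal

print("flagged indices:", np.where(flags == -1)[0])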

Watson ecosystem partners range from a health management company in South Africa (Metropolitan Health) to a retail sales training app maker in Britain (Red Ant). They include GenieMD, which is using Watson to make health recommendations to patients, and SparkCognition, which is applying Watson to computer security.

Google plus Orvis = buy an ad on NY Times for this person

See article below on concerns about having algorithms make decisions in business.

What products or types of products could be improved through AI? Soft and Strong?

There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it's here.

Can you think of any for Watson?

North Face

College student

How about for soft AI?


How is Google really an AI company?

Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter Bunny looks like. Each of the 12.1 billion queries that Google's 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google's main product will not be search but AI.

What happened to make AI suddenly possible?

1. Cheap parallel computation

Thinking is an inherently parallel process, billions of neurons firing simultaneously to create synchronous waves of cortical computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain—mutually interacting with its neighbors to make sense of the signals it receives. To recognize a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it—both deeply parallel tasks. But until recently, the typical computer processor could only ping one thing at a time.

That began to change more than a decade ago, when a new kind of chip, called a graphics processing unit, or GPU, was devised for the intensely visual—and parallel—demands of videogames, in which millions of pixels had to be recalculated many times a second. That required a specialized parallel computing chip, which was added as a supplement to the PC motherboard. The parallel graphical chips worked, and gaming soared. By 2005, GPUs were being produced in such quantities that they became much cheaper. In 2009, Andrew Ng and a team at Stanford realized that GPU chips could run neural networks in parallel.

Serial and Parallel Computing

Traditionally, software has been written for serial computation:

A problem is broken into a discrete series of instructions.
Instructions are executed sequentially, one after another.
They are executed on a single processor.
Only one instruction may execute at any moment in time.

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:

A problem is broken into discrete parts that can be solved concurrently.
Each part is further broken down to a series of instructions.
Instructions from each part execute simultaneously on different processors.
An overall control/coordination mechanism is employed.
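To make the serial/parallel contrast concrete, here is a minimal Python sketch (a toy CPU-bound task; the function and numbers are arbitrary) that runs the same jobs first one after another and then across several processes:

# Serial vs. parallel execution of the same CPU-bound task.
import time
from concurrent.futures import ProcessPoolExecutor

def work(n):
    """A deliberately slow, CPU-bound computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    start = time.perf_counter()
    serial_results = [work(n) for n in jobs]          # one instruction stream
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:               # parts solved concurrently
        parallel_results = list(pool.map(work, jobs))
    print(f"parallel: {time.perf_counter() - start:.2f}s")

    assert serial_results == parallel_results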


http://www.nvidia.com/object/what-is-gpu-computing.html

2. Big Data

Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples before it can distinguish between cats and dogs. That's even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe became the teachers making AI smart.

More data = More learning
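That "more data = more learning" claim can be seen in miniature by training the same model on ever larger slices of a dataset and watching held-out accuracy rise (a sketch using scikit-learn's small built-in digits sample; an illustration, not the web-scale training the article describes):

# More data = more learning: the same model, trained on larger and larger
# slices of the data, usually scores better on held-out examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> test accuracy {model.score(X_test, y_test):.2f}")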

3. Better algorithms

Digital neural nets were invented in the 1950s, but it took decades for computer scientists to learn how to tame the astronomically huge combinatorial relationships between a million—or 100 million—neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing that a face is a face. When a group of bits in a neural net are found to trigger a pattern—the image of an eye, for instance—that result is moved up to another level in the neural net for further parsing. The next level might group two eyes together and pass that meaningful chunk onto another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face.
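A minimal sketch of the stacked-layers idea, in plain NumPy: each layer transforms the signal it receives and passes the result up to the next layer, which is all a deep network's forward pass does. The weights here are random, so the network recognizes nothing; it only shows the layered structure the paragraph describes.

# A bare-bones forward pass through stacked layers (randomly initialized,
# so it recognizes nothing -- it only shows the layered structure).
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [784, 256, 64, 10]   # e.g. pixels -> features -> features -> classes

# One weight matrix and bias vector per layer.
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Push an input vector up through the stack, one layer at a time."""
    for W, b in zip(weights, biases):
        x = np.maximum(0, x @ W + b)   # linear transform + ReLU non-linearity
    return x

image = rng.random(784)               # stand-in for a flattened image
print(forward(image).shape)           # (10,) -- one score per class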

****Jeremy Howard, Machine Learning

http://www.computerworld.com/article/2975866/emerging-technology/10-ted-talks-for-techies.html?phint=newt%3Dcomputerworld_dailynews&phint=idg_eid%3D941a22643e0b40a439bb47c465068c71#slide10

IMMEDIATE FRONTIERS OF AI

Deep Learning
Neuromorphic Chips – Qualcomm
DARPA SyNAPSE
Facebook M

Deep Learning

Deep learning (also known as deep machine learning, deep structured learning, hierarchical learning, or sometimes DL) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations.

Various deep learning architectures such as deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics where they have been shown to produce state-of-the-art results on various tasks.

Some of the learning problems addressed by deep learning (a short illustration follows the list):

Classification
Clustering
Regression
Anomaly detection
Association rules
Reinforcement learning
Structured prediction
Feature engineering
Feature learning
Online learning
Semi-supervised learning
Unsupervised learning
Learning to rank
Grammar induction
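As a quick illustration of the first two items on that list, the sketch below fits a classifier and a clustering model on a small built-in dataset with scikit-learn (assumed installed); each of the other problems has a similarly standard entry point.

# Classification (supervised) and clustering (unsupervised) on toy data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classification: learn labels from examples.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Clustering: find structure with no labels at all.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])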

What is DARPA SyNAPSE?

DARPA SyNAPSE Program

Last updated: Jan 11, 2013


The Brain Wall - a neural network visualisation tool built by SyNAPSE researchers at IBM

SyNAPSE is a DARPA-funded program to develop electronic neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats.

DARPA’s SyNAPSE
http://www.artificialbrains.com/darpa-synapse-program

SyNAPSE is a complex, multi-faceted project, but traces its roots to two fundamental problems. First, traditional algorithms perform poorly in the complex, real-world environments that biological agents thrive in. Biological computation, in contrast, is highly distributed and deeply data-intensive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms. SyNAPSE seeks both to advance the state of the art in biological algorithms and to develop a new generation of nanotechnology necessary for the efficient implementation of those algorithms.

*****http://celest.bu.edu/outreach-and-impacts/the-synapse-project/

Neuromorphic Chips - Qualcomm

http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/


The Brain Chip
http://techcrunch.com/2015/01/31/the-ongoing-quest-for-the-brain-chip/

Neuromorphic Chips and Cognitive Computing

Technology Review


Neuromorphic Chips

Microprocessors configured more like brains than traditional chips could soon make computers far more astute about what’s going on around them.

Breakthrough

An alternative design for computer chips that will enhance artificial intelligence.

Why It Matters

Traditional chips are reaching fundamental performance limits.

Key Players

Qualcomm
IBM
HRL Laboratories
Human Brain Project

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.


Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
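The "connections change with experience" idea in that paragraph can be sketched with a toy Hebbian learning rule. This is a conceptual illustration in plain NumPy, not a model of Qualcomm's Zeroth chip or any real neuromorphic hardware.

# Toy Hebbian learning: a connection strengthens when the neurons on both
# ends are active at the same time ("fire together, wire together").
# Conceptual sketch only -- not a model of any real neuromorphic chip.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_outputs = 8, 3
weights = rng.random((n_inputs, n_outputs)) * 0.1     # weak random connections
learning_rate = 0.05

pattern = (rng.random(n_inputs) > 0.5).astype(float)  # a recurring stimulus

for _ in range(30):
    activity = pattern @ weights                             # output neurons respond
    weights += learning_rate * np.outer(pattern, activity)   # Hebbian update
    weights = np.clip(weights, 0.0, 1.0)                     # keep weights bounded

# Rows for inputs that were part of the pattern end up with strong connections.
print(np.round(weights, 2))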

Qualcomm could add a “neural processing unit” to mobile-phone chips to handle sensory data and tasks such as image recognition.

Even if neuromorphic chips are nowhere near as capable as the brain, they should be much faster than current computers at processing sensory data and learning from it. Trying to emulate the brain just by using special software on conventional processors—the way Google did in its cat experiment—is way too inefficient to be the basis of machines with still greater intelligence, says Jeff Hawkins, a leading thinker on AI who created the Palm Pilot before cofounding Numenta, a maker of brain-inspired software. “There’s no way you can build it [only] in software,” he says of effective AI. “You have to build this in silicon.”

Neural Channel

As smartphones have taken off, so has Qualcomm, whose market capitalization now tops Intel’s. That’s thanks in part to the hundreds of wireless-communications patents that Qualcomm shows off on two levels of a seven-story atrium lobby at its San Diego headquarters. Now it’s looking to break new ground again. First in coöperation with Brain Corp., a neuroscience startup it invested in and that is housed at its headquarters, and more recently with its own growing staff, it has been quietly working for the past five years on algorithms to mimic brain functions as well as hardware to execute them. The Zeroth project has initially focused on robotics applications because the way robots can interact with the real world provides broader lessons about how the brain learns—lessons that can then be applied in smartphones and other products. Its name comes from Isaac Asimov’s “Zeroth Law” of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The idea of neuromorphic chips dates back decades. Carver Mead, the Caltech professor emeritus who is a legend in integrated-circuit design, coined the term in a 1990 paper, describing how analog chips—those that vary in their output, like real-world phenomena, in contrast to the binary, on-or-off nature of digital chips—could mimic the electrical activity of neurons and synapses in the brain. But he struggled to find ways to reliably build his analog chip designs. Only one arguably neuromorphic processor, a noise suppression chip made by Audience, has sold in the hundreds of millions. The chip, which is based on the human cochlea, has been used in phones from Apple, Samsung, and others.

As a commercial company, Qualcomm has opted for pragmatism over sheer performance in its design. That means the neuromorphic chips it’s developing are still digital chips, which are more predictable and easier to manufacture than analog ones. And instead of modeling the chips as closely as possible on actual brain biology, Qualcomm’s project emulates aspects of the brain’s behavior. For instance, the chips encode and transmit data in a way that mimics the electrical spikes generated in the brain as it responds to sensory information. “Even with this digital representation, we can reproduce a huge range of behaviors we see in biology,” says M. Anthony Lewis, the project engineer for Zeroth.

The chips would fit neatly into the existing business of Qualcomm, which dominates the market for mobile-phone chips but has seen revenue growth slow. Its Snapdragon mobile-phone chips include components such as graphics processing units; Qualcomm could add a “neural processing unit” to the chips to handle sensory data and tasks such as image recognition and robot navigation. And given that Qualcomm has a highly profitable business of licensing technologies to other companies, it would be in a position to sell the rights to use algorithms that run on neuromorphic chips. That could lead to sensor chips for vision, motion control, and other applications.

Cognitive Companion

Matthew Grob was startled, then annoyed, when he heard the theme to Sanford and Son start playing in the middle of a recent meeting. It turns out that on a recent trip to Spain, he had set his smartphone to issue a reminder using the tune as an alarm, and the phone thought it was time to play it again. That’s just one small example of how far our personal devices are from being intelligent. Grob dreams of a future when instead of monkeying with the settings of his misbehaving phone, as he did that day, all he would have to do is bark, “Don’t do that!” Then the phone might learn that it should switch off the alarm when he’s in a new time zone.

Qualcomm is especially interested in the possibility that neuromorphic chips could transform smartphones and other mobile devices into cognitive companions that pay attention to your actions and surroundings and learn your habits over time. “If you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs,” says Samir Kumar, a business development director at Qualcomm’s research lab.

Pressed for examples, Kumar ticks off a litany: If you tag your dog in a photo, your phone’s camera would recognize the pet in every subsequent photo. At a soccer game, you could tell the phone to snap a photo only when your child is near the goal. At bedtime, it would know without your telling it to send calls to voice mail. In short, says Grob, your smartphone would have a digital sixth sense.

Qualcomm executives are reluctant to embark on too many flights of fancy before their chip is even available. But neuromorphic researchers elsewhere don’t mind speculating. According to Dharmendra Modha, a top IBM researcher in San Jose, such chips might lead to glasses for the blind that use visual and auditory sensors to recognize objects and provide audio cues; health-care systems that monitor vital signs, provide early warnings of potential problems, and suggest ways to individualize treatments; and computers that draw on wind patterns, tides, and other indicators to predict tsunamis more accurately. At HRL this summer, principal research scientist Narayan Srinivasa plans to test a neuromorphic chip in a bird-size device from AeroVironment that will be flown around a couple of rooms. It will take in data from cameras and other sensors so it can remember which room it’s in and learn to navigate that space more adeptly, which could lead to more capable drones.

It will take programmers time to figure out the best way to exploit the hardware. “It’s not too early for hardware companies to do research,” says Dileep George, cofounder of the artificial-intelligence startup Vicarious. “The commercial products could take a while.” Qualcomm executives don’t disagree. But they’re betting that the technology they expect to launch this year will bring those products a lot closer to reality.

Why are some people (Bill Gates and Stephen Hawking) very afraid of AI?

Singularity?

A technological singularity is a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.

FACEBOOK M

Computerworld | Nov 3, 2015 1:10 PM PT

Striving to keep up with the increasing demands of delivering users' News Feeds, Facebook is pressing hard to advance artificial intelligence.


Today the company reported that it's making headway in this area to create and test an artificial intelligence assistant dubbed "M."

Unlike other intelligent systems that might tell the user what the weather will be or pull up a map, M is designed to complete multi-step tasks.

M, according to Facebook, is set up to purchase a gift, for example, and have it delivered to your mother. It also will be able to make travel arrangements and appointments and book restaurant reservations.

The new assistant program is in what Facebook calls a "small test" that is showing promise. However, to handle complicated requests and tasks, the system needs more advances in machine learning, language, vision, prediction and planning.

NYT

If Algorithms Know All, How Much Should Humans Help?
APRIL 6, 2015

Steve Lohr

@SteveLohr

Armies of the finest minds in computer science have dedicated themselves to improving the odds of making a sale. The Internet-era abundance of data and clever software has opened the door to tailored marketing, targeted advertising and personalized product recommendations.

Shake your head if you like, but that’s no small thing. Just look at the technology-driven shake-up in the advertising, media and retail industries.

This automated decision-making is designed to take the human out of the equation, but it is an all-too-human impulse to want someone looking over the result spewed out of the computer. Many data quants see marketing as a low-risk — and, yes, lucrative — petri dish in which to hone the tools of an emerging science. “What happens if my algorithm is wrong? Someone sees the wrong ad,” said Claudia Perlich, a data scientist who works for an ad-targeting start-up. “What’s the harm? It’s not a false positive for breast cancer.”

But the stakes are rising as the methods and mind-set of data science spread across the economy and society. Big companies and start-ups are beginning to use the technology in decisions like medical diagnosis, crime prevention and loan approvals. The application of data science to such fields raises questions of when close human supervision of an algorithm’s results is needed.


These questions are spurring a branch of academic study known as algorithmic accountability. Public interest and civil rights organizations are scrutinizing the implications of data science, both the pitfalls and the potential. In the foreword to a report last September, “Civil Rights, Big Data and Our Algorithmic Future,” Wade Henderson, president of The Leadership Conference on Civil and Human Rights, wrote, “Big data can and should bring greater safety, economic opportunity and convenience to all people.”

Take consumer lending, a market with several big data start-ups. Its methods amount to a digital-age twist on the most basic tenet of banking: Know your customer. By harvesting data sources like social network connections, or even by looking at how an applicant fills out online forms, the new data lenders say they can know borrowers as never before, and more accurately predict whether they will repay than they could have by simply looking at a person’s credit history.

The promise is more efficient loan underwriting and pricing, saving millions of people billions of dollars. But big data lending depends on software algorithms poring through mountains of data, learning as they go. It is a highly complex, automated system — and even enthusiasts have qualms.

“A decision is made about you, and you have no idea why it was done,” said Rajeev Date, an investor in data-science lenders and a former deputy director of the Consumer Financial Protection Bureau. “That is disquieting.”

The concern is similar in other fields. Since its Watson computer beat human “Jeopardy” champions four years ago, IBM has taken its data-driven artificial intelligence technology well beyond brainy games. Health care has been a major initiative. The history of “expert” decision-support technology in medicine has been disappointing; the systems have not been smart or fast enough to really help doctors in day-to-day practice.

But IBM scientists in collaboration with researchers at leading medical groups — including the Cleveland Clinic, the Mayo Clinic and the Memorial Sloan Kettering Cancer Center — are making progress. Watson can read through medical documents at a pace incomprehensible to humans: many thousands per second, searching for clues, correlations and insights.

The software has been used to help train medical students and is starting to be deployed in clinical settings in oncology, offering diagnostic and treatment recommendations as a kind of quick-witted digital assistant.

IBM has also developed a software program called Watson Paths, which is a visual tool that allows a doctor to see the underlying evidence and inference paths Watson took in making a recommendation.

“It’s not sufficient to give a black-box answer,” said Eric Brown, IBM’s director of Watson technologies.

Watson Paths points to the need for some machine-to-man translation as data science advances. As Danny Hillis, an artificial intelligence expert, put it, “The key thing that will make it work and make it acceptable to society is story telling.” Not so much literal story telling, but more an understandable audit trail that explains how an automated decision was made. “How does it relate to us?” Mr. Hillis said. “How much of this decision is the machine and how much is human?”
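A rough flavor of that kind of audit trail, for a simple linear model rather than Watson Paths (which is IBM's own tool), is to report how much each input pushed a particular prediction up or down. A minimal sketch with scikit-learn on synthetic data, with made-up feature labels:

# A toy "audit trail" for one prediction: per-feature contributions of a
# linear model. Illustrative only -- not how Watson Paths itself works.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]  # made-up labels
X, y = make_classification(n_samples=1000, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant       # how each input moves the score
score = contributions.sum() + model.intercept_[0]

print(f"decision score: {score:+.2f} (positive pushes toward class 1)")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>15}: {c:+.2f}")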

Keeping a human in the loop is one approach. The new data-science lenders are animated by data and software. But one of the start-ups in San Francisco, Earnest, has at least one of its staff members review the predictive recommendations of its software, even if the algorithms are rarely overruled. “We think the human element will always be an important piece in our process to make sure we’re getting it right,” said Louis Beryl, co-founder and chief executive of Earnest.

But such a stance, others say, amounts to a comforting illusion — good marketing perhaps, but not necessarily good data science. Giving a person veto power in algorithmic systems, they say, introduces human bias. The promise of big data decision-making, after all, is that decisions based on data and analysis — more science, less gut feel and rule of thumb — will yield better outcomes.

Yet even if optimism is justified, there is a serious challenge, given the complexity and opacity of data science. Will a technology that promises large benefits on average sufficiently protect the individual from a mysterious and wayward decision that might have a lasting effect on a person’s life?

One solution, according to Gary King, director of Harvard’s Institute for Quantitative Social Science, may be for the human creators of the scoring algorithms to tweak them not so much for maximum efficiency or profit but to give somewhat greater weight to the individual, reducing the risk of getting it wrong.

In banking, for example, an algorithm might be tuned to reduce the probability of misclassifying a loan applicant as a deadbeat, even if the trade-off is a few more delinquent loans for the lender.
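Mr. King's tuning idea maps onto a standard knob: shift the probability threshold at which the model labels an applicant a likely defaulter. The sketch below uses synthetic data and a plain logistic model, purely to show the trade-off he describes; real underwriting models are far more involved.

# Trading a few more delinquent loans for fewer wrongly rejected applicants
# by raising the probability threshold for calling someone a likely defaulter.
# Synthetic data and a plain logistic model -- purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.85], random_state=0)  # 1 = will default
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_default = model.predict_proba(X_test)[:, 1]

for threshold in (0.5, 0.7, 0.9):            # higher threshold = fewer rejections
    reject = p_default >= threshold
    wrongly_rejected = np.mean(reject & (y_test == 0))   # good applicants turned away
    missed_defaults = np.mean(~reject & (y_test == 1))   # bad loans approved
    print(f"threshold {threshold:.1f}: wrongly rejected {wrongly_rejected:.3f}, "
          f"missed defaults {missed_defaults:.3f}")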

“The goal,” Mr. King said, “is not necessarily to have a human look at the outcome afterwards, but to improve the quality of the classification for the individual.”

In a sense, a math model is the equivalent of a metaphor, a descriptive simplification. It usefully distills, but it also somewhat distorts. So at times, a human helper can provide that dose of nuanced data that escapes the algorithmic automaton. “Often, the two can be way better than the algorithm alone,” Mr. King said.

Where are we?
