Machine Learning vs. Machine Intelligence

• Machine Learning (e.g. Deep Learning)
  – "Solve a specific task by defining and optimizing an objective function" (Yann LeCun)
  – Training is supervised, using labeled datasets ("this is a gorilla")
  – Training and execution are distinct phases
• Machine Intelligence (Numenta's HTM or IBM's CAL)
  – Systems which continuously and on their own detect and predict patterns and sequences in sensory data streams, and act on these predictions
  – System must have a notion of time
  – Integration of sensory and motor functions ⬄ robots
"Universal Cortical Engine"
• INPUT: spatial-temporal data streams of any kind
• OUTPUTS: predictions, context, stable concepts (SDR), motor commands
[Figure: array of columns of neurons (shown 4 high)]
Sparse Distributed Representations (SDR) – What and Why?
• Dense Representations
  – Few bits (8-128)
  – Example: ASCII "m" = 01101101
  – Efficient, but no semantic meaning
• Sparse Representations
  – Many bits (thousands), few 1's, mostly 0's
  – Appears inefficient, but evolution has picked it!
  – Each bit has semantic meaning
• Example of SDR use: union of properties
  Color 00000010001000000001100000000100 ('red')
  Shape 00001000100010100000100000000000 ('sphere')
  Union 00001010101010100001100000000100 ('red sphere')
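The union above is simply the bitwise OR of the property SDRs: a bit is set in the union if it is set in any input. A minimal sketch, using the slide's 32-bit examples (the helper name is ours, not Numenta's API):

```python
# Sketch of SDR union as bitwise OR, using the slide's 32-bit examples.
# Real SDRs have thousands of bits; the function name is illustrative.

def sdr_union(*sdrs):
    """Union of SDRs: a bit is 1 if it is 1 in any input SDR."""
    out = [0] * len(sdrs[0])
    for sdr in sdrs:
        for i, bit in enumerate(sdr):
            out[i] |= bit
    return out

red    = [int(b) for b in "00000010001000000001100000000100"]
sphere = [int(b) for b in "00001000100010100000100000000000"]

red_sphere = sdr_union(red, sphere)
print("".join(map(str, red_sphere)))
# -> 00001010101010100001100000000100  ('red sphere', as on the slide)
```

Because each bit carries semantic meaning, membership in the union can later be tested by checking that a property's 1-bits are all present, which is what makes unions of properties useful.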
ESCAPE
• 1000-node parallel machine intelligence system
  – per node: Xilinx Zynq dual A9 core + FPGA, 1 GB RAM, 6 × 2 bi-directional high-speed links
  – system topology: 3D mesh
  – very high bandwidth
• Dual purpose
  – will scale up CAL simulations to > 10⁸ realistic neurons
  – platform for design of a wafer-scale system
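The 3D-mesh topology pairs naturally with the six link pairs per node: one bi-directional link toward each of the ±x, ±y, ±z neighbors. A minimal addressing sketch; the 10 × 10 × 10 layout for 1000 nodes is our assumption for illustration:

```python
# Neighbor addressing in a 3D mesh, one link per face direction.
# The 10x10x10 arrangement of the 1000 nodes is an assumed layout.

DIM = 10  # 10 * 10 * 10 = 1000 nodes

def neighbors(x, y, z):
    """Return the in-mesh coordinates of a node's up-to-six neighbors."""
    deltas = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    result = []
    for dx, dy, dz in deltas:
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < DIM and 0 <= ny < DIM and 0 <= nz < DIM:
            result.append((nx, ny, nz))
    return result

print(len(neighbors(5, 5, 5)))  # interior node: all 6 links in use -> 6
print(len(neighbors(0, 0, 0)))  # corner node: only 3 in-mesh neighbors
```

In a mesh (unlike a torus) edge and corner nodes have fewer active links, which is one reason total bisection bandwidth depends on where traffic crosses the machine.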
Context Aware Learning
• Based on recent understanding of the neocortex
  – quite realistic neuron models
• Learning via formation of new synapses
  – dynamically changing network topology => deep hardware implications
• Unsupervised learning from raw data streams
  – no labels required
• Detects patterns, makes predictions, (may) take actions
  – everything is temporal
• Universality
• Closed loop: sensors – universal engine – actuators
  – robots
Embodied Cognition

• Intelligence starts with understanding sensor-motor interactions *
• "I believe that mobility, acute vision and the ability to carry out survival related tasks in a dynamic environment provide a necessary basis for the development of true intelligence."
• Human cognition is shaped by the motor and perceptual systems
• Intelligence emerges from interactions with the world
• Time is a critical factor
• Limited knowledge of the world; rely on context
• Noise and uncertainty are present
• The real world possesses a continuum of states
• Developmental approach to intelligence
  – walk before we run
  – grasp before we catch
* R. Brooks, "Intelligence Without Representation", 1991. Director, MIT Artificial Intelligence Laboratory, 545 Technology Square, Rm. 836, Cambridge, MA 02139, USA; Founder and CTO, iRobot; Chairman and CTO, Rethink Robotics
How Kate learns to walk farther, unsupervised

• Focus: a robot that learns to walk robustly
• Biological architecture:
  – Central Pattern Generator (CPG) coordinates actuation
  – Contextual control to predict and provide appropriate mitigation
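A CPG is often modeled as coupled oscillators whose phases settle into a fixed gait pattern. A minimal sketch of two phase oscillators coupled into anti-phase, driving left and right hip targets; the frequency, gains, and coupling strength are illustrative assumptions, not Kate's parameters:

```python
import math

# Minimal CPG sketch: two coupled phase oscillators converge to
# anti-phase and drive left/right hip position targets.
# All constants are illustrative assumptions.

def cpg_step(phases, dt=0.01, freq_hz=1.0, coupling=2.0):
    """Advance the two oscillators one time step; return new phases."""
    omega = 2 * math.pi * freq_hz
    left, right = phases
    # Each oscillator is pulled toward being pi radians from the other.
    d_left  = omega + coupling * math.sin(right - left - math.pi)
    d_right = omega + coupling * math.sin(left - right - math.pi)
    return (left + d_left * dt, right + d_right * dt)

def hip_commands(phases, amplitude=0.3):
    """Map oscillator phases to hip motor position targets (radians)."""
    return tuple(amplitude * math.sin(p) for p in phases)

phases = (0.0, 0.1)            # start nearly in phase
for _ in range(5000):          # coupling drives them toward anti-phase
    phases = cpg_step(phases)

diff = (phases[0] - phases[1]) % (2 * math.pi)
print(round(diff, 2))          # phase difference settles near pi (~3.14)
print(hip_commands(phases))    # left/right targets, roughly opposite
```

The attraction to anti-phase is what makes the pattern robust: a disturbance that perturbs one leg's phase is pulled back toward the gait's limit cycle rather than accumulating.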
Team Brazil: V. Albouy, A. Asseman, D. Barros, H. Carbone, I. Carvalho, C. Chaves, M. Desta, R. Gaspar, I. Godoy, P. Ludwig, B. Lyo, T. Mantelato and L. Munhoz
Bipedal robots

• KAIST DRC-HUBO
• Boston Dynamics Atlas
• Honda Asimo
• TU Delft
• NASA Valkyrie
• Lola
• Toro
• ATRIAS
• HRP-4
• HRP-4C
• Dr. Guero
[Figure: "Slow walk data" — amplitude (arb.) vs. time (sec) for roll acceleration, pitch acceleration, hip motor position, and leg motor position]
Kate Control Structure

[Diagram: low-level walking controls (motors; foot sensors, inclinometer; mitigation) connected over USB and TCP/IP to an iPad and a Mac running CAL; signals include accel, gyro, voice (STT/TTS), video, touch, foot sensor, inclinometer, torque, and motor position]
• Learn contexts. Examples:
  – time sequence of angular attitude, e.g. roll, pitch
  – time sequence of motor torques
  – time sequence of foot-lift durations
  – Context can be any or all of the above, but for this study we used roll
• Develop expectations based on context
• Discern contexts as known or novel sequences
  – In a known sequence, check whether the expectations are fulfilled
  – In a period of novel sequences, learn the sequence
  – In a period of known sequences, flag unexpected observations as an anomaly
• Provide appropriate actuation
  – No anomaly: no action
  – An anomaly triggers mitigation (pause)
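The loop above can be sketched as a toy sequence learner over discretized roll readings: learn transitions during novel periods, then flag surprises in known contexts. The discretization and the `pause` mitigation are simplified assumptions, not the CAL implementation:

```python
from collections import defaultdict

# Toy sketch of the context loop: learn transitions between discretized
# roll readings, then flag unexpected transitions in known contexts.

class ContextLearner:
    def __init__(self):
        self.expected = defaultdict(set)   # context -> expected next values

    def step(self, prev, current, learning=False):
        """Classify one observation as 'learn', 'ok', or 'anomaly'."""
        if learning or prev not in self.expected:
            self.expected[prev].add(current)   # novel sequence: learn it
            return "learn"
        if current in self.expected[prev]:     # known sequence, fulfilled
            return "ok"
        return "anomaly"                       # known context, surprise

def mitigate(status):
    """No anomaly -> no action; an anomaly triggers a pause."""
    return "pause" if status == "anomaly" else "continue"

learner = ContextLearner()
good_steps = [0, 1, 2, 1, 0, 1, 2, 1, 0]       # repeating roll pattern
for prev, cur in zip(good_steps, good_steps[1:]):
    learner.step(prev, cur, learning=True)

print(learner.step(2, 1))   # expected transition -> 'ok'
print(learner.step(2, 2))   # unexpected in a known context -> 'anomaly'
print(mitigate("anomaly"))  # -> 'pause'
```

Real sequence memory (HTM-style) would use high-order temporal context rather than single-step transitions, but the control flow (learn when novel, flag when known-and-surprising, pause on anomaly) is the same.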
How we learn to walk farther

Summary
• First results extending MFPT with context-aware learning
• Learning contexts for good steps
• Discerning anomalies and mitigating
• Robots will provide large, correlated datasets
• Significant opportunity for unsupervised learning
Prior work

• R. Tedrake - MIT
  – Atlas, Valkyrie
• K. Byl - UCSB (student of Tedrake)
• T. McGeer
  – passive dynamic walking
• M. Vukobratovic
  – ZMP
• M. Grizzle - U. Michigan
  – limit cycle analysis
• Ames - Oregon State Univ
  – ATRIAS, MABEL, Thumper
• Hobbelen - TU Delft
  – limit cycle walking
• J. Pratt
  – virtual model control
• Statically stable - used in early robots, slow
• Zero Moment Point (ZMP) - stance foot is always flat on the ground
• Limit cycle walking - only dynamically stable, most efficient
• Hybrid zero dynamics
  – holonomically constrained knee / ankle
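The ZMP criterion above can be made concrete with a small sketch: for point masses on flat ground, the ZMP is where the net ground-reaction moment vanishes, and in the static case it reduces to the ground projection of the center of mass. The formula is the textbook planar form and all mass values are illustrative assumptions:

```python
# Sketch: Zero Moment Point of point masses on flat ground
# (x forward, z up). Masses and positions are illustrative.
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses):
    """masses: list of (m, x, z, x_ddot, z_ddot). Returns ZMP x-coord."""
    num = sum(m * ((zdd + G) * x - z * xdd) for m, x, z, xdd, zdd in masses)
    den = sum(m * (zdd + G) for m, x, z, xdd, zdd in masses)
    return num / den

# Static case (all accelerations zero): ZMP = COM x-projection.
static = [(30.0, 0.05, 0.9, 0.0, 0.0),    # torso mass
          (10.0, -0.05, 0.4, 0.0, 0.0)]   # leg mass
print(round(zmp_x(static), 3))  # COM x = (30*0.05 - 10*0.05)/40 = 0.025
```

A ZMP walker keeps this point inside the flat stance foot at all times, which guarantees the foot never rolls but constrains the gait to be conservative; limit-cycle walking drops that pointwise constraint in exchange for efficiency.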