Kernel, RKHS, and Gaussian Processes

Caution! No proof will be given.

Sungjoon Choi, SNU



Leveraged Gaussian Process Regression


Leveraged Gaussian Processes

The original Gaussian process regression anchors to positive training data.

The proposed leveraged Gaussian process regression anchors to positive data while avoiding negative data.

Moreover, the leverage of each training point can be varied from -1 to +1.
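The effect of a per-point leverage in [-1, +1] can be sketched with a toy GP regression in which each target is scaled by its leverage. This target-scaling scheme, and all names in it, are illustrative assumptions, not the paper's exact leveraged formulation:

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def leveraged_gp_mean(X, y, gamma, Xs, noise=0.1):
    """Toy leveraged GP posterior mean: each target y_i is scaled by its
    leverage gamma_i in [-1, +1], so +1 anchors the point, -1 pushes the
    prediction away from it, and 0 ignores it. (Illustrative scheme only.)"""
    K = rbf_kernel(X, X) + noise ** 2 * np.eye(len(X))
    ks = rbf_kernel(Xs, X)
    return ks @ np.linalg.solve(K, gamma * y)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 1.0, 1.0])
gamma = np.array([1.0, 1.0, -1.0])  # treat the third point as negative data
mu = leveraged_gp_mean(X, y, gamma, np.array([[2.0]]))
```

With all leverages at +1 this reduces to standard GP regression; a leverage of -1 flips the sign of that point's contribution, so the prediction is pushed away from the negative demonstration.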


Positive and Negative Motion Control Data for Autonomous Navigation


Training Phase


Execution Phase


Real-World Experiments


Leverage Optimization


Leverage Optimization

• The key intuition behind the leverage optimization is that we cast it as a model selection problem in Gaussian process regression.

• However, the number of leverage parameters equals the number of training data points.

• To handle this issue, we propose a sparse constrained leverage optimization, where we assume that the majority of the leverage parameters are +1.
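One way to write such a sparse-constrained model-selection objective is the GP negative log marginal likelihood plus an L1 penalty that pulls every leverage toward +1. The exact form below (leverage-scaled targets, penalty weight `lam`) is an assumption for illustration, not the paper's stated objective:

```python
import numpy as np

def nlml(K, y, noise):
    """Negative log marginal likelihood of GP regression with kernel matrix K."""
    Ky = K + noise ** 2 * np.eye(len(y))
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

def leverage_objective(gamma, K, y, noise=0.1, lam=0.5):
    """Assumed sparse-constrained objective: a model-selection term on
    leverage-scaled targets plus an L1 penalty on (1 - gamma), so that
    most leverages stay at +1."""
    return nlml(K, gamma * y, noise) + lam * np.abs(1.0 - gamma).sum()
```

The penalty vanishes exactly when all leverages are +1, so only points whose deviation improves the marginal likelihood enough are moved away from +1.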


Leverage Optimization (v1)

Using proximal linearized minimization [1], the update rule for solving the above optimization is

[1] J. Bolte, S. Sabach, and M. Teboulle, "Proximal alternating linearized minimization for nonconvex and nonsmooth problems," Mathematical Programming, vol. 146, no. 1-2, pp. 459–494, 2014.

where the proximal mapping becomes soft-thresholding:

where γ̄ = γ − 1.
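Soft-thresholding is the proximal mapping of the L1 norm, and a proximal gradient loop alternates a gradient step on the smooth part with it. The sketch below is the standard form of this technique, not the paper's exact update rule:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal mapping of tau * ||.||_1: shrink each entry toward zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, x0, step, tau, iters=200):
    """Proximal gradient descent on f(x) + tau * ||x||_1: take a gradient
    step on the smooth part f, then apply soft-thresholding."""
    x = x0.copy()
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * tau)
    return x

# For f(x) = 0.5 * ||x - b||^2 the minimizer is exactly soft_threshold(b, tau):
b = np.array([2.0, -0.5, 0.1])
x_hat = proximal_gradient(lambda x: x - b, np.zeros(3), step=0.5, tau=1.0)
```

Entries of `b` smaller in magnitude than `tau` are driven exactly to zero, which is what makes the L1 term produce sparse solutions.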


Leverage Optimization (v2)

We propose a new leverage optimization method that doubles the leverage parameters by splitting them into positive and negative parts.

Furthermore, we assume that multiple demonstrations are collected from one demonstrator.

By doing so, we gain two major benefits:

1. As the L1-norm regularizer is applied only to the positive parts of the leverages, the negative parts can be optimized more accurately.
2. By doubling the variables, the proximal mapping is no longer needed.
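The variable-splitting idea can be sketched as follows: write each leverage as x = p − m with p, m ≥ 0, so the L1 penalty on the positive part becomes a linear term and a projection onto the nonnegative orthant replaces the proximal mapping. The quadratic loss and all names below are assumptions for illustration:

```python
import numpy as np

def split_leverage_opt(grad_f, n, step=0.5, lam=1.0, iters=300):
    """Minimize f(p - m) + lam * sum(p) over p >= 0, m >= 0: the variable
    x = p - m is split into positive and negative parts, and the penalty
    acts only on the positive part. Projected gradient steps (clipping at
    zero) replace the proximal mapping."""
    p = np.zeros(n)
    m = np.zeros(n)
    for _ in range(iters):
        g = grad_f(p - m)                          # gradient of the smooth part
        p = np.maximum(p - step * (g + lam), 0.0)  # penalized positive part
        m = np.maximum(m - step * (-g), 0.0)       # unpenalized negative part
    return p - m

# Toy smooth loss f(x) = 0.5 * ||x - b||^2: positive entries shrink by lam,
# negative entries are recovered exactly.
b = np.array([2.0, -2.0])
x_hat = split_leverage_opt(lambda x: x - b, 2)
```

Because only `p` carries the penalty, negative values pass through unshrunk, matching the stated benefit that the negative parts are optimized more accurately.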


Sensory Field Reconstruction

(Figure: sensory observations are collected from correlated sensory fields, then leverage optimization is applied to reconstruct the field.)


Sensory Field Reconstruction

(Figure: reconstruction results from sensory observations of correlated sensory fields, comparing the proposed method, the previous method, and no optimization.)


Autonomous Driving Experiments

(Figure: driving demonstrations of mixed quality are collected, then leverage optimization is applied.)


Autonomous Driving Experiments

(Figure: proposed method vs. previous method.)


Autonomous Driving Experiments

(Figure: proposed method vs. without optimization.)
