Big Learning: Algorithms, Systems, and Tools
for Learning at Scale
December 17-18, 2011
http://biglearn.org
The Organizing Committee (14)
Primary Organizers
• Joseph Gonzalez (CMU)
• Sameer Singh (UMass Amherst)
• Alice Zheng (Microsoft Research)
• Graham Taylor (NYU)
• James Bergstra (Harvard)
• Misha Bilenko (Microsoft Research)
• Yucheng Low (CMU)

Advisory Committee
• Sugato Basu (Google Research)
• Alexander J. Smola (Yahoo/NICTA)
• Michael Franklin (Berkeley)
• Andrew McCallum (UMass Amherst)
• Yoshua Bengio (UMontreal)
• Carlos Guestrin (CMU)
• Michael Jordan (Berkeley)
Program Committee (44)
• Frederic Bastien
• Sugato Basu
• Ron Bekkerman
• Kedar Bellare
• Danny Bickson
• Joseph Bradley
• Mihai Budiu
• Polo Chau
• Dan Ciresan
• Ronan Collobert
• Ofer Dekel
• Gregory Druck
• Khalid El-Arini
• Clement Farabet
• Amit Goyal
• Arthur Gretton
• Firas Hamze
• Matt Hoffman
• Michael Isard
• Paul Ivanov
• Alex Krizhevsky
• Aapo Kyrola
• Anthony Lee
• Frank McSherry
• Roland Memisevic
• Volodymyr Mnih
• Anders Mueller
• Jim Mutch
• Alexandre Passos
• Nicolas Pinto
• Rajat Raina
• Karl Schultz
• Hannes Schulz
• Alex Smola
• Balaji Vasan Srinivasan
• Vasily Volkov
• Markus Weimer
• Kilian Weinberger
• Michael Wick
• Jing Xiang
• Yisong Yue
• Matei Zaharia
• Matthew Zeiler
• Martin Zinkevich
History of Big Learning at NIPS
• 2010: Learning on Cores, Clusters and Clouds (LCCC) Workshop
• 2009: Large-Scale Machine Learning: Parallelism and Massive Datasets
• 2008: Parallel Implementations of Learning Algorithms
• 2007: Efficient Machine Learning - Overcoming Computational Bottlenecks in Machine Learning
• 2007: Learning Using Many Examples Tutorial by Andrew Moore and Alex Gray
“Scaling Up ML” Book
• Cambridge U. Press, shipping January
• 21 contributed chapters
• Platforms
  – map-reduce, multi-node/core, GPU, FPGA, …
• Algorithms
  – Boosted trees, SVMs, DBNs, clustering, …
• Tasks and applications
  – Supervised, semi/unsupervised, online, feature selection, learning and inference in graphical models
  – Text classification, vision, speech recognition, …
• Representative yet very sparse sample of the field
Workshop Motivation
Big Learning encompasses:
• Learning on big data
• Hardware-accelerated learning
• Fast inference in graphical models
• Model selection and significance testing
• Analysis of parallel learning algorithms
• Great diversity across tasks, platforms, and algorithms
• Many common themes
  – Dataflow, distribution and coordination, speed-accuracy trade-offs, parallel/distributed convergence, …
Big Learning Workshop
• Friday morning: Hardware-accelerated learning
• Friday afternoon: Applications and methodology
• Saturday morning: Systems & Tools
• Saturday afternoon: Models & Algorithms
• Tutorials:
  – Vowpal Wabbit software: today (2:00 - 3:30)
  – GraphLab software: tomorrow (2:00 - 3:30)
Best Talk Award!
• Please vote for your favorite talk(s):
  – http://biglearn.org/besttalk.html
• Awesome mystery prize sponsored by NVIDIA!
  – What could it be? We're not telling!
Schedule Changes
• Change in afternoon session:
  5:25 - 5:55  Miguel Araujo and Charles Parker: Big Machine Learning Made Easy
  5:55 - 6:15  Tammo Kruger: Fast Cross-Validation via Sequential Analysis
  6:15 - 6:45  Poster Session
  6:45 - 7:30  Daniel Whiteson: Machine Learning's Role in the Search for Fundamental Particles
  7:30 - 7:50  Ariel Kleiner: Bootstrapping Big Data
Posters and Coffee Breaks
• Hang up your posters as early as possible
  – We will provide you with tape
  – Please do not hang posters on green walls
• Coffee station on Floor -1 (next to the Library)
Big Thanks to Our Sponsors