  • DeepMath 2020

    Conference program, November 5-6, 2020 (virtual)


  • Contents

    About Deepmath

    Important Information

    Timetable
        Thursday, Nov 5
        Friday, Nov 6

    List of Posters
        List of Poster Presentations

    Sponsors


  • About Deepmath

    Recent advances in deep neural networks (DNNs), combined with open, easily-accessible implementations, have made DNNs a powerful, versatile method used widely in both machine learning and neuroscience. These advances in practical results, however, have far outpaced a formal understanding of these networks and their training. The dearth of rigorous analysis for these techniques limits their usefulness in addressing scientific questions and, more broadly, hinders systematic design of the next generation of networks. Recently, long-past-due theoretical results have begun to emerge from researchers in a number of fields. The purpose of this conference is to give visibility to these results, and those that will follow in their wake, to shed light on the properties of large, adaptive, distributed learning architectures, and to revolutionize our understanding of these systems.


  • Important Information

    • The conference venue. The focal point of the event is our virtual conference hall, which you can access at https://gather.town/app/AceGf3n47aKkqZ1P/DeepMath2020 (log in with the same email you used to register for the conference).

    • Posters. The poster sessions are 2pm-4pm (14:00-16:00) EST on Thursday and 11:45am-1:45pm (11:45-13:45) EST on Friday. Presenters of even-numbered posters are asked to be at their posters on Thursday, and presenters of odd-numbered posters on Friday.

    • Best poster voting. We are crowd-sourcing votes for the DeepMath 2020 best poster and best student poster (with a secret prize to be announced after the conference). Please vote for your favorite poster at https://forms.gle/tZwFjSNpB7LqQtVL6.

    Questions / Comments / Concerns? Contact one of the organizers in Gather, or email [email protected].



  • Timetable

    Thursday, Nov 5

    Friday, Nov 6


  • List of Posters

    List of Poster Presentations

    ID 1: The Recurrent Neural Tangent Kernel
    Sina Alemohammad*; Randall Balestriero; Zichao Wang; Richard Baraniuk, Rice University

    ID 2: Provable regression with nets of finite size
    Anirbit Mukherjee*; Ramchandran Muthukumar, Johns Hopkins University

    ID 3: A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
    Ambar Pal*; Rene Vidal, Johns Hopkins University

    ID 4: Minimal Adversarial Perturbations via Eikonal Equations
    Nurislam Tursynbek*; Ivan Oseledets, Skolkovo Institute of Science and Technology

    ID 5: Learning Interpretable Representations using Operators Distributed across the Latent Space
    Stephane Deny*; Diane Bouchacourt; Mark Ibrahim, Stanford and Facebook AI Research

    ID 6: A nonparametric model of independence for cognitive feature representations
    Gregory Henselman-Petrusek*; Tyler Giallanza; Simon Segert; Sebastian Musslick; Jonathan Cohen, Princeton University

    ID 7: Random feature networks with neuronal tuning
    Biraj Pandey*; Kameron Decker Harris; Bingni Brunton, University of Washington

    ID 8: Effective Theory of Multilayer Perceptrons
    Boris Hanin; Daniel A Roberts*; Sho Yaida, Texas A&M, MIT, and Facebook Research

    ID 9: The Gaussian Equivalence of generative models for learning with two-layer neural networks
    Sebastian Goldt*; Galen Reeves; Marc Mézard; Florent Krzakala; Lenka Zdeborova, Laboratoire de physique théorique, Ecole normale supérieure, Paris

    ID 10: Error Estimates of Several Physics-informed Neural Networks
    Zhongqiang Zhang*; Yeonjong Shin; George Karniadakis, Worcester Polytechnic Institute and Brown University

    ID 11: Theoretical Analysis of the Representational Power of GANs and Flow Models
    Frederic Koehler; Viraj Mehta; Andrej Risteski*, MIT and CMU


    ID 12: To interact or not? The convergence properties of interacting stochastic mirror descent
    Anastasia Borovykh*; Nikolas Kantas; Panos Parpas; Grigorios Pavliotis, Imperial College London

    ID 13: Analytic Characterization of the Hessian Spectral Density in Shallow ReLU Models: A Tale of Symmetry
    Yossi Arjevani*; Michael Field, NYU and UCSB

    ID 14: Interpreting Deep Learning Models: Flip Points and Homotopy Methods
    Roozbeh Yousefzadeh*; Dianne O’Leary; Furong Huang, Yale University

    ID 15: Activation function dependence of the storage capacity of treelike neural networks
    Jacob A Zavatone-Veth*; Cengiz Pehlevan, Harvard University

    ID 16: Guarantees for stable signal and gradient propagation in self-attentive recurrent networks
    Giancarlo B Kerg*; Bhargav Kanuparthi; Anirudh Goyal; Kyle Goyette; Yoshua Bengio; Guillaume Lajoie, MILA

    ID 17: Consistency of a Recurrent Language Model With Respect to Incomplete Decoding
    Sean Welleck*; Ilia Kulikov; Jaedeok Kim; Richard Yuanzhe Pang; Kyunghyun Cho, New York University

    ID 18: Learning a Lie Algebra from Unlabeled Data Pairs
    Christopher A Ick*; Vincent Lostanlen, New York University

    ID 19: MomentumRNN: Integrating Momentum into Recurrent Neural Networks
    Tan Minh Nguyen*; Richard Baraniuk; Andrea L. Bertozzi; Stanley Osher; Bao Wang, Rice University

    ID 20: Complexity Bounds for Neural Networks with Encodable Weights in Smoothness Spaces
    Ingo Gühring*; Mones Raslan, Technische Universität Berlin

    ID 21: On 1/n neural representation and robustness
    Josue Nassar*; Piotr Sokol; Sueyeon Chung; Kenneth Harris; Memming Park, Stony Brook University

    ID 22: A theory of robust and flexible sequencing through multi-dimensional transients controlled by low-rank connectivity perturbations in recurrent networks
    Laureline Logiaco*; Larry Abbott; G Sean Escola, Columbia University

    ID 23: Identifying stochastic oracles for fast convergence of RMSProp
    Jiayao Zhang*; Anirbit Mukherjee, University of Pennsylvania

    ID 24: LOCA: LOcal Conformal Autoencoder
    Ofir Lindenbaum*; Erez Peterfreund; Tom Bertalan; Felix Dietrich; Matan Gavish; Ioannis Kevrekidis; Ronald Coifman, Yale University

    ID 25: Analytical aspects of non-differentiable neural networks
    Matteo Spallanzani*; Gian Paolo Leonardi, ETH Zurich

    ID 26: Extendable and invertible manifold learning with geometry regularized autoencoders
    Andres F Duque*; Sacha Morin; Guy Wolf; Kevin Moon, Utah State University

    ID 27: Generalization of Kernel Regression for Realistic Datasets
    Blake A Bordelon*; Abdulkadir Canatar; Cengiz Pehlevan, Harvard University

    ID 28: The interplay between randomness and structure during learning in RNNs
    Friedrich Schuessler*; Francesca Mastrogiuseppe; Alexis Dubreuil; Srdjan Ostojic; Omri Barak, Technion

    ID 29: Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
    Bao Wang; Tan Minh Nguyen*; Tao Sun; Andrea L. Bertozzi; Richard Baraniuk; Stanley Osher, University of Utah and Rice University

    ID 30: Neural Anisotropy Directions
    Guillermo Ortiz-Jimenez*; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard, EPFL

    ID 31: Quantifying dynamic stability and signal propagation: Lyapunov spectra of chaotic recurrent neural networks
    Rainer Engelken*; Fred Wolf; Larry Abbott, Columbia University

    ID 32: Multiple Timescales of Activity in Recurrent Neural Networks with Clustered Connectivity
    Merav Stern*; Nicolae Istrate; Luca Mazzucato, University of Oregon

    ID 33: Gradient Starvation: A Theory of Learning in Neural Networks
    Mohammad Pezeshki*; Guillaume Lajoie; Yoshua Bengio; Aaron Courville; Doina Precup, Mila

    ID 34: Exact Polynomial-time Convex Optimization Formulations for ReLU Networks
    Tolga Ergen*; Mert Pilanci, Stanford University

    ID 35: Scattering Priors for Graph Neural Networks
    Alexander Y Tong*; Frederik Wenkel; Kincaid Macdonald; Guy Wolf; Smita Krishnaswamy, Yale University

    ID 36: When and How Can Deep Generative Models be Inverted?
    Aviad Aberdam*; Dror Simon; Michael Elad, Technion

    ID 37: Pruning neural networks without any data by iteratively conserving synaptic flow
    Hidenori Tanaka; Daniel Kunin*; Daniel Yamins; Surya Ganguli, NTT Research, Inc./Stanford University

    ID 38: Adaptivity of ReLU nets beyond low-dimensional domains
    Timo Klock*; Alexander Cloninger, Simula Research Lab/UCSD

    ID 39: Understanding Recurrent Neural Networks Using Nonequilibrium Response Theory
    Soon Hoe Lim*, Nordita

    ID 40: Geometry and Optimization of Shallow Polynomial Networks
    Matthew Trager*; Joe Kileel; Yossi Arjevani; Joan Bruna, NYU

    ID 41: RNNs can generate bounded hierarchical languages with optimal memory
    John Hewitt*; Michael Hahn; Surya Ganguli; Percy Liang; Christopher D. Manning, Stanford University

    ID 42: Learning high-dimensional Hilbert-valued functions with deep neural networks from limited data
    Sebastian A Moraga Scheuermann*; Nick Dexter; Simone Brugiapaglia; Ben Adcock, Simon Fraser University

    ID 43: Traversing the Optimization Landscape of Deep Neural Networks: How Good is Gradient Descent?
    Waheed Bajwa*; Rishabh Dixit, Rutgers University

    ID 44: Generalisation error in learning with random features and the hidden manifold model
    Federica Gerace*; Bruno Loureiro; Florent Krzakala; Marc Mézard; Lenka Zdeborova, Politecnico di Torino

    ID 45: Stability-performance barriers and practical function approximation with deep neural networks
    Nick Dexter*; Simone Brugiapaglia; Ben Adcock, Simon Fraser University

  • Sponsors

    The Deepmath conference would like to thank the Simons Foundation and the Princeton Neuroscience Institute for sponsoring this year’s virtual conference.

