Tomaso Poggio: Why and When Can Deep Networks Avoid the Curse of Dimensionality


The Department of Applied Mathematics is pleased to host this series of colloquium lectures, funded in part by a generous gift from the Boeing Company. This series will bring to campus prominent applied mathematicians from around the world.


Speaker: Tomaso Poggio, Massachusetts Institute of Technology (MIT)

Date: February 22nd, 2018, 4pm, reception to follow

Location: SMI 102

Title: Why and When Can Deep Networks Avoid the Curse of Dimensionality

Abstract: In recent years, by exploiting machine learning, in which computers learn to perform tasks from sets of training examples, artificial-intelligence researchers have built impressive systems. Two of my former postdocs, Demis Hassabis and Amnon Shashua, are behind the two main success stories of AI so far: AlphaGo beating the best human players at Go, and Mobileye leading the automotive industry toward vision-based autonomous driving. There is, however, little in the way of a theory explaining why deep networks work so well. In this talk I will review an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. The class of deep convolutional networks represents an important special case that avoids the curse of dimensionality for the class of hierarchical, locally compositional functions. I will also sketch the vision of the NSF-funded, MIT-based Center for Brains, Minds and Machines, which strives to make progress on the science of intelligence by combining machine learning and computer science with neuroscience and cognitive science.
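For readers who want the flavor of the "exponentially better" claim, here is a condensed statement of the central approximation bounds from the 2017 review by Poggio, Mhaskar, Rosasco, Miranda, and Liao on which this talk is based; constants and technical conditions are omitted, so read it as a sketch rather than a precise theorem. To approximate a generic function $f$ of $n$ variables, with all partial derivatives up to order $m$ bounded, to accuracy $\epsilon$, a shallow (one-hidden-layer) network needs on the order of

$$N_{\text{shallow}} = O\left(\epsilon^{-n/m}\right)$$

units: the exponent grows with the dimension $n$, which is the curse of dimensionality. If instead $f$ is hierarchically, locally compositional, for example

$$f(x_1,\dots,x_8) = h_3\big(h_{21}(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)),\ h_{22}(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8))\big),$$

with each constituent function $h_{ij}$ depending on only two variables and having the same smoothness $m$, then a deep network whose connectivity mirrors this tree needs only

$$N_{\text{deep}} = O\left((n-1)\,\epsilon^{-2/m}\right)$$

units, linear in $n$. The gap between the two bounds is exponential in $n$, which is the precise sense in which deep, but not shallow, networks can avoid the curse of dimensionality for this class of functions.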
