“Building More Intelligent Computer Systems with Large-Scale Deep Learning”
Jeff Dean, Google Senior Fellow, Knowledge Group, Google
February 5, 2014
Three years ago we started a small effort to see if we could build training systems for large-scale deep neural networks and use these to make significant progress on various perceptual tasks. Since then, our software systems and algorithms have been used by dozens of different groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, ads click prediction, language translation, and various other tasks. In this talk, I’ll highlight some of the distributed systems and algorithms that we use in order to train large models quickly. I’ll then discuss ways in which we have applied this work to a variety of problems in Google’s products, usually in close collaboration with other teams. This talk describes joint work with many people at Google.
Jeff joined Google in 1999 and is currently a Google Senior Fellow in Google’s Knowledge Group, where he leads Google’s deep learning research team in Mountain View. He has co-designed and implemented five generations of Google’s crawling, indexing, and query serving systems, as well as major pieces of Google’s initial advertising and AdSense for Content systems. He is also a co-designer and co-implementer of Google’s distributed computing infrastructure, including the MapReduce, BigTable, and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He is currently working on large-scale distributed systems for machine learning. He is a Fellow of the ACM, a Fellow of the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM-Infosys Foundation Award in the Computing Sciences.