
Visit of Prof. Dan Alistarh

On Thursday at 18:30, our department hosted Dan Alistarh (Assistant Professor at IST Austria) at the Machine Learning group presentations in Bucharest:
https://www.meetup.com/Bucharest-Deep-Learning/events/254503107/
The presentation covered several topics related to machine learning and distributed processing, focusing on the results of Dan's team within an ERC Starting Grant project. Below are the title of the presentation, a brief abstract, and some information about Dan.

Title:
Scalable Distributed Machine Learning via Sparsification and Quantization

Abstract:
There has recently been significant interest in scalable training of massive machine learning models, in particular of deep neural networks.
To mitigate the high communication overheads of distributing such workloads, several communication-reduction approaches, such as quantization, large-batch methods, and gradient sparsification, have been proposed. Two significant issues hinder the widespread adoption of such methods: the lack of analytical understanding (in terms of convergence and hyper-parameter tuning), and the lack of general, efficient implementations.
In this talk, I will discuss the ScaleML project, which aims to address both of these issues. I will describe recent progress on understanding the impact of communication-reduction methods on the convergence of stochastic gradient descent, showing that they can provably preserve convergence while reducing communication costs by more than an order of magnitude. On the practical side, I will give an overview of the challenges of implementing these methods efficiently, focusing on providing efficient support for sparse collective operations, and on experimental results in Microsoft CNTK and Google TensorFlow.
(The talk will be in English, but the speaker can take questions in Romanian. It assumes some basic familiarity with optimization, but is otherwise self-contained.)
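To make the abstract's central idea more concrete, here is a minimal, purely illustrative sketch (not the speaker's actual method or code) of one communication-reduction technique it mentions: top-k gradient sparsification with local error feedback, where each worker transmits only the k largest-magnitude gradient entries and accumulates the dropped mass locally for later rounds. The function name and the error-feedback formulation are assumptions for illustration only.

```python
def sparsify_top_k(gradient, residual, k):
    """Illustrative top-k gradient sparsification for one worker step.

    gradient -- list of floats (the freshly computed gradient)
    residual -- list of floats (locally accumulated, not-yet-sent mass)
    k        -- number of entries to communicate this round

    Returns (sparse_update, new_residual), where sparse_update maps
    index -> value for the k entries that would be sent over the network.
    """
    # Error feedback: add back mass dropped in previous rounds, so that
    # sparsification delays updates rather than discarding them.
    corrected = [g + r for g, r in zip(gradient, residual)]
    # Select the k indices with the largest magnitude.
    top = sorted(range(len(corrected)),
                 key=lambda i: abs(corrected[i]), reverse=True)[:k]
    top_set = set(top)
    sparse_update = {i: corrected[i] for i in top}
    # Everything not sent stays in the residual for the next round.
    new_residual = [0.0 if i in top_set else corrected[i]
                    for i in range(len(corrected))]
    return sparse_update, new_residual

# Example: with k=2, only 2 of 5 entries are communicated.
grad = [0.9, -0.05, 0.02, -1.3, 0.4]
update, residual = sparsify_top_k(grad, [0.0] * 5, k=2)
```

In this toy run, the entries at indices 3 and 0 (magnitudes 1.3 and 0.9) are sent, while the three small entries remain in the local residual; a real implementation would also need an efficient sparse all-reduce, which is one of the systems challenges the talk addresses.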

Dan Alistarh is currently an Assistant Professor at IST Austria, in Vienna. Previously, he was affiliated with ETH Zurich, Microsoft Research Cambridge, and MIT. He received his PhD from EPFL, under the guidance of Prof. Rachid Guerraoui. His research focuses on distributed algorithms and concurrent data structures, and spans from algorithms and lower bounds to practical implementations. Dan's lab was awarded an ERC Starting Grant in 2018 for research on distributed machine learning.