
Visit of Marco Canini (KAUST)


Marco Canini (KAUST), one of the "rising stars" in computer networking, gave a presentation titled "Crack Open (Neural) Nets - Can We Make ML-Based Networked Systems More Trustworthy?" at the Computer Science Department. The topic is of interest not only to networking researchers, but also to anyone interested in machine learning. Details about the presentation can be found below.



Crack Open (Neural) Nets - Can We Make ML-Based Networked Systems More Trustworthy?

Machine learning (ML) solutions to challenging networking problems are a promising avenue, but the lack of interpretability and the behavioral uncertainty affect trust and hinder adoption. A key advantage of ML algorithms and architectures, such as deep neural networks and reinforcement learning, is that they can discover solutions that are attuned to specific problem instances. As an example, consider a video bit rate adaptation logic that is tuned specifically for Netflix clients in the United States. Yet, there is a general fear that ML systems are black boxes. This creates uncertainty about why learning systems work, whether they will continue to work in conditions that are different from those seen during training, or whether they will fall off performance cliffs. The lack of interpretability of ML models is widely recognized as a major hindrance to adoption. This raises a crucial question: How do we ensure that learned models behave reliably and as intended? ML solutions that cannot be trusted to do so are brittle and may not be deployed despite their performance benefits. We propose an approach to enhance the trustworthiness of ML solutions for networked systems. Our approach builds on innovations in interpretable ML tools. Given a black-box ML model, interpretable ML methods offer explanations for any given input instance. By integrating the explanations from these tools with the operator's domain knowledge, our approach can verify that the ML model behaves as per operator expectations, detect misbehaviors, and identify corrective actions. To demonstrate our approach, we performed an in-depth case study on Pensieve (a recent neural video rate adaptation system) and identified four classes of undesired behaviors.

Bio: Marco Canini is an assistant professor in Computer Science at KAUST. Marco obtained his Ph.D. in computer science and engineering from the University of Genoa in 2009 after spending the last year as a visiting student at the University of Cambridge, Computer Laboratory. He was a postdoctoral researcher at EPFL from 2009 to 2012 and after that a senior research scientist for one year at Deutsche Telekom Innovation Labs & TU Berlin. Before joining KAUST, he was an assistant professor at the Université catholique de Louvain. He also held positions at Intel, Microsoft and Google.
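
To give a rough flavor of the kind of domain-knowledge check the abstract describes, here is a minimal, hypothetical Python sketch. It is not taken from the talk or from Pensieve; the model, function names, and values are illustrative assumptions. The idea: probe a stand-in black-box rate-adaptation model and flag inputs where the chosen bitrate drops even though available throughput increases, the sort of operator expectation the proposed approach is meant to verify.

    import numpy as np

    # Hypothetical stand-in for a black-box rate-adaptation policy
    # (e.g., a trained neural model); here just a toy function.
    def predict_bitrate(throughput_mbps, buffer_s):
        """Return a bitrate (Mbps) for the given network/playback state."""
        scale = 1.0 if buffer_s > 4.0 else 0.5
        return min(8.0, 0.8 * throughput_mbps) * scale

    def check_monotonicity(model, buffer_s, throughputs):
        """Operator expectation: with a fixed buffer level, the selected
        bitrate should not decrease when available throughput increases."""
        preds = [model(t, buffer_s) for t in throughputs]
        violations = []
        for (t1, p1), (t2, p2) in zip(zip(throughputs, preds),
                                      zip(throughputs[1:], preds[1:])):
            if t2 > t1 and p2 < p1:
                violations.append((t1, t2, p1, p2))
        return violations

    if __name__ == "__main__":
        grid = list(np.linspace(1.0, 12.0, 50))
        bad = check_monotonicity(predict_bitrate, buffer_s=6.0, throughputs=grid)
        print(f"{len(bad)} monotonicity violations found")

In the full approach described in the abstract, such checks would be driven by per-instance explanations from interpretable ML tools rather than by brute-force probing alone; the sketch above only illustrates the "verify against operator expectations" step.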