Building Bridges, Fostering Collaborations: The Second SAP Leonardo Machine Learning Research Retreat

September 7th, 2018 — Munich, Germany — Technical University of Munich

SAP AI Research
Sep 28, 2018 · 9 min read

Machine learning researchers mostly work within the context of their research labs, developing algorithms and training models to solve machine learning challenges and propose novel approaches that go beyond the state of the art. Engineers and developers working on the industrial side of machine learning, in turn, bring hands-on experience: concrete business use cases, customers’ demands and needs, and the challenges entailed in actually applying different machine learning models in a business context.

“At SAP Leonardo Machine Learning Research we work on building bridges and facilitating discussions between these two groups,” says Zbigniew Jerzak, Head of the Deep Learning Center of Excellence and Machine Learning Research at SAP. To this end, we brought together around 40 machine learning experts from academia and industry earlier this month at our second SAP Leonardo Machine Learning Research Retreat. Carrying on the tradition from last year, the one-day event, held on September 7th in Munich, Germany, enabled dialogue and an exchange of ideas between machine learning researchers from various disciplines and industrial practitioners.

The line-up of speakers included some of our research partners from top-tier universities and research institutes in the U.S. and Europe, joined by participants from different SAP machine learning teams. This year’s event was interdisciplinary and featured talks on topics ranging from deep learning approaches for neuroimaging data and multi-domain approaches spanning computer vision and computational linguistics, to fairness and privacy constraints in multi-task learning.

Fairness and Privacy Constraints in Machine Learning: Between Applicability and Accuracy

We opened the event with an important topic addressing the ethical and societal challenges of artificial intelligence. How can we ensure that datasets used to train machine learning models are anonymized and stored in compliance with data protection and privacy regulations? And how can we deal with sensitive information contained in these datasets, such as gender, ethnicity, or nationality, so as to minimize the risk of building biased models during training?

Our research partner from University College London and the Italian Institute of Technology, Massimiliano Pontil, addressed these widely debated questions in his talk “Enhancing Multi-task Learning with Fairness and Privacy Constraints.” He stressed the need for a clearly defined concept of fairness and for methods to impose it during the different phases of model construction: the pre-processing phase of modifying the data, the in-processing phase of modifying the algorithms, and the post-processing phase of modifying the trained model.

Massimiliano Pontil

In work to appear at NIPS 2018, Pontil and his group propose an empirical risk minimization framework that builds upon kernel methods and imposes fairness constraints in multi-task learning. Simply removing the sensitive features to satisfy fairness constraints usually compromises the model’s accuracy. The proposed approach instead trains group-specific models via multi-task learning with imposed fairness constraints, without explicitly using the sensitive features during training: the sensitive features are predicted by a separate learning model, and these predictions are used in their place. The results show a boost in fairness with only a small decrease in accuracy. Pontil also discussed coupling fairness constraints with differentially private algorithms using “thresholdout,” which allows the holdout data to be reused, facilitating generalization while maintaining data privacy.
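
For intuition, here is a minimal, hypothetical sketch of the underlying idea of fairness-constrained empirical risk minimization: a logistic regression whose training objective adds a penalty pushing the average model output of two sensitive groups to be equal. The function name, the soft-penalty relaxation, and the hyperparameters are illustrative assumptions, not Pontil’s reference implementation.

```python
# A toy sketch, assuming a binary sensitive attribute and binary labels.
import torch

def fit_fair_logreg(X, y, group, lam=1.0, epochs=500, lr=0.1):
    """X: (n, d) float features, y: (n,) {0,1} labels, group: (n,) {0,1} sensitive attribute."""
    n, d = X.shape
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        scores = X @ w + b
        # Standard empirical risk ...
        loss = bce(scores, y.float())
        # ... plus a soft fairness term: equal mean score for both groups.
        gap = scores[group == 0].mean() - scores[group == 1].mean()
        (loss + lam * gap ** 2).backward()
        opt.step()
    return w.detach(), b.detach()
```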

The SAP Leonardo Machine Learning Research team is also actively investigating new machine learning approaches that take preserving data privacy as their driving principle. Check out our previous blog post on differentially private federated learning.

Machine learning approaches motivated by maintaining privacy and fairness constraints are still under development. Pontil’s talk led to a lively debate among participants that reflected the community’s questions and concerns regarding not only the importance of imposing such constraints, but also the challenges entailed in applying these approaches without compromising models’ accuracy and efficiency.

Deep Learning and Progress in the Medical Domain

Moving on from the challenges of maintaining fairness and privacy in machine learning models, discussions turned to the advances that deep learning approaches have brought to medical imaging and medical data analysis. Our research partner Christian Wachinger from the Lab for Artificial Intelligence in Medical Imaging (AI-Med) at Ludwig Maximilian University of Munich gave a talk on “Deep Learning for Modelling Neuroimaging Data.” Wachinger highlighted how deep learning approaches leverage the wealth of neuroimaging data in various applications, facilitating early detection of diseases and identifying disease mechanisms, which has a direct impact on advancing neuroscience.

Wachinger shared recent progress from his lab, particularly on using fully convolutional networks for image segmentation enhanced with Squeeze-and-Excitation (SE) blocks that recalibrate feature maps along the spatial and channel dimensions. The proposed model architecture outperforms state-of-the-art segmentation models with only a marginal increase in model complexity. He also presented image segmentation and shape analysis approaches, mainly used for the early prediction of Alzheimer’s disease and diabetes.
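
As an illustration of the recalibration mechanism, here is a minimal PyTorch sketch of a concurrent spatial-and-channel squeeze-and-excitation block in the spirit of the segmentation work presented; the reduction ratio and layer shapes are illustrative choices, not the exact published configuration.

```python
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel squeeze-and-excitation: global pooling -> small bottleneck -> channel gates.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation: 1x1 conv -> per-pixel gates.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        # Recalibrate the feature map along channels and along spatial locations,
        # then combine the two recalibrated maps.
        return x * self.cse(x) + x * self.sse(x)
```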

Check out our recent blog post, written by Benjamin Gutierrez Becker from Wachinger’s group, on using age prediction models to detect brain abnormalities caused by various neurodegenerative diseases such as Alzheimer’s disease.

Christian Wachinger (on the right) and his group

In a similar vein, the talk by Kayhan Batmanghelich, our research partner from the University of Pittsburgh and Carnegie Mellon University, focused on imaging genetics, a field that works on finding connections between genetic variants and their manifestation in imaging biomarkers. To that end, Batmanghelich uses probabilistic and Bayesian models that correlate disease-related genetic variants with image features associated with the disease. Examining both factors simultaneously, rather than separately, allows for studying the correlation between genes and imaging phenotypes.

This unified approach is a significant breakthrough in genome-wide association studies (GWAS), which examine genetic variations across the genome and their relation to observable traits, e.g., changes visible in images. Not only does this help uncover the relationship between disease manifestation and changes on the genome, but it also has the potential to unlock new revolutionary possibilities in treatment practices.

More importantly, Batmanghelich’s multi-modal approach can be further expanded and implemented in different cross-modal applications beyond the medical domain. Cross-modal applicability is particularly useful in the business context as it enhances models’ scalability and facilitates implementation across various enterprise domains, especially those that deal with both text and images, such as product catalogs.

Deep Learning in Theory and Practice

While many of the event’s talks applied deep learning in their approaches, our research partner from Cornell University, Kilian Weinberger, gave an engaging presentation on deep learning as a field in its own right, highlighting the challenges inherent in building very deep neural networks. Weinberger talked about his research project DenseNets. Unlike state-of-the-art residual neural networks, which pass information along a chain of layers, DenseNets connect each layer directly to all subsequent layers, facilitating feature reuse across layers while reducing model size and computational cost.
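
To make the connectivity pattern concrete, here is a minimal PyTorch sketch of a dense block in which every layer receives the concatenated feature maps of all previous layers; the growth rate and layer count are illustrative, not the published DenseNet configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # each layer adds growth_rate new feature maps

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees all earlier feature maps, concatenated along channels.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```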

Kilian Weinberger

Mohammad Rastegari from the Allen Institute for Artificial Intelligence (AI2) gave a vivid presentation on “Efficient Methods for Deep Neural Networks.” Rastegari talked about approaches for optimizing standard state-of-the-art CNNs for object detection on smart portable devices such as cell phones. Bound by limited memory and computational power, small portable devices cannot accommodate visual recognition systems powered by GPU-based CNNs. To address this, Rastegari discussed two binary approximations of CNNs, Binary-Weight-Networks and XNOR-Net, which are simple and efficient and achieve comparable accuracy on image classification tasks while using a fraction of the memory and computational power required by standard CNNs. These approaches allow DNNs to run on resource-constrained portable devices and enable real-time inference!
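
As a rough illustration of the Binary-Weight-Networks idea, the sketch below approximates a real-valued weight tensor W by alpha * sign(W), with alpha the mean absolute value of W, so that convolutions reduce to additions and subtractions plus a single scaling. This shows only the approximation step, not the full XNOR-Net training procedure.

```python
import torch

def binarize_weights(w: torch.Tensor):
    """Return (binary_weights, alpha) approximating w as alpha * sign(w)."""
    alpha = w.abs().mean()   # per-tensor scaling factor (a simplification)
    b = torch.sign(w)
    b[b == 0] = 1            # treat exact zeros as +1
    return b, alpha

# Usage: the approximation error is usually small relative to the memory saved.
w = torch.randn(64, 3, 3, 3)      # e.g. a convolutional filter bank
b, alpha = binarize_weights(w)
print((w - alpha * b).abs().mean())
```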

Volker Tresp from Ludwig Maximilian University of Munich and Siemens also talked in depth about deep learning approaches and gave an overview of research projects at Siemens and LMU. He tackled topics such as industrial recommender systems, endowing knowledge graphs with episodic and semantic memory that can be recalled and expanded, as well as several projects for the healthcare industry.

Newest Trends and Approaches in Vision and Language

Vicente Ordóñez Román, our research partner from the University of Virginia, talked about various approaches in the field of computer vision, ranging from information retrieval for image captioning to recent approaches that reduce gender stereotypes and biases in object classification and visual semantic role labelling tasks using corpus-level constraints. He talked in depth about a new approach, “Feedback-prop,” which steps away from rigid neural networks where input and output variables are fixed. The approach works with state-of-the-art convolutional neural networks, particularly in multi-task or multi-label settings, and complements the forward pass with a feedback-based backward propagation that leverages additional information available at inference time. In the setting of partial evidence, the approach exploits extra information related to the image’s context, e.g., news text, captions, or social media comments, to improve the predictions for the labels that remain unknown.
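
A minimal, hypothetical sketch of the feedback-prop idea is given below: at inference time the network weights stay frozen, an intermediate activation is optimized by gradient descent so that the outputs agree with the labels that are already known (the partial evidence), and the refined predictions are then read off for the unknown labels. The split into `backbone` and `head` and all names are illustrative assumptions, not the published implementation.

```python
import torch

def feedback_prop(backbone, head, image, known_idx, known_targets, steps=20, lr=0.1):
    """backbone: image -> intermediate features; head: features -> multi-label logits.
    known_targets: float tensor of 0/1 targets for the columns in known_idx."""
    with torch.no_grad():
        feats = backbone(image)
    feats = feats.clone().requires_grad_(True)   # optimize activations, not weights
    opt = torch.optim.SGD([feats], lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        logits = head(feats)
        # Only the known labels (partial evidence) contribute to the loss.
        loss = bce(logits[:, known_idx], known_targets)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sigmoid(head(feats))        # refined predictions for all labels
```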

Graph Convolutional Networks in NLP

Turning to new approaches in the field of natural language processing, Ivan Titov, our research partner from the University of Edinburgh and the University of Amsterdam, shared recent progress on using Graph Convolutional Networks for link prediction (using relational GCNs) and for extracting semantic relations (using syntactic GCNs). Relational GCNs predict missing information in knowledge graphs, addressing link prediction and entity classification tasks. Syntactic GCNs exploit the interaction between syntax and semantics to identify predicates and arguments and to assign semantic roles. These approaches improve the performance and accuracy of downstream tasks such as question answering and information retrieval.
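
For readers unfamiliar with GCNs, here is a minimal sketch of a single graph convolutional layer of the kind these relational and syntactic variants build on: each node’s new representation is a learned transformation of a normalized average over its neighbours’ features. The normalization and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n, in_dim) node features, adj: (n, n) adjacency matrix.
        a = adj + torch.eye(adj.size(0))              # add self-loops
        deg_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a * deg_inv_sqrt.unsqueeze(0)
        # Aggregate neighbour features, then apply a learned transformation.
        return torch.relu(self.linear(a_norm @ x))
```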

Thomas Kipf from the University of Amsterdam previously wrote an interesting blog post on an approach that also uses Graph Convolutional Networks to build models capable of inferring relations and structure among objects.

Ivan Titov

Self-supervised Learning: No Need for Expensive Labelled Data?

Hamed Pirsiavash, our research partner from the University of Maryland, Baltimore County, closed the event by exploring various approaches to self-supervised learning for visual recognition. Annotated data is the basis of state-of-the-art supervised learning models; annotating it, however, is costly and time-consuming. Pirsiavash shared some of his recent research on representation learning using counting as an auxiliary task: counting visual primitives serves as a supervision signal for learning visual features without any annotated data. He also introduced a new framework that uses knowledge transfer to fine-tune self-supervised learning models. Given the booming popularity of the topic, the presentation sparked lots of discussion, as participants were eager to learn more about self-supervised approaches that would reduce the reliance on annotated data.
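
A minimal, hypothetical sketch of the counting supervision signal is shown below: the feature vector (interpreted as counts of visual primitives) of a downsampled image should equal the sum of the feature vectors of its four tiles, while a contrastive term computed on a different image prevents the trivial all-zero solution. The function and tensor names are illustrative, not the reference code of the underlying paper.

```python
import torch
import torch.nn.functional as F

def counting_loss(net, image, other_image, margin=10.0):
    """net maps a (B, 3, H, W) batch to non-negative 'count' vectors of shape (B, K)."""
    b, _, h, w = image.shape
    down = F.interpolate(image, scale_factor=0.5, mode="bilinear", align_corners=False)
    # Split the full-resolution image into four non-overlapping tiles.
    tiles = [image[:, :, :h // 2, :w // 2], image[:, :, :h // 2, w // 2:],
             image[:, :, h // 2:, :w // 2], image[:, :, h // 2:, w // 2:]]
    tile_counts = sum(net(t) for t in tiles)
    down_counts = net(down)
    other_counts = net(F.interpolate(other_image, scale_factor=0.5,
                                     mode="bilinear", align_corners=False))
    # Counts must be consistent across scales for the same image ...
    consistency = (down_counts - tile_counts).pow(2).sum(dim=1).mean()
    # ... but different for a different image (hinge / contrastive term).
    contrast = F.relu(margin - (other_counts - tile_counts).pow(2).sum(dim=1)).mean()
    return consistency + contrast
```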

Young Researchers: Knowledge Exposure and Collaboration Opportunities

The event was a great opportunity for SAP Master’s and Ph.D. students to benefit from direct interaction with SAP research partners and from exposure to recent developments and trends in a variety of research areas.

SAP Machine Learning Research Team

During the networking breaks, our students also shared some of their recent work in several poster presentations on topics such as lifelong learning, few-shot learning, federated learning, human-machine collaborative learning, and new evaluation metrics for VQA tasks. Whether through networking, exchanging ideas, or unlocking new perspectives, our poster sessions provided fertile ground for knowledge exchange and paved the way for future collaborations between participants.

Takeaways

This year’s machine learning research retreat proved once again the importance of facilitating discussions and collaborations between machine learning experts across domains, thanks to the presence of all presenters and participants. We look forward to expanding the scope of the event in upcoming years and establishing it as a community forum that enables knowledge exchange and fosters collaborations between machine learning researchers and industrial practitioners.

Please visit the Event Website to find out more about the event and check out some of the presentation slides and the photo gallery.
