Deep Graph Contrastive Representation Learning

Synced · Published in SyncedReview · 3 min read · Jun 27, 2020

Content provided by Yanqiao Zhu, the first author of the paper Deep Graph Contrastive Representation Learning.

This paper presents a novel contrastive framework for unsupervised graph representation learning. The proposed GRACE framework maximizes the agreement between node representations in two graph views, which are generated by corrupting the graph at both the structure and attribute levels. A theoretical analysis based on the InfoMax principle and the classical triplet loss further justifies the motivation behind the framework. Extensive experiments demonstrate its superiority over existing state-of-the-art methods; GRACE even surpasses supervised counterparts on transductive tasks.

What’s New:

(1) Contrastive learning techniques have rarely been explored in graph representation learning.

(2) Existing work mostly relies on global-local mutual information maximization (InfoMax), which requires an injective readout function to generate global graph embeddings. However, the injectivity requirement is too restrictive to fulfill in practice. The GRACE framework is much simpler: it focuses on maximizing agreement at the node level.

(3) Interpreting the InfoMax-based framework as optimizing the classical triplet loss further highlights the importance of the negative samples involved in the objective, which previous methods often neglect (see the objective sketched below). GRACE therefore corrupts the graph at two levels to generate more diverse node contexts across the two graph views.
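To make the role of negatives concrete, here is a sketch of the per-node objective in the NT-Xent style the paper builds on (the notation is ours: θ denotes a similarity such as cosine on projected embeddings, τ a temperature). For node i with embeddings u_i and v_i in the two views:

```latex
\ell(\mathbf{u}_i, \mathbf{v}_i) = \log \frac{e^{\theta(\mathbf{u}_i, \mathbf{v}_i)/\tau}}
{\underbrace{e^{\theta(\mathbf{u}_i, \mathbf{v}_i)/\tau}}_{\text{positive pair}}
 + \underbrace{\sum_{k \neq i} e^{\theta(\mathbf{u}_i, \mathbf{v}_k)/\tau}}_{\text{inter-view negatives}}
 + \underbrace{\sum_{k \neq i} e^{\theta(\mathbf{u}_i, \mathbf{u}_k)/\tau}}_{\text{intra-view negatives}}}
```

Both inter-view and intra-view negatives appear in the denominator, which is where the diversity of corrupted node contexts pays off.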

How It Works: GRACE first generates two graph views from the original graph by corrupting it at both the topology and node-attribute levels, using two proposed schemes: removing edges and masking node features. It then applies a contrastive loss to maximize the agreement between the embeddings of each node in the two views (a minimal code sketch follows).
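A minimal PyTorch sketch of this pipeline, assuming a standard `[2, num_edges]` edge-index layout. This is illustrative, not the authors' released code; the helper names, the encoder, and the hyperparameters `p` and `tau` are our assumptions:

```python
import torch
import torch.nn.functional as F

def drop_edges(edge_index, p=0.2):
    # "Removing edges" view scheme: keep each edge with probability 1 - p.
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

def mask_features(x, p=0.3):
    # "Masking node features" view scheme: zero a random subset of
    # feature dimensions, shared across all nodes.
    keep = (torch.rand(x.size(1)) >= p).float()
    return x * keep

def contrastive_loss(u, v, tau=0.5):
    # NT-Xent-style agreement loss: the two embeddings of the same node
    # form the positive pair; every other node, in either view, is a negative.
    u, v = F.normalize(u, dim=1), F.normalize(v, dim=1)
    n = u.size(0)
    inter = u @ v.t() / tau   # cross-view similarities
    intra = u @ u.t() / tau   # same-view similarities
    eye = torch.eye(n, dtype=torch.bool)
    pos = inter.diagonal()
    # Denominator: positive + inter-view negatives + intra-view negatives.
    denom = torch.cat([inter, intra.masked_fill(eye, float('-inf'))], dim=1)
    return (torch.logsumexp(denom, dim=1) - pos).mean()

def grace_loss(z1, z2, tau=0.5):
    # Symmetrize the loss over the two views.
    return 0.5 * (contrastive_loss(z1, z2, tau) + contrastive_loss(z2, z1, tau))

# Illustrative training step, with `encoder` being any GNN (e.g. a 2-layer GCN):
#   z1 = encoder(mask_features(x), drop_edges(edge_index))
#   z2 = encoder(mask_features(x), drop_edges(edge_index))
#   loss = grace_loss(z1, z2)
```

Because both augmentations are sampled independently for each view, the same node sees two different structural and attribute contexts, which is exactly what the agreement objective contrasts.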

Key Insights:
(1) Contrastive learning methods are widely employed in visual representation learning, but their application in the graph domain remains rarely explored.

(2) In this paper, the authors develop GRACE, a novel graph contrastive representation learning framework based on maximizing agreement at the node level.

(3) Experiments show its superiority over existing state-of-the-art methods. The results further show that GRACE achieves performance comparable to supervised counterparts, demonstrating the power of contrastive methods in graph representation learning.

The paper Deep Graph Contrastive Representation Learning is on arXiv.

Meet the authors: Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu and Liang Wang from the Chinese Academy of Sciences, the University of Chinese Academy of Sciences, Beijing University of Posts and Telecommunications, RealAI and Tsinghua University.

Share Your Research With Synced

Share My Research is Synced’s new column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Share your research with us by clicking here.

We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.

Need a comprehensive review of the past, present and future of modern AI research development? Trends of AI Technology Development Report is out!

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for the Insight Partner Program to get a complimentary full PDF report.
