Poster

Going Deeper into Locally Differentially Private Graph Neural Networks

Longzhu He · Chaozhuo Li · Peng Tang · Sen Su

Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 4C Privacy and Uncertainty Quantification
Wed 16 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

Graph Neural Networks (GNNs) have demonstrated superior performance in a variety of graph mining and learning tasks. However, when node representations involve sensitive personal information or variables related to individuals, learning from graph data can raise significant privacy concerns. Although recent studies have explored local differential privacy (LDP) to address these concerns, they often introduce significant distortions into the graph data, severely degrading learning utility (e.g., classification accuracy). In this paper, we present UPGNet, an LDP-based privacy-preserving graph learning framework that enhances utility while safeguarding user privacy. Specifically, we propose a three-stage pipeline that generalizes LDP protocols for node features, targeting privacy-sensitive scenarios. Our analysis identifies two key factors that affect the utility of privacy-preserving graph learning: feature dimension and neighborhood size. Building on this analysis, UPGNet enhances utility through two core layers: the High-Order Aggregator (HOA) layer and the Node Feature Regularization (NFR) layer. Extensive experiments on real-world datasets indicate that UPGNet significantly outperforms existing methods in terms of both privacy protection and learning utility.
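The abstract names the two utility levers (feature dimension and neighborhood size) without detailing UPGNet's actual mechanisms. As a rough illustration of the general idea only, not the paper's method, the sketch below applies a standard one-bit randomized-response style ε-LDP perturbation independently to each feature dimension, with an unbiased debiasing step, and then averages perturbed features over a k-hop neighborhood; a larger neighborhood averages out more of the injected noise, which is the intuition behind aggregating over higher-order neighbors. All function names here are hypothetical.

```python
import numpy as np

def one_bit_ldp(x, eps, rng):
    """Perturb values x in [0, 1] to one bit each under eps-LDP,
    then return an unbiased estimate of x from the reported bit.

    P(report 1) = (x*(e^eps - 1) + 1) / (e^eps + 1), so the ratio of
    report probabilities for any two inputs is at most e^eps.
    """
    e = np.exp(eps)
    p = (x * (e - 1) + 1) / (e + 1)      # probability of reporting bit 1
    bits = rng.random(np.shape(x)) < p
    # Debias: E[bit] = p, so this estimator has expectation exactly x.
    return (bits * (e + 1) - 1) / (e - 1)

def khop_mean(adj, feats, k):
    """Average node features over the k-hop neighborhood (self included).

    Averaging n_v noisy, unbiased estimates shrinks the noise variance
    by roughly 1/n_v, so denser (higher-order) neighborhoods help.
    """
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)        # nodes reachable in <= 0 hops
    frontier = np.eye(n, dtype=bool)
    for _ in range(k):
        frontier = (frontier @ adj) > 0  # advance one hop
        reach |= frontier
    counts = reach.sum(axis=1, keepdims=True)
    return (reach @ feats) / counts
```

For example, with eps = 1 the per-dimension estimates are individually very noisy (each report is one of only two values), but averaging them over many nodes or a large neighborhood recovers the true feature means; this trade-off between per-value noise and aggregate accuracy is what makes neighborhood size matter for utility.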
