DPNE: Differentially private network embedding

We show that by adding only a small amount of noise to the objective function, the learned low-dimensional representations satisfy differential privacy.
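As a rough illustration of this idea, the sketch below perturbs the coefficients of a toy quadratic objective with Laplace noise before minimizing it. All concrete values (the matrix `A`, vector `b`, `epsilon`, and the assumed sensitivity) are made up for illustration and are not DPNE's actual objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(w) = (1/2) w^T A w - b^T w; we privatize it by
# adding Laplace noise to the linear coefficients b before solving.
# A, b, epsilon, and sensitivity are illustrative assumptions.
d = 3
A = np.eye(d)                    # toy positive-definite quadratic term
b = np.array([1.0, -2.0, 0.5])   # toy linear coefficients

epsilon = 1.0        # assumed privacy budget
sensitivity = 1.0    # assumed L1 sensitivity of the coefficients

# Laplace noise with scale = sensitivity / epsilon on each coefficient
b_noisy = b + rng.laplace(scale=sensitivity / epsilon, size=d)

# Minimizer of the perturbed objective: w* = A^{-1} b_noisy
w_star = np.linalg.solve(A, b_noisy)
print(w_star.shape)  # (3,)
```

The key point is that the noise is injected once, into the objective's coefficients, so the downstream minimizer inherits the privacy guarantee without further noise.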

The definition of differential privacy used in this paper follows (Dwork2006calibrating), with noise drawn from the Laplace distribution:

A graph analysis mechanism $\mathcal{M}$ satisfies $\epsilon$-differential privacy if, for all neighboring graphs $G$ and $G'$ and all subsets $Z$ of $\mathcal{M}$'s range:

$$\Pr(\mathcal{M}(G) \in Z) \leq \exp(\epsilon) \cdot \Pr(\mathcal{M}(G') \in Z),$$

where the neighboring graph $G'$ differs from $G$ by a single edge.
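The inequality above can be sanity-checked numerically for the scalar Laplace mechanism, $\mathcal{M}(G) = f(G) + \mathrm{Lap}(\text{sensitivity}/\epsilon)$: the pointwise density ratio between outputs on neighboring inputs is bounded by $\exp(\epsilon)$. The query values and parameters below are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

# For the Laplace mechanism with scale b = sensitivity / epsilon,
# p(z | f1) / p(z | f2) = exp((|z - f2| - |z - f1|) / b)
#                      <= exp(epsilon) whenever |f1 - f2| <= sensitivity.
epsilon = 0.5
sensitivity = 1.0
scale = sensitivity / epsilon

def laplace_pdf(z, mu, b):
    return np.exp(-np.abs(z - mu) / b) / (2.0 * b)

f_G = 4.0         # query value on graph G (made up)
f_G_prime = 5.0   # query value on a neighboring graph G' (made up)
assert abs(f_G - f_G_prime) <= sensitivity

zs = np.linspace(-10.0, 20.0, 1001)
ratio = laplace_pdf(zs, f_G, scale) / laplace_pdf(zs, f_G_prime, scale)

# A pointwise density ratio bounded by exp(epsilon) implies
# Pr(M(G) in Z) <= exp(epsilon) * Pr(M(G') in Z) for every set Z.
print(ratio.max() <= np.exp(epsilon) + 1e-12)  # True
```

Integrating the density bound over any output set $Z$ recovers exactly the probability inequality in the definition.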