# DPNE: Differentially private network embedding

We show that by adding only a small amount of noise to the objective function, the learned low-dimensional representations satisfy differential privacy.

The definition of differential privacy used in this paper follows Dwork2006calibrating (realized via the Laplace mechanism):

A graph analysis mechanism $\mathcal{M}$ satisfies ε-differential privacy, if for all neighboring graphs G and G′ and all subsets Z of $\mathcal{M}$’s range:

$\text{Pr}(\mathcal{M}(G) \in Z) \leq \exp(\epsilon) \cdot \text{Pr}(\mathcal{M}(G') \in Z),$

where the neighboring graph G′ differs from G by a single edge.
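To make the definition concrete, the following is a minimal sketch of the Laplace mechanism for a graph statistic, not the paper's DPNE objective perturbation itself. The function name `laplace_mechanism` and the edge-count example are illustrative assumptions; the key idea is that noise is scaled to the query's sensitivity, i.e., how much the output can change between neighboring graphs:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release value + Laplace noise with scale sensitivity/epsilon.

    This satisfies epsilon-differential privacy for any query whose
    output changes by at most `sensitivity` between neighboring graphs.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Illustrative example: privately release a graph's edge count.
# Adding or removing one edge changes the count by 1, so sensitivity = 1.
edges = {(0, 1), (1, 2), (2, 3)}
private_count = laplace_mechanism(len(edges), sensitivity=1.0, epsilon=0.5)
```

A smaller ε forces a larger noise scale, giving stronger privacy at the cost of accuracy; DPNE applies the same calibration idea to the embedding objective rather than to a scalar statistic.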