At first glance, PCA and Laplacian Eigenmaps seem very similar. Both algorithms can be viewed as constructing a graph from our data, choosing a matrix to represent that graph, computing the eigenvectors of this matrix, and then using these eigenvectors to determine low-dimensional embeddings of our data.
However, the algorithms can produce very different results, primarily because PCA is a linear dimensionality reduction algorithm that makes few assumptions about the data, whereas Laplacian Eigenmaps is a nonlinear dimensionality reduction algorithm that assumes the data lies on a low-dimensional manifold.
We can explicitly characterize this divergence in terms of three key differences between the algorithms:
- The graph we construct from our data
- The choice of matrix representation of this graph
- The choice of eigenvectors of this matrix
In the following exposition, we will assume we have $n$ data points and $d$ features, represented in a data matrix $X$ with shape $n \times d$. For simplicity we will assume that our data is already normalized and zero-centered.
In PCA, we form a fully connected graph such that the edge between each pair of vertices $v_i$ and $v_j$ has weight equal to the dot product $x_i^\top x_j$ of the corresponding data vectors. We represent this graph with its adjacency matrix, $W = XX^\top$. For some target dimension $m$, our objective is then to construct the embedding matrix $Y \in \mathbb{R}^{n \times m}$ such that the following quantity is minimized:

$$\left\lVert W - YY^\top \right\rVert_F^2 = \sum_{i,j} \left( W_{ij} - y_i^\top y_j \right)^2$$
In order to do this, we compute the eigenvectors corresponding to the $m$ largest eigenvalues of the matrix $W = XX^\top$. In practice we implement this with the SVD of $X$ rather than explicitly forming the matrix $XX^\top$. We then define $Y_{ij}$ to be $\sqrt{\lambda_j}\,(u_j)_i$, where $\lambda_j$ is the jth largest eigenvalue and $u_j$ its eigenvector. That is, the jth element of the ith data point's embedding is the product of the square root of the jth largest eigenvalue and the ith element of the corresponding eigenvector.
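As a minimal sketch of this step, assuming zero-centered toy data and NumPy's SVD (the variable names here are illustrative, not from the text): the singular values of $X$ are the square roots of the eigenvalues of $XX^\top$, and the left singular vectors are its eigenvectors, so the embedding falls out directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 100, 5, 2

# Toy data matrix with shape n x d, zero-centered as the text assumes.
X = rng.normal(size=(n, d))
X -= X.mean(axis=0)

# SVD of X: X = U @ diag(S) @ Vt.  The eigenvalues of W = X X^T are
# S**2, and its eigenvectors are the columns of U, so we never need to
# form X X^T explicitly.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Y[i, j] = sqrt(lambda_j) * (u_j)_i, with lambda_j = S[j]**2,
# which simplifies to scaling the top-m columns of U by S.
Y = U[:, :m] * S[:m]
```

Note that $U_{:m} \Sigma_{:m}$ coincides with the usual "project onto the top principal directions" formulation, $X V_{:m}$, so this recovers standard PCA scores.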
Because PCA operates over the fully connected graph, all pairwise relationships between data vectors (i.e. elements of $W$) are considered equally important, and the optimization objective is framed in terms of reconstructing the exact distances between points.
In contrast, in Laplacian Eigenmaps we form a different graph based on a nonlinear transformation of our data, and we represent this graph with its Laplacian matrix.
We begin by choosing a number $k$ and building a graph such that there is a unit-weight edge connecting the vertices $v_i$ and $v_j$ if $x_j$ is one of the $k$ nearest neighbors of $x_i$ or vice-versa. In some implementations of Laplacian Eigenmaps, the weight of this edge is instead defined to be inversely proportional to the distance between the data vectors corresponding to $v_i$ and $v_j$.
Now for some target dimension $m$, our objective is to construct the embedding matrix $Y \in \mathbb{R}^{n \times m}$ such that the following quantity is minimized, subject to a few constraints around orthogonality and embedding normalization. Note that $y_i$ is the ith row of $Y$ and that $W_{ij} = 1$ if there is an edge between $v_i$ and $v_j$:

$$\sum_{i,j} W_{ij} \left\lVert y_i - y_j \right\rVert^2$$
In order to do this, we compute the eigenvectors corresponding to the $m$ smallest nonzero eigenvalues of the generalized eigenproblem $Ly = \lambda Dy$, where $L = D - W$ is the Laplacian and $D$ is the diagonal matrix whose ith diagonal entry is the degree of $v_i$. (The eigenvector for the eigenvalue $0$ is constant and carries no embedding information, so it is discarded.) Note that this is equivalent to computing the eigenvectors of the matrix $D^{-1}L$. We then define $Y_{ij}$ to be $(y_j)_i$, where $y_j$ is the eigenvector corresponding to the jth smallest nonzero eigenvalue. That is, the jth element of the ith data point's embedding is the ith element of the eigenvector corresponding to the jth smallest eigenvalue.
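The whole pipeline can be sketched as follows, assuming the unit-weight symmetric k-nearest-neighbor graph described above and SciPy's generalized symmetric eigensolver (sizes and names here are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, d, k, m = 60, 3, 5, 2
X = rng.normal(size=(n, d))

# Build the k-nearest-neighbor graph: a unit-weight edge connects v_i
# and v_j if x_j is among the k nearest neighbors of x_i or vice-versa.
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)            # a point is not its own neighbor
knn = np.argsort(dist, axis=1)[:, :k]     # indices of each point's k nearest
W = np.zeros((n, n))
W[np.repeat(np.arange(n), k), knn.ravel()] = 1.0
W = np.maximum(W, W.T)                    # "or vice-versa" symmetrizes the graph

# Degree matrix D and Laplacian L = D - W.
D = np.diag(W.sum(axis=1))
L = D - W

# Solve the generalized eigenproblem L y = lambda D y; eigh returns the
# eigenvalues in ascending order.  Discard the trivial constant
# eigenvector (eigenvalue 0) and keep the next m.
vals, vecs = eigh(L, D)
Y = vecs[:, 1 : m + 1]
```

Note that `eigh(L, D)` requires $D$ to be positive definite, which holds here because every vertex has degree at least $k$.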
Unlike PCA, the Laplacian Eigenmaps algorithm does not try to preserve exact pairwise distances, or even relative pairwise distances between far-apart points. The algorithm focuses exclusively on placing the embeddings of nearest neighbors as close together as possible.
- PCA: Fully connected graph whose edge weights are the dot products between data points
- Laplacian Eigenmaps: Graph where an edge only exists between $v_i$ and $v_j$ if $x_j$ is one of the $k$ nearest neighbors of $x_i$ or vice-versa
- PCA: The adjacency matrix $W = XX^\top$
- Laplacian Eigenmaps: The Laplacian matrix $L = D - W$
- PCA: The eigenvectors corresponding to the largest eigenvalues
- Laplacian Eigenmaps: The eigenvectors corresponding to the smallest eigenvalues