
Graph Embedding Methods: node2vec, DeepWalk, and LINE

Introduction

Graph-structured data pervades modern applications: social networks, molecular structures, recommendation systems, transportation networks. Yet most machine learning algorithms require fixed-size numerical vectors as input. Graph embedding bridges this gap by learning low-dimensional representations that preserve graph structure.

In 2026, graph embedding remains fundamental for machine learning on graphs. Methods like DeepWalk, node2vec, and LINE have become foundational techniques, powering recommendation systems, fraud detection, and biological network analysis.

This article explores graph embedding fundamentals, major algorithms, implementation strategies, and practical applications.

Fundamentals of Graph Embedding

What is Graph Embedding?

Graph embedding transforms nodes (or edges) into dense vectors in a low-dimensional space while preserving structural properties:

  • Node embeddings: each node → a d-dimensional vector
  • Edge embeddings: each edge → a d-dimensional vector
  • Graph embedding: the entire graph → a d-dimensional vector

Desired Properties

Good embeddings should preserve:

  1. First-order proximity: directly connected nodes → similar embeddings
  2. Second-order proximity: nodes with shared neighbors → similar embeddings
  3. Community structure: nodes in the same community → similar embeddings
  4. Structural equivalence: nodes with similar structural roles → similar embeddings
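These proximities can be made concrete on a toy graph. The sketch below (a hypothetical 5-node example; `adj` and the helper names are illustrative, not from any library) scores first-order proximity as direct edge presence and second-order proximity as neighborhood overlap:

```python
# Hypothetical 5-node graph as an adjacency dict (illustrative only)
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}

def first_order(u, v):
    """1.0 if u and v share a direct edge, else 0.0."""
    return 1.0 if v in adj[u] else 0.0

def second_order(u, v):
    """Jaccard overlap of the two neighborhoods."""
    return len(adj[u] & adj[v]) / len(adj[u] | adj[v])

print(first_order(0, 3))   # 0.0 - no direct edge
print(second_order(0, 3))  # ~0.67 - neighborhoods {1,2} and {1,2,4} overlap
```

Nodes 0 and 3 are not connected, yet a good embedding should still place them nearby because they share most of their neighbors.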

Applications

  • Node classification
  • Link prediction
  • Graph classification
  • Recommendation systems
  • Anomaly detection
  • Visualization

DeepWalk Algorithm

Core Idea

DeepWalk combines random walks with Word2Vec:

  1. Generate random walk sequences from graph
  2. Treat walks as “sentences” and nodes as “words”
  3. Apply Skip-gram to learn embeddings
import numpy as np
import networkx as nx
from gensim.models import Word2Vec

class DeepWalk:
    def __init__(self, walk_length=80, num_walks=10, embedding_dim=64, window_size=5):
        self.walk_length = walk_length
        self.num_walks = num_walks
        self.embedding_dim = embedding_dim
        self.window_size = window_size
        self.model = None
    
    def random_walk(self, graph, start_node):
        """Generate a random walk starting from start_node."""
        walk = [start_node]
        
        for _ in range(self.walk_length - 1):
            current = walk[-1]
            neighbors = list(graph.neighbors(current))
            
            if not neighbors:
                break
            
            walk.append(np.random.choice(neighbors))
        
        return walk
    
    def generate_walks(self, graph):
        """Generate multiple random walks from all nodes."""
        walks = []
        nodes = list(graph.nodes())
        
        for _ in range(self.num_walks):
            np.random.shuffle(nodes)
            
            for node in nodes:
                walk = self.random_walk(graph, node)
                walks.append([str(n) for n in walk])
        
        return walks
    
    def fit(self, graph):
        """Learn embeddings from graph."""
        print("Generating random walks...")
        walks = self.generate_walks(graph)
        
        print(f"Training Skip-gram model on {len(walks)} walks...")
        
        self.model = Word2Vec(
            sentences=walks,
            vector_size=self.embedding_dim,
            window=self.window_size,
            min_count=0,
            sg=1,
            workers=4,
            epochs=5
        )
        
        return self
    
    def get_embedding(self, node):
        """Get embedding for a specific node."""
        return self.model.wv[str(node)]
    
    def get_all_embeddings(self):
        """Get embeddings for all nodes."""
        return {node: self.model.wv[str(node)] for node in self.model.wv.index_to_key}


def build_sample_graph():
    """Build a sample graph for demonstration."""
    G = nx.barabasi_albert_graph(100, 3)
    return G

Implementation Details

# Usage example
G = build_sample_graph()

deepwalk = DeepWalk(
    walk_length=40,
    num_walks=5,
    embedding_dim=64,
    window_size=5
)

deepwalk.fit(G)

# Get embeddings
node_embedding = deepwalk.get_embedding(0)
print(f"Node 0 embedding shape: {node_embedding.shape}")

node2vec Algorithm

Core Innovation

node2vec extends DeepWalk with biased random walks, introducing two parameters:

  • p (return parameter): Controls likelihood of immediately revisiting node
  • q (in-out parameter): Balances BFS-like (q > 1) vs DFS-like (q < 1) exploration
class Node2Vec:
    def __init__(self, graph, walk_length=80, num_walks=10, 
                 embedding_dim=64, window_size=5, p=1.0, q=1.0):
        self.graph = graph
        self.walk_length = walk_length
        self.num_walks = num_walks
        self.embedding_dim = embedding_dim
        self.window_size = window_size
        self.p = p
        self.q = q
        self.model = None
        self.alias_nodes = {}
        self.alias_edges = {}
    
    def preprocess_transition_probs(self):
        """Precompute transition probabilities for biased walk."""
        graph = self.graph
        
        for node in graph.nodes():
            neighbors = list(graph.neighbors(node))
            
            if not neighbors:
                continue  # isolated node: nothing to walk to
            
            weights = [graph[node][neighbor].get('weight', 1.0) 
                       for neighbor in neighbors]
            total = sum(weights)
            normalized_probs = [w / total for w in weights]
            
            self.alias_nodes[node] = self._create_alias_table(normalized_probs)
    
    def _create_alias_table(self, probs):
        """Create an alias table for O(1) sampling (Vose's method)."""
        n = len(probs)
        alias = np.zeros(n, dtype=int)
        probability = np.zeros(n)
        
        # Normalize, then split entries into under- and over-full buckets
        norm_const = sum(probs)
        normalized_probs = [p / norm_const for p in probs]
        
        small = []
        large = []
        
        for i, p in enumerate(normalized_probs):
            if p < 1.0 / n:
                small.append(i)
            else:
                large.append(i)
        
        while small and large:
            s_idx = small.pop()
            l_idx = large.pop()
            
            alias[s_idx] = l_idx
            probability[s_idx] = n * normalized_probs[s_idx]
            
            remaining_p = normalized_probs[l_idx] + normalized_probs[s_idx] - 1.0 / n
            
            if remaining_p < 1.0 / n:
                small.append(l_idx)
            else:
                large.append(l_idx)
        
        while large:
            l_idx = large.pop()
            probability[l_idx] = 1.0
        
        while small:
            s_idx = small.pop()
            probability[s_idx] = 1.0
        
        return alias, probability
    
    def _alias_sample(self, alias, probability):
        """Sample an index from an alias table in O(1)."""
        n = len(probability)
        idx = int(np.floor(np.random.random() * n))
        
        if np.random.random() < probability[idx]:
            return idx
        else:
            return int(alias[idx])
    
    def node2vec_walk(self, start_node):
        """Generate a biased random walk."""
        walk = [start_node]
        
        while len(walk) < self.walk_length:
            current = walk[-1]
            neighbors = list(self.graph.neighbors(current))
            
            if not neighbors:
                break
            
            if len(walk) == 1:
                # First step - use alias table
                idx = self._alias_sample(*self.alias_nodes[current])
                walk.append(neighbors[idx])
            else:
                # Subsequent steps - use edge alias table
                prev = walk[-2]
                next_probs = self._get_edge_probs(prev, current, neighbors)
                idx = self._alias_sample(*self._create_alias_table(next_probs))
                walk.append(neighbors[idx])
        
        return walk
    
    def _get_edge_probs(self, prev, current, neighbors):
        """Calculate transition probabilities using the return
        parameter p and the in-out parameter q."""
        probs = []
        
        for neighbor in neighbors:
            weight = self.graph[current][neighbor].get('weight', 1.0)
            
            if neighbor == prev:
                # Distance 0: return to the previous node
                probs.append(weight / self.p)
            elif self.graph.has_edge(neighbor, prev):
                # Distance 1: stay close to the previous node
                probs.append(weight)
            else:
                # Distance 2: move outward
                probs.append(weight / self.q)
        
        # Normalize
        total = sum(probs)
        return [p / total for p in probs]
    
    def generate_walks(self):
        """Generate all random walks."""
        self.preprocess_transition_probs()
        
        walks = []
        nodes = list(self.graph.nodes())
        
        for _ in range(self.num_walks):
            np.random.shuffle(nodes)
            
            for node in nodes:
                walk = self.node2vec_walk(node)
                walks.append([str(n) for n in walk])
        
        return walks
    
    def fit(self):
        """Train node2vec embeddings."""
        print("Generating biased random walks...")
        walks = self.generate_walks()
        
        print(f"Training Skip-gram on {len(walks)} walks...")
        
        self.model = Word2Vec(
            sentences=walks,
            vector_size=self.embedding_dim,
            window=self.window_size,
            min_count=0,
            sg=1,
            workers=4,
            epochs=5
        )
        
        return self

Parameter Selection

p      q      Behavior
High   Low    DFS-like exploration, captures homophily
Low    High   BFS-like exploration, captures structural equivalence
~1     ~1     Similar to an unbiased DeepWalk walk
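The table can be tied back to the walk mechanics with a small sketch. The function below (hypothetical names; unit edge weights assumed) computes the unnormalized node2vec bias for a candidate next node, given where the walk just came from:

```python
def node2vec_bias(prev, candidate, prev_neighbors, p=1.0, q=1.0):
    """Unnormalized transition weight for stepping to `candidate`,
    given the walk arrived from `prev` (unit edge weights assumed)."""
    if candidate == prev:
        return 1.0 / p           # distance 0: return to previous node
    if candidate in prev_neighbors:
        return 1.0               # distance 1: stay near previous node
    return 1.0 / q               # distance 2: move outward

# With p=4, q=0.5 the walk avoids backtracking and favors exploration
print(node2vec_bias(0, 0, {1, 2}, p=4, q=0.5))  # 0.25
print(node2vec_bias(0, 3, {1, 2}, p=4, q=0.5))  # 2.0
```

Setting p = q = 1 makes all three cases weigh edges equally, which recovers DeepWalk's uniform walk.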

LINE Algorithm

First and Second Order Proximity

LINE explicitly preserves both:

  1. First-order proximity: Direct edges
  2. Second-order proximity: Shared neighborhood
class LINE:
    def __init__(self, graph, embedding_dim=64, order=2, negative_samples=5):
        self.graph = graph
        self.embedding_dim = embedding_dim
        self.order = order
        self.negative_samples = negative_samples
        self.model = None
    
    def first_order_proximity(self):
        """Learn embeddings preserving first-order proximity."""
        import random
        
        edges = list(self.graph.edges())
        
        # Positive sampling
        pos_pairs = [(u, v) for u, v in edges]
        
        # Negative sampling
        nodes = list(self.graph.nodes())
        neg_pairs = []
        
        for _ in range(len(edges) * self.negative_samples):
            u = random.choice(edges)[0]
            v = random.choice(nodes)
            neg_pairs.append((u, v))
        
        return pos_pairs, neg_pairs
    
    def second_order_proximity(self):
        """Learn embeddings preserving second-order proximity."""
        # For each node, treat its context (neighbors) as "words"
        # Similar to Skip-gram
        pass
    
    def train(self):
        """Train LINE embeddings (simplified first-order version).
        In practice, use a specialized library with negative sampling."""
        nodes = list(self.graph.nodes())
        embeddings = {node: np.random.randn(self.embedding_dim) 
                    for node in nodes}
        
        edges = list(self.graph.edges())
        learning_rate = 0.025
        
        for epoch in range(10):
            np.random.shuffle(edges)
            
            for u, v in edges[:1000]:  # Sample a batch of edges
                # Gradient of log sigma(u . v): pull connected nodes together
                sim = np.dot(embeddings[u], embeddings[v])
                grad = 1 - 1 / (1 + np.exp(-sim))  # = sigmoid(-sim)
                
                u_old = embeddings[u].copy()
                embeddings[u] += learning_rate * grad * embeddings[v]
                embeddings[v] += learning_rate * grad * u_old
        
        return embeddings
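To make the objective explicit, here is a minimal sketch of one SGD step on LINE's first-order loss, including the negative sampling the class above omits (function and variable names are illustrative, not from the original LINE code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def line_step(emb, u, v, negatives, lr=0.025):
    """One step on: maximize log sigma(u.v) + sum_n log sigma(-u.n)."""
    u_old = emb[u].copy()
    # Positive edge: pull u and v together
    g = 1.0 - sigmoid(emb[u] @ emb[v])
    emb[u] += lr * g * emb[v]
    emb[v] += lr * g * u_old
    # Negative samples: push u away from randomly drawn nodes
    for n in negatives:
        g = -sigmoid(u_old @ emb[n])
        emb[u] += lr * g * emb[n]
        emb[n] += lr * g * u_old

emb = {0: np.array([1.0, 0.0]), 1: np.array([0.5, 0.5]), 2: np.array([-1.0, 0.0])}
before = emb[0] @ emb[1]
line_step(emb, 0, 1, negatives=[2])
assert emb[0] @ emb[1] > before  # the connected pair moved closer
```

Without the negative term every embedding would collapse toward a single direction, which is why production implementations always pair the positive updates with sampled negatives.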

GraphSAGE Algorithm

Inductive Learning

GraphSAGE learns embeddings by sampling and aggregating neighbors:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphSAGE(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers=2):
        super(GraphSAGE, self).__init__()
        
        self.num_layers = num_layers
        
        # Aggregation layers; each consumes [self features ; neighbor mean],
        # hence twice the per-layer feature dimension
        self.aggregators = nn.ModuleList([
            nn.Linear(2 * (input_dim if i == 0 else hidden_dim), hidden_dim)
            for i in range(num_layers)
        ])
        
        # Prediction layer
        self.predict = nn.Linear(hidden_dim, output_dim)
    
    def aggregate(self, x, neighbors):
        """Mean aggregation of neighbor features."""
        if len(neighbors) == 0:
            return torch.zeros_like(x)
        
        neighbor_features = torch.stack([x[n] for n in neighbors])
        return torch.mean(neighbor_features, dim=0)
    
    def forward(self, x, edge_index):
        """Forward pass."""
        # edge_index: [2, num_edges]
        src, dst = edge_index
        
        for layer in range(self.num_layers):
            # Get neighbors for each node
            new_features = []
            
            for i in range(len(x)):
                # Find neighbors in src or dst
                neighbors = torch.cat([
                    torch.where(src == i)[0],
                    torch.where(dst == i)[0]
                ]).unique()
                
                if len(neighbors) > 0:
                    # Aggregate neighbor features (mean)
                    neighbor_emb = x[neighbors].mean(dim=0)
                else:
                    neighbor_emb = torch.zeros_like(x[i])
                
                # Concatenate self features with the aggregated neighborhood
                combined = torch.cat([x[i], neighbor_emb])
                
                new_features.append(F.relu(self.aggregators[layer](combined)))
            
            x = torch.stack(new_features)
        
        return self.predict(x)
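The aggregation step at the heart of the model can be isolated in a few lines of plain NumPy (the 2-D features and adjacency dict below are a hypothetical toy example):

```python
import numpy as np

# Toy graph: node 0 connected to nodes 1 and 2
adj = {0: [1, 2], 1: [0], 2: [0]}
x = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([0.0, 1.0])}

def mean_aggregate(u):
    """GraphSAGE-style layer input: [own features ; mean of neighbors]."""
    neigh = np.mean([x[v] for v in adj[u]], axis=0)
    return np.concatenate([x[u], neigh])

print(mean_aggregate(0))  # [1. 0. 0. 1.]
```

Because the learned weights act on this aggregated input rather than on node identities, the same model can embed nodes that were never seen during training. That is what makes GraphSAGE inductive.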

Practical Applications

Node Classification

from sklearn.ensemble import RandomForestClassifier

def node_classification_with_embeddings(G, labels, train_idx, test_idx):
    """Node classification using graph embeddings."""
    
    # Learn embeddings
    deepwalk = DeepWalk(walk_length=40, num_walks=5, embedding_dim=64)
    deepwalk.fit(G)
    
    # Get features and labels
    X = np.array([deepwalk.get_embedding(n) for n in G.nodes()])
    y = np.array([labels[n] for n in G.nodes()])
    
    # Train classifier
    clf = RandomForestClassifier()
    clf.fit(X[train_idx], y[train_idx])
    
    # Evaluate
    accuracy = clf.score(X[test_idx], y[test_idx])
    return accuracy

Link Prediction

def link_prediction(G, embeddings, test_edges):
    """Predict missing edges using embeddings."""
    
    predictions = []
    
    for u, v in test_edges:
        # Use embedding similarity
        sim = np.dot(embeddings[u], embeddings[v])
        predictions.append(sim)
    
    return predictions
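As a quick sanity check of this scoring scheme, a true edge between similar nodes should outscore a pair of dissimilar ones (the 2-D embeddings below are hypothetical):

```python
import numpy as np

emb = {0: np.array([1.0, 0.0]), 1: np.array([0.9, 0.1]),
       2: np.array([0.0, 1.0]), 3: np.array([-0.8, 0.2])}

def edge_score(u, v):
    """Dot-product score: higher means a more plausible edge."""
    return float(emb[u] @ emb[v])

ranked = sorted([(v, edge_score(0, v)) for v in (1, 2, 3)],
                key=lambda s: s[1], reverse=True)
print(ranked[0][0])  # 1 - the most plausible partner for node 0
```

In practice these raw scores are fed to a ranking metric such as AUC, or thresholded after a sigmoid, rather than interpreted directly.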

Recommendation System

def graph_based_recommendation(user_item_graph, user, embeddings, k=10):
    """Recommend items for a user based on graph embeddings."""
    
    # Get user's neighbors in bipartite graph
    neighbors = list(user_item_graph.neighbors(user))
    
    # Score items by embedding similarity
    scores = []
    
    for item in user_item_graph.nodes():
        if item != user and item not in neighbors:
            score = np.dot(embeddings[user], embeddings[item])
            scores.append((item, score))
    
    # Top-k recommendations
    scores.sort(key=lambda x: x[1], reverse=True)
    return scores[:k]
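The same idea works end to end on a toy bipartite graph. The dictionary-based sketch below (user/item names and embeddings are made up for illustration) ranks unseen items for one user:

```python
import numpy as np

interactions = {"u1": {"i1", "i2"}}            # items u1 already consumed
emb = {"u1": np.array([1.0, 0.0]),
       "i1": np.array([0.9, 0.1]), "i2": np.array([0.8, 0.2]),
       "i3": np.array([0.95, 0.05]), "i4": np.array([-1.0, 0.0])}

def recommend(user, k=2):
    """Rank unseen items by dot-product similarity with the user."""
    seen = interactions[user]
    cands = [(i, float(emb[user] @ emb[i]))
             for i in emb if i.startswith("i") and i not in seen]
    return [i for i, _ in sorted(cands, key=lambda s: s[1], reverse=True)][:k]

print(recommend("u1"))  # ['i3', 'i4']
```

Filtering out already-consumed items before ranking, as above, is the step that turns raw similarity into a recommendation list.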

Advanced Methods

struc2vec

Learns structural identity regardless of node labels:

# struc2vec captures structural equivalence
# Nodes with similar structural roles get similar embeddings
# Unlike homophily methods (DeepWalk, node2vec)
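A crude way to see the distinction: the degree sequence around a node is one structural fingerprint (a hypothetical simplification; the real struc2vec builds a multilayer similarity graph over k-hop neighborhoods):

```python
# Two disjoint stars: hubs 0 and 4 never co-occur in any random walk
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0},
       4: {5, 6, 7}, 5: {4}, 6: {4}, 7: {4}}

def degree_signature(u):
    """Sorted degrees of a node's neighbors: a simple structural fingerprint."""
    return sorted(len(adj[v]) for v in adj[u])

# Walk-based methods would embed 0 and 4 far apart;
# a structural method should treat them as equivalent
print(degree_signature(0) == degree_signature(4))  # True
```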

Graph Autoencoders

class GraphAutoEncoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, embedding_dim):
        super().__init__()
        
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embedding_dim)
        )
        
        self.decoder = nn.Sequential(
            nn.Linear(embedding_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
            nn.Sigmoid()
        )
    
    def forward(self, x):
        z = self.encoder(x)
        x_recon = self.decoder(z)
        return z, x_recon

Choosing an Algorithm

Method     Preserves                 Best For            Complexity
DeepWalk   Local neighborhoods       Social networks     O(V × walks)
node2vec   Homophily + structural    Diverse networks    O(V × walks)
LINE       1st + 2nd order           Large networks      O(E)
struc2vec  Structural identity       Functional roles    O(V² log V)
GraphSAGE  Inductive generalization  Dynamic graphs      O(V × neighbors)

Conclusion

Graph embedding transforms complex graph structures into machine-learning-friendly representations. From DeepWalk’s elegant random walk + Word2Vec combination to node2vec’s flexible exploration control, these methods enable downstream ML tasks on graph data.

In 2026, graph embeddings power:

  • Social network analysis
  • Recommendation systems
  • Drug discovery
  • Fraud detection
  • Knowledge graph completion

Understanding these methods provides essential tools for working with networked data in any domain.
