
05 Modern Architectures

Modern Architectures: Software Defined Networking

Prerequisites: 02-Network-Layer-and-Routing

Learning Goals: After reading this, you will understand the SDN paradigm, control/data plane separation, the evolution from Active Networks to OpenFlow, controller architectures (ONOS), P4 programmability, and SDX applications.

Introduction

Traditional networks are complex, proprietary, and slow to innovate. Routers tightly couple control logic (routing protocols) with forwarding hardware, making it difficult to deploy new protocols or network-wide policies. Software Defined Networking (SDN) fundamentally changes this by separating the control plane from the data plane, enabling programmable networks managed by centralized (or logically centralized) controllers.

Key Innovation: Move intelligence to software controllers, reduce switches to simple forwarding elements controlled via open APIs.


The Problem with Traditional Networks

Challenges

1. Complexity: Thousands of devices run distributed protocols and must be configured one by one with vendor-specific commands

2. Vendor Lock-in: Proprietary hardware and software mean features arrive on vendor timetables, and mixing vendors is painful

3. Slow Innovation: A new protocol needs standardization plus implementation by every vendor - a process measured in years

4. Limited Control: No single vantage point has a network-wide view, so global policies are hard to express and harder to verify

Example Problem:

Goal: Route video traffic through high-bandwidth links, other traffic through normal links

Traditional approach:
  - Configure every router individually
  - Use complex BGP communities and MPLS tunnels
  - Error-prone, hard to verify

Result: Hours/days to deploy, difficult to troubleshoot

The SDN Solution

Core Principle: Separation of Concerns

Traditional Router: Control and Data Plane tightly coupled in one box

SDN Approach: Separate the planes - switches keep simple flow tables, while a remote, programmable controller computes and installs all forwarding state

Benefits:

  1. Centralized Logic: Network-wide view enables global optimization
  2. Programmability: Write software to control network behavior
  3. Innovation: Deploy new protocols without hardware changes
  4. Vendor Neutrality: Open interfaces (OpenFlow) allow multi-vendor networks

Analogy:

Traditional Network = Each car has its own GPS and decides route independently
SDN Network = Central traffic control system directs all cars (network-wide optimization)

History and Evolution of SDN

1. Active Networks (Mid-1990s to Early 2000s)

Goal: Make networks programmable by allowing users to inject code

Two Approaches:

Capsule Model: Programs travel in-band - packets ("capsules") carry code that routers execute as they forward them

Programmable Router Model: Programs are installed on routers out-of-band by operators; packet headers select which installed program processes them

Vision: Accelerate protocol deployment by allowing experimentation

Why It Failed: Executing arbitrary code raised serious security and isolation concerns, performance could not match hardware forwarding, and there was no compelling killer application

Legacy: Inspired SDN’s programmability concept, but SDN learned to separate control from data plane


2. Control/Data Plane Separation (2001-2007)

Motivation: Improve network reliability and manageability

Key Projects:

ForCES (Forwarding and Control Element Separation): An IETF standard protocol between separate control and forwarding elements; technically sound but saw little vendor adoption

RCP (Routing Control Platform): Computes BGP routes at a logically centralized platform and pushes them to routers over ordinary iBGP, requiring no router changes

Ethane (Precursor to OpenFlow): A Stanford project pairing a centralized controller with flow-based switches to enforce fine-grained, identity-based security policies; its switch design led directly to OpenFlow

Insight: Logically centralized control simplifies management and enables network-wide policies


3. OpenFlow and Modern SDN (2007-Present)

Catalyst: Need for network experimentation in research networks

Problem: Production networks cannot experiment with new protocols (risk downtime)

Solution: Network Slicing - Run experimental protocols alongside production traffic

OpenFlow (2007): A Stanford proposal defining a standard, open API for programming the flow tables of commodity switches

Key Innovation: Commodity Ethernet switches already had flow tables (for VLANs, ACLs) - OpenFlow just opened access to them

Impact: Adopted by vendors and major operators (e.g., Google's B4 WAN), and spawned an ecosystem of open-source controllers and the network virtualization industry


SDN Architecture

The Layered Model

SDN Stack (Bottom to Top):

1. Infrastructure Layer (Data Plane): Physical and virtual switches that do nothing but forward packets according to their flow tables

2. Southbound Interface (Control-Data Plane API): The protocol (e.g., OpenFlow) the controller uses to install rules in switches and receive events from them

3. Network Operating System (Control Plane): The controller; maintains a global network view and provides common services (topology, path computation, flow rule management)

4. Northbound Interface (Application API): APIs (typically REST) through which applications express intent without dealing with switch details

5. Application Layer: Network applications such as routing, firewalling, load balancing, and monitoring

Diagram:

+--------------------+
|   Applications     |  (Routing, Firewall, Load Balancer)
+--------------------+
         ↕ Northbound API (REST)
+--------------------+
|   Controller       |  (ONOS, OpenDaylight)
|  (Network OS)      |
+--------------------+
         ↕ Southbound API (OpenFlow)
+--------------------+
|    Switches        |  (Forwarding only)
+--------------------+

Flow-Based Forwarding

Traditional Routing: Match on destination IP → forward to next hop

SDN Flow Forwarding: Match on any combination of header fields → execute actions

Flow Table Entry:

Match Fields           | Priority | Actions        | Counters
-----------------------|----------|----------------|----------
src=10.0.0.1, dst=*    |   100    | Forward port 3 | 5000 pkts
dst=192.168.1.0/24     |    50    | Forward port 1 | 10000 pkts
*                      |     1    | Drop           | 200 pkts

Match Fields (12 fields in OpenFlow 1.0, more in later versions): Ingress port; Ethernet src/dst/type; VLAN ID and priority; IP src/dst, protocol, ToS; TCP/UDP src/dst ports

Actions: Forward to one or more ports, flood, send to controller, drop, or modify header fields (e.g., rewrite MACs, set VLAN tag)

Matching Process:

  1. Packet arrives at switch
  2. Match against flow table (highest priority first)
  3. If match found: Execute actions
  4. If no match: Send to controller (or drop, depending on config)
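The matching process above can be sketched in a few lines of Python; the rule format and helper names here are illustrative, not a real switch API:

```python
# A minimal sketch of priority-ordered flow-table matching.

def matches(match_fields, packet):
    """Fields absent from the match are wildcards; present fields must agree."""
    return all(packet.get(f) == v for f, v in match_fields.items())

def lookup(flow_table, packet):
    """Return the actions of the highest-priority matching entry,
    or None (packet-in to controller, or drop) if nothing matches."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if matches(entry["match"], packet):
            entry["counter"] += 1          # per-entry packet counter
            return entry["actions"]
    return None

flow_table = [
    {"match": {"src": "10.0.0.1"}, "priority": 100,
     "actions": ["forward:3"], "counter": 0},
    {"match": {}, "priority": 1, "actions": ["drop"], "counter": 0},  # table-miss
]

lookup(flow_table, {"src": "10.0.0.1", "dst": "10.0.0.2"})  # → ["forward:3"]
lookup(flow_table, {"src": "10.0.0.9"})                     # → ["drop"]
```

Note how the empty match acts as the table-miss entry: it matches everything, but only after every higher-priority entry has been tried.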

OpenFlow Protocol

Controller-to-Switch Messages:

1. Flow Mod (Modify Flow Table): Add, update, or delete flow table entries

2. Stats Request: Query per-flow, per-port, and per-table counters

3. Packet Out: Inject a packet into the network through a specified switch port

Switch-to-Controller Messages:

1. Packet In: Send a packet (or its header) to the controller when no flow entry matches, or when a matching entry says to

2. Flow Removed: Notify the controller that an entry expired (idle/hard timeout) or was deleted

3. Port Status: Report link up/down events and port configuration changes

Example Flow:

1. New flow arrives at switch (src=10.0.0.1, dst=10.0.0.2)
2. No matching flow entry → Switch sends Packet-In to controller
3. Controller computes path: Switch A port 3 → Switch B port 2 → dst
4. Controller sends Flow-Mod to Switch A: "Match src=10.0.0.1, dst=10.0.0.2 → Forward port 3"
5. Future packets in this flow forwarded directly (no controller involvement)

Granularity: Flow entries can be specific (per-connection) or aggregate (per-prefix)
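The granularity point can be made concrete: a microflow entry pins down one connection, while an aggregate entry covers a whole prefix. This is illustrative Python, not OpenFlow syntax:

```python
import ipaddress

# Per-connection (microflow) entry: exact match on the full 5-tuple,
# so the switch needs one table entry per TCP connection.
microflow = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
             "proto": "tcp", "src_port": 51812, "dst_port": 80}

# Aggregate entry: match only the destination prefix, so a single
# entry covers every flow toward 10.0.0.0/24.
aggregate = ipaddress.ip_network("10.0.0.0/24")

def covered_by_aggregate(dst_ip):
    return ipaddress.ip_address(dst_ip) in aggregate

covered_by_aggregate("10.0.0.2")   # True - handled by the one aggregate rule
covered_by_aggregate("10.1.0.2")   # False - would trigger a packet-in
```

Microflow entries give precise control (per-connection stats, per-connection paths) but consume scarce TCAM space; aggregate entries scale but cannot distinguish individual flows.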


SDN Controllers

Centralized vs. Distributed Controllers

Centralized Controller (Single instance): One controller process manages every switch in the network

Examples: POX, Floodlight, Ryu

Advantages: Simple programming model; globally consistent view by construction

Disadvantages: Single point of failure; throughput and latency limits as the network grows

Use Case: Small networks, research, prototyping


Distributed Controller (Multiple instances in cluster): Each instance manages a subset of switches, and instances share state so the network still looks logically centralized

Examples: ONOS, OpenDaylight

Goal: Scalability and fault tolerance

Architecture: A cluster of peer instances; each switch is assigned one master instance, and network state is replicated across the cluster

Challenges:

  1. State Consistency: How to keep controllers’ views synchronized?
  2. Fault Tolerance: How to handle controller failures?
  3. Scalability: How to distribute load across controllers?

ONOS (Open Networking Operating System)

Design Philosophy: Distributed, Scalable, Fault-Tolerant

Architecture:

1. Controller Cluster: Multiple ONOS instances run as symmetric peers, sharing network state through a distributed store

2. Mastership Election: Each switch has exactly one master instance (the rest are backups); if the master fails, a backup is elected in its place

3. Global Network View: Every instance exposes the same logical topology graph, so applications are written as if against a single controller

Example:

Cluster: ONOS-1, ONOS-2, ONOS-3

Switch A: Master = ONOS-1, Backup = ONOS-2, ONOS-3
Switch B: Master = ONOS-2, Backup = ONOS-1, ONOS-3

Link A-B fails:
  → Switch A notifies ONOS-1 (master)
  → ONOS-1 updates distributed store
  → ONOS-2, ONOS-3 receive update
  → All have consistent view within milliseconds

ONOS-1 fails:
  → ONOS-2 detects failure (via heartbeat)
  → ONOS-2 becomes new master for Switch A
  → Applications continue running on ONOS-2
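The failover sequence above can be simulated with a toy mastership table. Here an in-memory dict stands in for ONOS's replicated distributed store, which in reality uses consensus to stay strongly consistent:

```python
# Toy ONOS-style mastership table: for each switch, the master comes
# first, followed by ordered backups.
mastership = {
    "SwitchA": ["ONOS-1", "ONOS-2", "ONOS-3"],
    "SwitchB": ["ONOS-2", "ONOS-1", "ONOS-3"],
}

def master_of(switch):
    return mastership[switch][0]

def fail_controller(node):
    """On failure, drop the node everywhere; the first surviving
    backup for each affected switch is thereby promoted to master."""
    for switch in mastership:
        mastership[switch] = [c for c in mastership[switch] if c != node]

fail_controller("ONOS-1")
master_of("SwitchA")  # → "ONOS-2" (promoted backup)
master_of("SwitchB")  # → "ONOS-2" (unchanged)
```

Because every instance sees the same mastership table, applications keep running on the survivors without reconfiguration.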

Benefits: No single point of failure, capacity that scales by adding instances, and failover that is transparent to applications


Controller Services

Common Services Provided by SDN Controllers:

1. Topology Service: Discovers switches, links (e.g., via LLDP probes), and hosts, and maintains the network graph

2. Path Computation: Computes shortest or constrained paths over the current topology

3. Flow Rule Management: Translates application requests into flow table entries and tracks their installation and removal

4. Device Management: Maintains connections to switches and tracks their capabilities and port states

5. Statistics Collection: Periodically polls flow and port counters for monitoring and traffic engineering

Application Example:

# Pseudo-code for an L2 learning-switch app
# (install_flow, compute_path, install_path, and flood are assumed
#  controller primitives; mac_table is a global dict)

def packet_in_handler(event):
    packet = event.packet
    switch = event.switch
    in_port = event.in_port

    # Learn source MAC → (switch, port) mapping
    mac_table[packet.src_mac] = (switch, in_port)

    # Look up the destination
    if packet.dst_mac in mac_table:
        out_switch, out_port = mac_table[packet.dst_mac]
        if out_switch == switch:
            # Destination is on this switch: install a flow rule
            install_flow(switch, match={dst_mac: packet.dst_mac},
                         action={output: out_port})
        else:
            # Destination is elsewhere: compute a path and install
            # rules on every switch along it
            path = compute_path(switch, out_switch)
            install_path(path, packet.dst_mac)
    else:
        # Unknown destination: flood on all ports except the ingress port
        flood(switch, packet, in_port)

Programming the Data Plane: P4

Motivation

Problem with OpenFlow: The set of match fields is fixed by the specification

Example: OpenFlow 1.0 defined 12 match fields; by version 1.4 the list had grown to roughly 40, and each new header (e.g., VXLAN) required a new spec revision and often new hardware

Limitation: Innovation bottlenecked by standardization process

P4 Solution: Make the data plane itself programmable


P4 Overview

Name: Programming Protocol-independent Packet Processors

Goal: Allow operators to define:

  1. What headers switches should recognize
  2. How to parse those headers
  3. How to process (match-action) packets

Key Properties:

1. Reconfigurability: Operators can change how deployed switches parse and process packets in the field

2. Protocol Independence: Switches are not wired to specific protocols; the P4 program defines which headers exist

3. Target Independence: The same program can compile to different targets - ASICs, FPGAs, NPUs, or software switches


P4 Programming Model

Two Main Components:

1. Parser: A state machine describing which headers to extract from each packet, and in what order

Example (simplified):

parser start {
    extract(ethernet);
    return select(ethernet.etherType) {
        0x0800: parse_ipv4;
        0x86DD: parse_ipv6;
        default: ingress;
    }
}

parser parse_ipv4 {
    extract(ipv4);
    return ingress;
}

2. Match-Action Tables: Declare which fields a table matches on and which actions it may apply

Example:

table ipv4_forwarding {
    reads {
        ipv4.dstAddr : lpm;  // Longest prefix match
    }
    actions {
        forward;
        drop;
    }
}

action forward(port) {
    modify_field(standard_metadata.egress_spec, port);
    add_to_field(ipv4.ttl, -1);  // Decrement TTL
}

Control Flow:

control ingress {
    apply(ipv4_forwarding);
}
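To make the parse → match-action pipeline tangible, here is a toy interpreter in Python that mirrors the P4 snippets above. Field names match the snippets, but nothing here is a real P4 API, and longest-prefix match is simplified to prefix-string matching:

```python
ETHERTYPE_IPV4 = 0x0800

def parse(raw):
    """Mimic the parser: extract ethernet, then ipv4 if etherType says so."""
    headers = {"ethernet": {"etherType": raw["etherType"]}}
    if raw["etherType"] == ETHERTYPE_IPV4:
        headers["ipv4"] = {"dstAddr": raw["dstAddr"], "ttl": raw["ttl"]}
    return headers

# ipv4_forwarding table: (prefix, egress port); longest prefix wins.
ipv4_forwarding = [("10.0.1.", 3), ("10.", 1)]

def ingress(headers):
    """Mimic 'apply(ipv4_forwarding)': match dstAddr, run the action."""
    if "ipv4" not in headers:
        return "drop"
    for prefix, port in sorted(ipv4_forwarding, key=lambda e: -len(e[0])):
        if headers["ipv4"]["dstAddr"].startswith(prefix):
            headers["ipv4"]["ttl"] -= 1   # the forward action decrements TTL
            return f"forward:{port}"
    return "drop"

pkt = parse({"etherType": 0x0800, "dstAddr": "10.0.1.5", "ttl": 64})
ingress(pkt)  # → "forward:3", and pkt's TTL is now 63
```

The key property the toy preserves: the program, not the hardware, decides which headers exist and how they are processed.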

P4 Use Cases

1. Custom Protocols: Deploy new header formats (e.g., source-routing labels) without waiting for new hardware

2. In-Network Computing: Offload work such as aggregation or caching into switches

3. Network Telemetry: Collect per-packet path, latency, and queue information directly in the data plane

4. Rapid Prototyping: Test protocol ideas on software targets before committing to silicon

Example: INT (In-band Network Telemetry):

// Add switch metadata to packet
action add_int_metadata() {
    push(int_stack, 1);  // Add metadata header
    modify_field(int_stack[0].switch_id, switch_id);
    modify_field(int_stack[0].queue_depth, queue_depth);
    modify_field(int_stack[0].timestamp, timestamp);
}

Result: Packets carry detailed path information for debugging


SDN Applications

SDX (Software Defined Internet Exchange)

Problem: BGP limitations at IXPs (Internet Exchange Points)

IXP Reminder: A shared switching fabric where many ASes interconnect and exchange routes, typically through BGP route servers

BGP Limitations:

  1. Destination-only routing: Can only route based on destination prefix
  2. No application awareness: Cannot route video differently from email
  3. No source-based routing: Cannot prefer certain peers for specific traffic
  4. Coarse granularity: Route entire prefixes, not specific flows

Example Problem:

AS 100 at IXP wants:
  - Route video traffic (port 443, Netflix) via Peer A (high bandwidth)
  - Route other traffic via Peer B (cheaper)

BGP cannot do this: Only destination prefix matching

SDX Architecture

Goal: Give each IXP participant the illusion of their own virtual SDN switch

How It Works:

1. Virtual Switch Abstraction: Each participant programs what appears to be its own dedicated switch connecting it to every peer

2. SDX Controller: Collects all participants' policies, combines them with BGP route information, and installs equivalent rules on the shared fabric

3. Policy Composition: Policies from different ASes are composed so they do not conflict (e.g., AS 100's outbound policy is chained with AS 200's inbound policy)

Example:

AS 100 policy:
  match: dst=192.168.0.0/16, app=video → forward to AS 200 (Peer A)
  match: dst=192.168.0.0/16, app=other → forward to AS 300 (Peer B)

AS 200 policy:
  match: src=AS 100, dst=10.0.0.0/8 → forward to AS 400

SDX Controller:
  Compiles both policies into flow rules on physical switches
  Installs rules that satisfy both ASes' intents
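A minimal sketch of that compilation step, assuming a naive priority-based composition (the real SDX compiler also folds in BGP route information and resolves conflicts between participants):

```python
# Each participant's policy: (match, action) pairs on its virtual
# switch. Match dictionaries and action strings are illustrative.
as100_policies = [
    ({"dst_prefix": "192.168.0.0/16", "app": "video"}, "fwd:AS200"),
    ({"dst_prefix": "192.168.0.0/16", "app": "other"}, "fwd:AS300"),
]
as200_policies = [
    ({"src_as": 100, "dst_prefix": "10.0.0.0/8"}, "fwd:AS400"),
]

def compile_policies(*participants):
    """Flatten every participant's rules into one prioritized rule set
    for the physical fabric (descending priority, listed order wins)."""
    fabric_rules, priority = [], 1000
    for policies in participants:
        for match, action in policies:
            fabric_rules.append({"match": match, "action": action,
                                 "priority": priority})
            priority -= 10
    return fabric_rules

fabric = compile_policies(as100_policies, as200_policies)
len(fabric)  # → 3 rules installed on the shared IXP fabric
```

Each AS still reasons only about its own virtual switch; the priority ordering is an internal detail of how the controller maps composed intents onto one physical table.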

Benefits:

  1. Application-aware routing: Route based on ports, protocols
  2. Traffic engineering: Fine-grained control over traffic paths
  3. Flexibility: Change policies in seconds (vs. hours with BGP)
  4. Transparency: Each AS controls its own policies independently

Deployment: Several IXPs (e.g., AMS-IX research testbed) have deployed SDX


Summary

Key Takeaways

  1. SDN Paradigm:

    • Separates control plane (software) from data plane (hardware)
    • Centralized/logically centralized control enables network-wide optimization
    • Open interfaces (OpenFlow) break vendor lock-in
  2. Evolution:

    • Active Networks (1990s): Programmability via code injection (failed due to security/performance)
    • Control/Data Separation (2000s): Centralized control for better management
    • OpenFlow/SDN (2007+): Standardized API, flow-based forwarding, practical deployment
  3. SDN Architecture:

    • Layers: Applications → Northbound API → Controller → Southbound API → Switches
    • Flow-based forwarding: Match on multiple fields, execute actions
    • Controllers: Centralized (simple) vs. Distributed (scalable, fault-tolerant like ONOS)
  4. P4 Programming:

    • Makes data plane programmable (define parsers and match-action tables)
    • Protocol-independent, target-independent
    • Enables rapid innovation (custom protocols, in-network computing)
  5. SDX Application:

    • Applies SDN to IXPs
    • Overcomes BGP limitations (destination-only routing)
    • Application-aware, fine-grained traffic engineering

Common Patterns

SDN Design Principles: Separate concerns, logically centralize state, and expose open, minimal interfaces between layers

Trade-offs: Centralization simplifies control logic but raises scalability and fault-tolerance questions; fine-grained flow rules give precise control but consume limited switch table space

Application Development: Write against the controller's global network view, react to events (packet-in, topology changes), and let the controller handle rule installation


See Also

Next: 06-Application-Layer-Services