Case Study: OpenStack Networking and Neutron

Explore the intricacies of OpenStack Networking with this detailed case study focusing on the Neutron component. Learn about VXLAN tunneling, configuration of cloud environments, and traffic flow analysis. Understand how to implement and manage network resources in a virtualized cloud computing environment, ensuring tenant isolation and efficient inter-node communication. Perfect for those looking to deepen their knowledge of open-source cloud technology and Software Defined Networking (SDN).

What is OpenStack Networking and how does Neutron work? This case study describes the implementation of OpenStack networking in a virtualized cloud environment, focusing on the Neutron component, VXLAN tunneling, and traffic flow analysis.

1. Introduction to OpenStack and Cloud Computing

Cloud computing delivers shared processing resources and data on demand over the Internet. OpenStack is a suite of software tools for building and managing cloud computing platforms, both public and private. It orchestrates large pools of compute, networking, and storage resources across a datacenter, managed through a dashboard (Horizon) or the OpenStack APIs.

OpenStack Core Components

  • Dashboard (Horizon): Provides an overview for monitoring all OpenStack services.
  • Keystone: The identity service responsible for user authorization and service access control.
  • Compute (Nova): The primary service for launching and managing virtual machine instances.
  • Networking (Neutron): Delivers Network-as-a-Service (NaaS) in virtual compute environments.
  • Glance: Stores operating system images used for provisioning virtual machine instances.
  • Swift: A storage service dedicated to object storage.
  • Cinder: A storage service providing plug-in block storage.

1.1 OpenStack Networking Overview

OpenStack Networking allows for the creation and management of network objects—including networks, subnets, and ports—for use by other OpenStack services. The system supports various plug-ins to integrate different networking hardware and software, offering flexibility in architecture and deployment.
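
As a generic illustration (not taken from the case study itself), the unified OpenStack client exposes these object types directly; the network name below is a placeholder:

    # List networks, then the subnets and ports attached to one of them
    openstack network list
    openstack subnet list --network <network-name>
    openstack port list --network <network-name>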

1.2 Objective and Scope

The core objective of this case study is to design and implement a cloud-based datacenter using OpenStack, focusing specifically on the networking service provided by the Neutron component.

The implementation involves:

  1. Creating a three-host environment: two compute nodes and one combined network/controller node.
  2. Running these Virtual Machines (VMs) in a virtualized environment on the CentOS 7 operating system.
  3. Utilizing VXLAN tunneling for tenant isolation and Open vSwitch (OVS) for internal connectivity.
  4. Configuring the Virtual Router on the Network node to perform Source Network Address Translation (SNAT) and assign Floating IP addresses for external reachability.

The case study is divided into three key sections:

  1. Installation and Setup: Installing and establishing the cloud and OpenStack components.
  2. Configuration: Configuring the underlay and overlay networks, including Neutron setup, to enable communication among VMs.
  3. Monitoring and Analysis: Monitoring network behavior using Wireshark for traffic flow analysis.

1.3 Infrastructure Details

The project was implemented in VMware vCloud using three interconnected Virtual Machines.

  • VM1 (Compute1): CentOS 7, 16 GB memory, 4 CPU cores, 16 GB disk; network interfaces (management/tunnel/external): ens32: 192.168.13.11, ens34: 172.16.10.101, ens35: 192.168.10.101
  • VM2 (Compute2): CentOS 7, 16 GB memory, 4 CPU cores, 16 GB disk; network interfaces (management/tunnel/external): ens32: 192.168.13.12, ens34: 172.16.10.102, ens35: 192.168.10.102
  • VM3 (Controller): CentOS 7, 8 GB memory, 4 CPU cores, 16 GB disk; network interfaces (management/tunnel/external): ens32: 192.168.13.13, ens34: 172.16.10.10, ens35: 192.168.10.10

1.4 VXLAN Tunneling Explained

VXLAN (Virtual eXtensible LAN) is a tunneling technique used to transfer encapsulated packets between nodes, primarily for tenant separation in the cloud. The original Layer 2 frame is encapsulated with VXLAN, UDP, and Outer IP headers.

  • VXLAN header: Contains the VNI (VXLAN Network Identifier), a 24-bit field that isolates traffic between segments in the overlay network, expanding the number of virtual networks to roughly 16 million (compared with 4096 for VLANs).
  • Outer UDP header: Used for transport; the destination port is the IANA-assigned UDP port 4789.
  • Outer IP header: Carries the IP addresses of the VTEP (VXLAN Tunnel End Point) interfaces: the source IP is the encapsulating VTEP and the destination IP is the decapsulating VTEP.
  • Outer Ethernet/MAC header: Carries the MAC addresses used for transmission across the underlay network.

The encapsulation process increases the frame size by 50 bytes, necessitating the use of Jumbo Frames in the underlay network to prevent fragmentation.
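
The 50-byte figure follows from the added headers: 14 bytes outer Ethernet + 20 bytes outer IPv4 + 8 bytes UDP + 8 bytes VXLAN. Outside of OpenStack, the same encapsulation can be reproduced with a plain Linux VXLAN device; the sketch below is illustrative only and assumes the compute nodes' tunnel addresses are 172.16.10.101 and 172.16.10.102 on interface ens34:

    # Point-to-point VXLAN tunnel endpoint (VNI 1013, IANA port 4789)
    ip link add vxlan1013 type vxlan id 1013 dstport 4789 \
        local 172.16.10.101 remote 172.16.10.102 dev ens34
    # Reduce the inner MTU by the 50-byte overhead unless the underlay carries jumbo frames
    ip link set vxlan1013 mtu 1450 up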

1.5 Software Defined Networking (SDN)

SDN is a centralized technology that manages and programs network devices through an SDN Controller. It separates the network’s forwarding and control planes, enabling centralized intelligence and dynamic network configuration. This centralization simplifies troubleshooting and monitoring compared to decentralized, complex traditional networks.

1.6 OpenDaylight (ODL)

OpenDaylight is an open-source SDN controller platform. It provides a highly available, modular, and extensible infrastructure for SDN deployments across multi-vendor networks. ODL offers a model-driven service abstraction platform, allowing developers to create applications that interact across a wide range of hardware and protocols.

2. Design and Implementation

2.1 Cloud Environment Design

The case study environment features three VMs: Compute1, Compute2, and Controller/Network Node. Two isolated network segments, Tenant A and Tenant B, were created. Each tenant has three instances, distributed across the two compute nodes for cross-node communication testing. The Controller/Network node hosts the Neutron services, providing external access and inter-tenant communication via a virtual router.

The OpenStack components were installed on the underlay network using the Packstack utility.

2.2 Configuration Prerequisites

The following steps were executed on all hosts prior to installation:

  • Networking: Ensure all network interfaces are active and assigned appropriate IP addresses.
  • Host File Configuration (/etc/hosts):
    • 172.16.10.10 controller.example.com controller
    • 172.16.10.101 compute1.example.com compute1
    • 172.16.10.102 compute2.example.com compute2
  • DNS: Configured the DNS server addresses in /etc/resolv.conf.
  • Service Management: Disabled the NetworkManager and firewalld services to prevent interference with Neutron functionality. (Note: Disabling firewalls is for case study simplicity and is not recommended in a production environment).
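
A minimal command sketch of these preparation steps, run on every node, might look as follows (assuming the legacy network service takes over interface management from NetworkManager, as is common on CentOS 7):

    # Stop and disable services that interfere with Neutron (case-study setting only)
    systemctl disable --now NetworkManager firewalld
    systemctl enable --now network    # assumption: legacy network scripts manage the interfaces
    # Name resolution for the three hosts
    cat >> /etc/hosts <<'EOF'
    172.16.10.10  controller.example.com controller
    172.16.10.101 compute1.example.com   compute1
    172.16.10.102 compute2.example.com   compute2
    EOF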

2.3 PackStack Utility Installation

PackStack was installed on the RHEL/CentOS system using the RDO repository:

  1. Installed the RDO repository RPM: sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm (for RHEL) or sudo yum install -y centos-release-openstack-stein (for CentOS).
  2. Enabled the repository: yum-config-manager --enable openstack-stein
  3. Updated packages: sudo yum update -y
  4. Installed PackStack: sudo yum install -y openstack-packstack

2.4 Deployment via Answer File

OpenStack was deployed using the answer-file method, which allows for customized installation, as opposed to the default “All-In-One” option.

  1. Generation: The answer file (answer.txt) was generated: packstack --gen-answer-file=answer.txt
  2. Customization: The file was edited to reflect the required configuration (e.g., in Appendix B; a non-interactive editing sketch follows this list), including:
    • Host Definitions: CONFIG_CONTROLLER_HOST=172.16.10.10, CONFIG_COMPUTE_HOSTS=172.16.10.101,172.16.10.102, CONFIG_NETWORK_HOSTS=172.16.10.10
    • Essential Neutron Configurations:
      • CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
      • CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
      • CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
      • CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
      • CONFIG_NEUTRON_OVS_TUNNEL_IF=eth2
  3. Installation: OpenStack installation was initiated from the controller node: packstack --answer-file=answer.txt. PackStack automatically establishes SSH sessions with the other hosts for deployment.
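
As a sketch of the customization step, the generated answer file can also be edited non-interactively. The keys are those listed above; the use of sed is simply one convenient option, not the method documented in the case study:

    # Apply the host and tenant-network settings to the generated answer file
    sed -i 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=172.16.10.10/' answer.txt
    sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=172.16.10.101,172.16.10.102/' answer.txt
    sed -i 's/^CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=.*/CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan/' answer.txt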

2.5 Post-Installation Verification

After installation, service status was confirmed:

  • API endpoints: openstack endpoint list
  • Network agents (Neutron): openstack network agent list
  • Compute service (Nova): openstack compute service list

3. Creating the Cloud Environment

The installation was followed by the creation of two tenants (projects) and the necessary networking infrastructure.

3.1 Tenant (Project) Creation

  • openstack project create TenantA
  • openstack project create TenantB
  • Users were assigned admin roles within their respective projects.
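
A hedged sketch of the corresponding CLI calls is shown below; the user names userA and userB are hypothetical, since the case study does not name the tenant users:

    # Create a tenant user and grant it the admin role within its project
    openstack user create --project TenantA --password-prompt userA
    openstack role add --project TenantA --user userA admin
    openstack user create --project TenantB --password-prompt userB
    openstack role add --project TenantB --user userB admin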

3.2 External Network Setup

A shared external network was created to provide internet access to both tenants:

  • Network: neutron net-create External-Network --shared --provider:physical_network extnet --provider:network_type flat --router:external=True
  • Subnet: neutron subnet-create --name Public_Subnet --enable_dhcp=False --allocation_pool start=192.168.13.200,end=192.168.13.220 --gateway=192.168.13.1 External-Network 192.168.13.0/24

3.3 Tenant Network Setup

Internal VXLAN networks and subnets were created for instance-to-instance communication within each tenant:

  • Tenant A:
    • Network: openstack network create --project TenantA --enable --internal --provider-network-type=vxlan TenantA_Network
    • Subnet: openstack subnet create --project TenantA --subnet-range 10.1.1.0/24 … --network TenantA_Network TenantA_Subnet (gateway: 10.1.1.1)
  • Tenant B:
    • Network: openstack network create --project TenantB --enable --internal --provider-network-type=vxlan TenantB_Network
    • Subnet: openstack subnet create --project TenantB --subnet-range 10.2.2.0/24 … --network TenantB_Network TenantB_Subnet (gateway: 10.2.2.1)
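
The elided subnet options are not given in the source; a hypothetical complete invocation for Tenant A, assuming DHCP is enabled and using the gateway listed above, could look like this:

    # Hypothetical expansion; only --project, --subnet-range, --network, the gateway,
    # and the subnet name are documented in the case study
    openstack subnet create --project TenantA \
        --network TenantA_Network \
        --subnet-range 10.1.1.0/24 \
        --gateway 10.1.1.1 \
        --dhcp \
        TenantA_Subnet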

3.4 Router Configuration

Virtual routers were created for each tenant to facilitate communication between their internal networks and the shared external network.

  1. Router Creation:
    • openstack router create –project TenantA TenantA_R1
    • openstack router create –project TenantB TenantB_R1
  2. Interface Connection (Internal):
    • openstack router add subnet TenantA_R1 TenantA_Subnet
    • openstack router add subnet TenantB_R1 TenantB_Subnet
  3. External Gateway Configuration:
    • openstack router set --external-gateway External-Network TenantA_R1
    • openstack router set --external-gateway External-Network TenantB_R1
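
Each Neutron router is realized as a network namespace (qrouter-<router-UUID>) on the Network node, which becomes useful later when tracing SNAT/DNAT behavior. A quick verification sketch, with the router UUID left as a placeholder:

    # On the controller/network node
    ip netns list                                  # expect qrouter-<UUID> entries
    openstack router show TenantA_R1 -c id         # obtain the UUID used in the namespace name
    ip netns exec qrouter-<router-id> ip addr      # internal (qr-*) and external (qg-*) interfaces
    ip netns exec qrouter-<router-id> ip route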

3.5 Image, Volume, and Security Setup

  • Image Loading (Glance): The cirros OS image was downloaded and uploaded to the Glance service.
  • Volume Creation (Cinder): Volumes were created for VM booting, e.g., openstack volume create --project TenantA --image cirros --size 1 --availability-zone nova VolA1.
  • Security Groups: Security groups (groupA, groupB) were created for each tenant to allow all incoming (ingress) traffic for testing purposes.
  • Key Pairs: SSH key pairs (keyA, keyB) were generated for secure access to the launched instances.

After these steps, instances were launched using the Horizon dashboard, and Floating IPs were associated with them (e.g., 192.168.13.215 to VM_A1) to provide external reachability via DNAT (Destination Network Address Translation) on the virtual router.
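
For reference, equivalent CLI steps for the security rules, key pairs, and floating IP association are sketched below. The study performed some of these through Horizon, so the exact commands are assumptions, and the permissive "allow all ingress" testing rules should not be reused in production:

    # Allow ICMP and SSH into Tenant A's instances (illustrative subset of "allow all ingress")
    openstack security group rule create --ingress --protocol icmp groupA
    openstack security group rule create --ingress --protocol tcp --dst-port 22 groupA
    # Generate a key pair and keep the private key locally
    openstack keypair create keyA > keyA.pem && chmod 600 keyA.pem
    # Allocate a floating IP from the external network and attach it to the instance
    # (the specific address comes from the text; allocation normally picks one from the pool)
    openstack floating ip create External-Network
    openstack server add floating ip VM_A1 192.168.13.215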

4. Analysis of Network Traffic Flow

Five scenarios were tested to analyze traffic behavior using Wireshark, focusing on the roles of the Linux Bridge, OVS Integration Bridge (Br-int), OVS Tunnel Bridge (Br-tun), and Router Namespace.

4.1 Scenario 1: Intra-Node Communication (VM to VM on Same Compute Node)

Test: Ping between VM_A1 (10.1.1.119) and VM_A2 (10.1.1.148) in the same tenant and on the same Compute1 node.

Flow: The packet is forwarded from the VM’s virtual interface (TAP) to the Linux Bridge (where security-group rules are applied) and then to Br-int. Because the destination is on the same node, the destination MAC is resolved (via ARP if needed) and Br-int switches the packet directly out through the destination VM’s Linux Bridge/TAP interface to VM_A2.

Observation: The packet did not leave the Compute1 node and was not encapsulated with VXLAN, UDP, or IP headers.
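
This observation can be reproduced with packet captures on Compute1. The tap device name below is a placeholder (Neutron typically names it tap followed by the first part of the port UUID), and ens34 is assumed to be the tunnel interface:

    # Identify the bridges and ports wired up by Neutron/OVS on Compute1
    ovs-vsctl show
    # ICMP is visible on the source VM's tap interface ...
    tcpdump -nei tap<port-id-prefix> icmp
    # ... but no VXLAN traffic appears on the tunnel interface for this flow
    tcpdump -ni ens34 udp port 4789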

4.2 Scenario 2: Inter-Node Communication (VM to VM on Different Compute Nodes)

Test: Ping between VM_A1 (Compute1) and VM_A3 (Compute2) in the same tenant.

Flow:

  1. Packet reaches Br-int on Compute1. Since the destination is remote, Br-int applies an internal VLAN tag for tenant differentiation.
  2. Br-int forwards the packet to Br-tun.
  3. Br-tun encapsulates the packet with a VXLAN header (including VNI 1013), followed by a UDP header (Dst Port: 4789), and finally the Outer IP header (Src IP: Compute1 VTEP, Dst IP: Compute2 VTEP).
  4. The encapsulated packet is routed via the overlay network to Compute2.
  5. On Compute2, the packet is received by Br-tun, which decapsulates it.
  6. Br-tun forwards it to Br-int, which removes the VLAN tag.
  7. Br-int forwards the packet to the destination VM_A3 via the Linux Bridge/TAP.

Observation: The packet was fully encapsulated and routed via the VXLAN tunnel between the compute nodes.
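
A hedged way to confirm the encapsulation, again assuming ens34 is the tunnel interface on Compute1:

    # Outer headers: the VTEP IPs and UDP port 4789 are visible on the underlay
    tcpdump -ni ens34 udp port 4789
    # In Wireshark, isolate Tenant A's overlay segment with the display filter:
    #   vxlan.vni == 1013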

4.3 Scenario 3: VM to External Network (Internet)

Test: Ping from VM_A1 (Compute1) to Google DNS (8.8.8.8).

Flow:

  1. Steps 1-4 are identical to Scenario 2, except that the Outer IP destination is the Network Node’s VTEP IP.
  2. At the Network Node, the packet is decapsulated by Br-tun and Br-int.
  3. The packet is forwarded to the Router Namespace (TenantA_R1).
  4. The router performs SNAT, translating the VM’s source IP (e.g., 10.1.1.119) to the router’s external IP (e.g., 192.168.13.214).
  5. The SNAT’d packet is forwarded to the external network interface for transmission to 8.8.8.8.
  6. The return ICMP Echo Reply follows the reverse path: the router consults its NAT (connection-tracking) table to restore the destination IP to the VM’s internal address, and the packet is then encapsulated and tunneled back to Compute1.

Observation: Communication with the external network requires VXLAN tunneling to the Network Node, where the virtual router performs SNAT for outbound traffic.
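
The SNAT behavior can be inspected inside the router's namespace on the Network node; the router UUID is a placeholder, and br-ex is the external bridge named in the answer file:

    # Show the SNAT rules installed by the L3 agent for this router
    ip netns exec qrouter-<router-id> iptables -t nat -S | grep SNAT
    # On the external side, the ICMP source should be the router's external address
    # (192.168.13.214 in the text), not the VM's 10.1.1.119
    tcpdump -ni br-ex icmp and host 8.8.8.8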

4.4 Scenario 4: External Network Access to VM (SSH via Floating IP)

Test: SSH connection from an external network Workstation to VM_A1 using its Floating IP (192.168.13.215).

Flow:

  1. The SSH request (Dst IP: Floating IP 192.168.13.215) arrives at the Network Node’s physical external interface.
  2. The packet is forwarded through the OVS external/provider bridge (br-ex, as defined in the answer file) to Br-int and then to the Router Namespace.
  3. The router performs DNAT, translating the destination Floating IP (192.168.13.215) to the VM’s internal IP (10.1.1.119).
  4. The DNAT’d packet is routed back through Br-int, where it receives a VLAN tag.
  5. The packet is forwarded to Br-tun, encapsulated with VXLAN, UDP, and IP headers, and tunneled to the Compute1 node.
  6. At Compute1, the packet is decapsulated by Br-tun and Br-int, and finally delivered to VM_A1.

Observation: The router on the Network Node is responsible for DNAT, mapping the Floating IP to the internal IP before the packet is VXLAN tunneled to the correct compute node.
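
The floating-IP mapping can likewise be verified in the router namespace; the rule shown in the comment is illustrative, since exact chain names vary with the Neutron release:

    # Look up the DNAT rule for the floating IP inside TenantA_R1's namespace
    ip netns exec qrouter-<router-id> iptables -t nat -S | grep 192.168.13.215
    # Expected shape of the match (illustrative):
    #   ... -d 192.168.13.215/32 -j DNAT --to-destination 10.1.1.119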
