Explore the intricacies of OpenStack Networking with this detailed case study focusing on the Neutron component. Learn about VXLAN tunneling, configuration of cloud environments, and traffic flow analysis. Understand how to implement and manage network resources in a virtualized cloud computing environment, ensuring tenant isolation and efficient inter-node communication. Perfect for those looking to deepen their knowledge of open-source cloud technology and Software Defined Networking (SDN).
What is OpenStack Networking and How Does Neutron Work?
Cloud computing, an internet-based service, delivers shared computer processing resources and data on demand. OpenStack is a suite of software tools designed for building and managing cloud computing platforms for both public and private clouds. It orchestrates large pools of compute, networking, and storage resources across a datacenter, managed via a dashboard and a dedicated API.
| Component | Function |
| --- | --- |
| Dashboard (Horizon) | Provides an overview for monitoring all OpenStack services. |
| Keystone | The identity service responsible for user authorization and service access control. |
| Compute (Nova) | The primary service for launching and managing virtual machine instances. |
| Networking (Neutron) | Delivers Network-as-a-Service (NaaS) in virtual compute environments. |
| Glance | Stores operating system images used for provisioning virtual machine instances. |
| Swift | A storage service dedicated to object storage. |
| Cinder | A storage service providing plug-in block storage. |
OpenStack Networking allows for the creation and management of network objects—including networks, subnets, and ports—for use by other OpenStack services. The system supports various plug-ins to integrate different networking hardware and software, offering flexibility in architecture and deployment.
The core objective of this case study is to design and implement a cloud-based datacenter using OpenStack, focusing specifically on the networking service provided by the Neutron component.
The case study is divided into three key sections: the environment setup, the OpenStack installation and network configuration, and the traffic flow analysis.
The project was implemented in VMware vCloud using three interconnected Virtual Machines.
| Name | OS | Memory | CPU Cores | Disk | Network Interfaces (Management/Tunnel/External) |
| --- | --- | --- | --- | --- | --- |
| VM1 (Compute1) | CentOS 7 | 16 GB | 4 | 16 GB | ens32: 192.168.13.11, ens34: 172.16.10.101, ens35: 192.168.10.101 |
| VM2 (Compute2) | CentOS 7 | 16 GB | 4 | 16 GB | ens32: 192.168.13.12, ens34: 172.16.10.102, ens35: 192.168.10.102 |
| VM3 (Controller) | CentOS 7 | 8 GB | 4 | 16 GB | ens32: 192.168.13.13, ens34: 172.16.10.10, ens35: 192.168.10.10 |
VXLAN (Virtual eXtensible LAN) is a tunneling technique used to transfer encapsulated packets between nodes, primarily for tenant separation in the cloud. The original Layer 2 frame is encapsulated with VXLAN, UDP, and Outer IP headers.
| Header | Key Role |
| --- | --- |
| VXLAN Header | Contains the VNI (VXLAN Network Identifier), a 24-bit field that isolates traffic between segments in the overlay network, expanding the virtual network range to roughly 16 million identifiers (versus 4096 for VLANs). |
| Outer UDP Header | Transports the encapsulated frame; the destination port is the IANA-assigned UDP port 4789. |
| Outer IP Header | Contains the IP addresses of the VTEP (VXLAN Tunnel End Point) interfaces: the source IP is the encapsulating VTEP, the destination IP is the decapsulating VTEP. |
| Outer Ethernet/MAC Header | Carries the MAC addresses used for transmission across the underlay network. |
The encapsulation process increases the frame size by 50 bytes, necessitating the use of Jumbo Frames in the underlay network to prevent fragmentation.
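The 50-byte figure and the VNI range follow directly from the header sizes above. A quick sanity check (assuming an IPv4 outer header with no options):

```shell
# Outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN (8) = 50 bytes
echo "encapsulation overhead: $((14 + 20 + 8 + 8)) bytes"
# The 24-bit VNI field yields the number of distinct virtual networks
echo "VNIs: $((1 << 24))"   # 16777216, versus 4096 (2^12) VLAN IDs
```

With IPv6 as the outer protocol the overhead grows to 70 bytes, which is why the underlay MTU is usually raised rather than tuned to an exact value.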
SDN is a centralized technology that manages and programs network devices through an SDN Controller. It separates the network’s forwarding and control planes, enabling centralized intelligence and dynamic network configuration. This centralization simplifies troubleshooting and monitoring compared to decentralized, complex traditional networks.
OpenDaylight is an open-source SDN controller platform. It provides a highly available, modular, and extensible infrastructure for SDN deployments across multi-vendor networks. ODL offers a model-driven service abstraction platform, allowing developers to create applications that interact across a wide range of hardware and protocols.
The case study environment features three VMs: Compute1, Compute2, and Controller/Network Node. Two isolated network segments, Tenant A and Tenant B, were created. Each tenant has three instances, distributed across the two compute nodes for cross-node communication testing. The Controller/Network node hosts the Neutron services, providing external access and inter-tenant communication via a virtual router.
The OpenStack components were installed on the underlay network using the Packstack utility.
The following steps were executed on all hosts prior to installation:
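A typical preparation for a Packstack deployment on CentOS 7 might look like the following sketch; the specific services disabled and the host-name mapping are assumptions based on common practice, not taken verbatim from the study:

```shell
# Run as root on every node. Packstack configures networking itself,
# so firewalld and NetworkManager are commonly disabled beforehand.
systemctl disable --now firewalld NetworkManager
systemctl enable --now network      # classic network service instead
setenforce 0                        # permissive SELinux for the install
# Name resolution between nodes over the management network
cat >> /etc/hosts << 'EOF'
192.168.13.11 compute1
192.168.13.12 compute2
192.168.13.13 controller
EOF
```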
Packstack was installed on the CentOS 7 system using the RDO repository:
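The study does not name the OpenStack release, so the repository package below (Queens, a release available for CentOS 7) is an assumption:

```shell
# Enable the RDO repository, update, and install the Packstack installer
yum install -y centos-release-openstack-queens   # release name is an assumption
yum update -y
yum install -y openstack-packstack
```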
OpenStack was deployed using the answer-file method, which allows for customized installation, as opposed to the default “All-In-One” option.
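The answer-file workflow can be sketched as follows; the file path and the example keys to edit are assumptions:

```shell
# Generate a default answer file, customize it, then deploy from it
packstack --gen-answer-file=/root/answers.txt
# Edit /root/answers.txt before deploying, e.g. set CONFIG_COMPUTE_HOSTS
# to the two compute nodes and the Neutron ML2 options to use VXLAN
packstack --answer-file=/root/answers.txt
```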
After installation, service status was confirmed:
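For example, the registered services and running agents can be listed with the OpenStack client; the credentials file path is the Packstack default:

```shell
source /root/keystonerc_admin    # admin credentials written by Packstack
openstack service list           # nova, neutron, glance, ... registered in Keystone
openstack compute service list   # nova services on both compute nodes
openstack network agent list     # neutron agents: OVS, DHCP, L3, metadata
```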
The installation was followed by the creation of two tenants (projects) and the necessary networking infrastructure.
A shared external network was created to provide internet access to both tenants:
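A sketch of such a command, assuming a flat provider network labelled `extnet` and the 192.168.13.0/24 external range from the environment table:

```shell
# Shared external network, reachable by both tenants
openstack network create --share --external \
  --provider-network-type flat --provider-physical-network extnet \
  External_Network
# Subnet without DHCP; the allocation pool (an assumption) reserves
# addresses for floating IPs such as 192.168.13.215
openstack subnet create --network External_Network \
  --subnet-range 192.168.13.0/24 --gateway 192.168.13.1 --no-dhcp \
  --allocation-pool start=192.168.13.200,end=192.168.13.250 \
  External_Subnet
```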
Internal VXLAN networks and subnets were created for instance-to-instance communication within each tenant:
| Tenant | Network Command | Subnet Command |
| --- | --- | --- |
| Tenant A | `openstack network create --project TenantA --enable --internal --provider-network-type=vxlan TenantA_Network` | `openstack subnet create --project TenantA --subnet-range 10.1.1.0/24 … --network TenantA_Network TenantA_Subnet` (Gateway: 10.1.1.1) |
| Tenant B | `openstack network create --project TenantB --enable --internal --provider-network-type=vxlan TenantB_Network` | `openstack subnet create --project TenantB --subnet-range 10.2.2.0/24 … --network TenantB_Network TenantB_Subnet` (Gateway: 10.2.2.1) |
Virtual routers were created for each tenant to facilitate communication between their internal networks and the shared external network.
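For Tenant A this amounts to three commands (the router name follows the study's naming convention and is an assumption):

```shell
# Create the router, attach it to the external network, and plug in
# the tenant subnet; Tenant B is configured analogously
openstack router create --project TenantA TenantA_Router
openstack router set --external-gateway External_Network TenantA_Router
openstack router add subnet TenantA_Router TenantA_Subnet
```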
After these steps, instances were launched using the Horizon dashboard, and Floating IPs were associated with them (e.g., 192.168.13.215 to VM_A1) to provide external reachability via DNAT (Destination Network Address Translation) on the virtual router.
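The same association can be done from the CLI; requesting the specific address is optional and shown here only to match the study's example:

```shell
# Allocate a floating IP from the external network and bind it to VM_A1
openstack floating ip create --floating-ip-address 192.168.13.215 External_Network
openstack server add floating ip VM_A1 192.168.13.215
```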
Five scenarios were tested to analyze traffic behavior using Wireshark, focusing on the roles of the Linux Bridge, OVS Integration Bridge (Br-int), OVS Tunnel Bridge (Br-tun), and Router Namespace.
Test: Ping between VM_A1 (10.1.1.119) and VM_A2 (10.1.1.148) in the same tenant and on the same Compute1 node.
Flow: The packet is forwarded from the VM’s virtual interface (TAP) to the Linux Bridge (where security-group rules are applied) and then to Br-int. Since the destination is local, the destination MAC is resolved (via ARP if needed) and Br-int switches the frame directly to VM_A2 through its Linux Bridge and TAP interface.
Observation: The packet did not leave the Compute1 node and was not encapsulated with VXLAN, UDP, or IP headers.
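The bridges along this path can be inspected directly on Compute1:

```shell
# OVS bridges and their ports: br-int (integration) and br-tun (tunnel)
ovs-vsctl show
# Per-instance Linux bridges (qbr...) holding the TAP interfaces
brctl show
```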
Test: Ping between VM_A1 (Compute1) and VM_A3 (Compute2) in the same tenant.
Flow: The packet travels from VM_A1’s TAP interface through the Linux Bridge and Br-int to Br-tun on Compute1, where it is encapsulated with the VXLAN header (Tenant A’s VNI), a UDP header (destination port 4789), and an outer IP header (source VTEP 172.16.10.101, destination VTEP 172.16.10.102). The underlay tunnel network delivers it to Compute2, where Br-tun decapsulates it and Br-int forwards the original frame through the Linux Bridge and TAP interface to VM_A3.
Observation: The packet was fully encapsulated and routed via the VXLAN tunnel between the compute nodes.
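This encapsulation can be observed on the tunnel interface itself (interface name from the environment table; 4789 is the IANA VXLAN port):

```shell
# Capture VXLAN-encapsulated traffic on Compute1's tunnel interface;
# -nn suppresses name resolution so the VTEP IPs appear as-is
tcpdump -i ens34 -nn udp port 4789
```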
Test: Ping from VM_A1 (Compute1) to Google DNS (8.8.8.8).
Flow: The packet is VXLAN-encapsulated at Compute1’s Br-tun and tunneled to the Network Node, where it is decapsulated and passed through Br-int into the router namespace. The virtual router performs SNAT, replacing the instance’s internal source address with an external address, and forwards the packet onto the external network toward 8.8.8.8; the reply follows the reverse path.
Observation: Communication with the external network requires VXLAN tunneling to the Network Node, where the virtual router performs SNAT for outbound traffic.
Test: SSH connection from an external network Workstation to VM_A1 using its Floating IP (192.168.13.215).
Flow: The SSH packet arrives at the Network Node’s external interface addressed to the Floating IP (192.168.13.215). The router namespace performs DNAT, rewriting the destination to VM_A1’s internal address (10.1.1.119); the packet is then VXLAN-encapsulated at Br-tun, tunneled to Compute1, decapsulated there, and delivered through Br-int and the Linux Bridge to VM_A1.
Observation: The router on the Network Node is responsible for DNAT, mapping the Floating IP to the internal IP before the packet is VXLAN tunneled to the correct compute node.