
Open vSwitch Overview

Albert Hui

Open vSwitch (OVS) is an open-source software-defined networking solution that delivers data center infrastructure-as-a-service functionality for today’s cloud-based paradigms. OVS was built on Stanford University’s OpenFlow project. It functions as both a router and a switch, and is often referred to as a multilayer switch because it can examine content across Layers 2 through 7 of the Open Systems Interconnection (OSI) reference model. OVS was designed for dynamic, multi-server, heterogeneous-hypervisor virtualized environments, making it easier to manage the network stack on virtualized infrastructure. OVS is supported on Linux, FreeBSD, NetBSD, and Windows operating systems and offers built-in default switch support for ESX and XenServer. Additionally, the Data Plane Development Kit (DPDK) provides a user-level library interface, which is discussed in a later section. We will now examine the key architectural features of the current stable release of OVS, 2.9.0.

Open vSwitch Architecture

[Figure: Open vSwitch architecture diagram]

OVS’s components include OpenFlow and the Open vSwitch Database (OVSDB). As the architecture diagram above shows, Open vSwitch manages packets as flows, which allows for elastic network configurations. A flow can be identified by any combination of VLAN ID, input port, Ethernet source/destination addresses, IP source/destination addresses, and TCP/UDP source and destination ports. Packets that do not match an existing flow are sent to the controller, which determines the action for the flow, such as forwarding to one or more ports, port mirroring, encapsulating and forwarding to the controller, or dropping the packet. The packets are then returned to the data path, and subsequent packets of the flow are handled by the data path directly.
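As a small, hedged illustration of flow matching (assuming a bridge named ovs-br0, which we create later in this tutorial, with OpenFlow ports 1 and 2), a flow that forwards HTTP traffic from a given source address could be added and inspected with ovs-ofctl:

# Match TCP traffic from 10.0.0.2 arriving on port 1 and destined to TCP port 80, and send it out port 2
$ sudo ovs-ofctl add-flow ovs-br0 "priority=100,in_port=1,tcp,nw_src=10.0.0.2,tp_dst=80,actions=output:2"

# List the flows installed on the bridge, including packet and byte counters
$ sudo ovs-ofctl dump-flows ovs-br0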

Highlighted OVS Features

OVS supports a wide range of networking switch features and functions such as:
– Native IPv4 and IPv6 addressing
– Link aggregation (LACP, IEEE 802.1AX-2008) and VLAN tagging (802.1Q); see the example after this list
– NFV/VNF support: management paradigms in which network services such as firewalling, NAT, DNS, and caching are run in software for consolidation
– Open Virtual Network (OVN), the virtual networking layer introduced with OVS 2.6
– OpenStack Neutron integration via networking-ovn
– Network ACLs and distributed L3 routing for IPv4 and IPv6, with internal routing distributed on the hypervisor
– ARP/ND suppression
– OVN flow caching and TTL decrement
– Built-in support for NAT, load balancing and DHCP services
– Support for cloud technologies such as Kubernetes, Docker, and OpenStack
– Features a built-in DHCP server as part of the OVN agent
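As a brief sketch of two of the features above (the interface names eth1, eth2, and vnet0 are placeholders for your own physical NICs and VM interfaces), a bonded uplink and a VLAN-tagged access port can be added to a bridge with ovs-vsctl:

# Aggregate two physical NICs into an LACP bond on the bridge (IEEE 802.1AX)
$ sudo ovs-vsctl add-bond ovs-br0 bond0 eth1 eth2 lacp=active

# Attach a VM interface as an 802.1Q access port on VLAN 100
$ sudo ovs-vsctl add-port ovs-br0 vnet0 tag=100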

For additional details, please click on the links provided in the references section.

Software Defined Networking and Network Virtualization

Software-defined networking (SDN) separates the control plane from the data plane. The control plane makes forwarding and routing decisions, while the data plane carries out the actual packet forwarding. Separating control from forwarding makes network control programmable and abstracts the forwarding layer, which allows easier portability to new hardware and software platforms.
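As a minimal sketch of this separation (the controller address 192.0.2.10 is a placeholder, and ovs-br0 is the bridge we create later in this tutorial), an OVS bridge can be pointed at an external OpenFlow controller so that flow decisions are made outside the switch:

# Hand control of the bridge to an external OpenFlow controller listening on TCP port 6653
$ sudo ovs-vsctl set-controller ovs-br0 tcp:192.0.2.10:6653

# Confirm which controller the bridge is using
$ sudo ovs-vsctl get-controller ovs-br0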

Additionally, OVS functions as the edge of the overlay network which operates on top of physical networks within a data center. OVS also abstracts network connectivity that has traditionally been delivered via hardware, enabling network virtualization. Network virtualization (NV) encompasses virtualized L4 through L7 services such as load balancing and firewalling. The ability to scale and adjust to the required resource demands meets the elastic requirements of cloud computing.
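To illustrate the overlay idea (a sketch only; the address 192.0.2.2 is a placeholder for another hypervisor’s tunnel endpoint IP), a VXLAN tunnel port can be added to a bridge so that virtual machine traffic is carried over the physical network:

# Create a VXLAN tunnel port whose traffic is encapsulated and sent to the remote hypervisor
$ sudo ovs-vsctl add-port ovs-br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.0.2.2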

The Data Plane Development Kit (DPDK) is a cross-platform set of libraries and drivers for fast, user-level packet processing with hardware offload support. It is designed to minimize the number of CPU cycles required for fast send and receive operations. The performance gains achieved through the DPDK interface are a result of bypassing the kernel networking stack. DPDK was designed for specific network applications such as network function virtualization (NFV) and is also used in mixed Windows and Linux Kubernetes cluster deployments.
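A minimal sketch of pairing DPDK with OVS, assuming OVS was built with DPDK support and that a NIC at PCI address 0000:01:00.0 (a placeholder) has been bound to a DPDK-compatible driver:

# Tell ovs-vswitchd to initialize its DPDK support
$ sudo ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Create a bridge that uses the userspace (netdev) datapath instead of the kernel datapath
$ sudo ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev

# Attach the DPDK-managed NIC to the bridge
$ sudo ovs-vsctl add-port br-dpdk p0 -- set Interface p0 type=dpdk options:dpdk-devargs=0000:01:00.0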

An interesting feature of OVS is its support for the Open Virtual Network (OVN) architecture, an abstraction for virtual networks. OVN allows OVS to integrate with cloud management systems such as OpenStack, and it can also function as a gateway that tunnels traffic bidirectionally between the overlay network and physical Ethernet ports.
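As a hedged sketch of the OVN abstraction (assuming the OVN central services and ovn-controller are installed and running; the switch and port names, MAC, and IP are illustrative), a logical switch and port are created with ovn-nbctl:

# Create a logical switch in the OVN northbound database
$ sudo ovn-nbctl ls-add sw0

# Add a logical port and assign it a MAC and IP address
$ sudo ovn-nbctl lsp-add sw0 sw0-port1
$ sudo ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01 192.168.0.10"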

The objective of this tutorial is to use Open vSwitch on Ubuntu 16.04 64-bit and create a network bridge to connect Linux KVM virtual machines.

1. Perform a new Ubuntu install (optional step)

2. Install Open vSwitch and the KVM packages

$ sudo apt-get -y install openvswitch-switch qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

3. Let’s set up KVM to use OVS as the bridge

We verify that the KVM installation is working.

$ sudo virsh list --all

4. We will now create an OVS bridge to which the running KVM virtual machines will be connected. This will allow the KVM virtual machines to be associated with the internal OVS network.

NOTE: Please be careful when executing the next set of instructions, as they may cause you to lose your connection if you’re remotely connected to your server environment. It’s recommended to experiment with Open vSwitch within a virtual machine testing environment.

First, let’s disable NetworkManager, since it is not compatible with Open vSwitch. We will enable classic networking as the default.
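A sketch of one way to do this on Ubuntu 16.04 (only needed if NetworkManager is actually installed, e.g. on desktop installs; server installs typically use classic ifupdown networking already):

# Stop NetworkManager and prevent it from starting at boot
$ sudo systemctl stop NetworkManager && sudo systemctl disable NetworkManager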

Initialize the OVS database for the initial startup:

$ sudo ovs-vsctl --no-wait init

Let’s start the Open vSwitch daemon and enable it at boot:

$ sudo systemctl restart openvswitch-switch && sudo systemctl enable openvswitch-switch

Let’s create an Open vSwitch Bridge and verify that the bridge has been created.

$ sudo ovs-vsctl add-br ovs-br0
 $ sudo ip addr
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
 …
 4: ovs-br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether 9e:39:f8:46:eb:46 brd ff:ff:ff:ff:ff:ff

We can now display the created bridge interface properties.

$ sudo ovs-vsctl list bridge
 _uuid : 46f8399e-9d46-46eb-b015-e0f80a4429cd
 auto_attach : []
 controller : []
 datapath_id : "00009e39f846eb46"
 datapath_type : ""
 datapath_version : ""
 external_ids : {}
 fail_mode : []
 flood_vlans : []
 flow_tables : {}
 ipfix : []
 mcast_snooping_enable: false
 mirrors : []
 name : "ovs-br0"
 netflow : []
 other_config : {}
 ports : [915e6628-e720-439c-9e35-37bc8ad69fb6]
 protocols : []
 rstp_enable : false
 rstp_status : {}
 sflow : []
 status : {}
 stp_enable : false

5. Then, create a KVM network for the OVS bridge and connect it to the KVM virtual machine.

Let’s create a new KVM network configuration:

$ cat << EOF > ovs-network.xml
<network>
  <name>ovs-bridgenet</name>
  <forward mode='bridge'/>
  <bridge name='ovs-br0'/>
  <virtualport type='openvswitch'/>
</network>
EOF

We will define the network, start it, and enable it to be auto-started on host boot using the following commands:

$ sudo virsh net-define ovs-network.xml

Network ovs-bridgenet defined from ovs-network.xml

$ sudo virsh net-start ovs-bridgenet

Network ovs-bridgenet started

$ sudo virsh net-autostart ovs-bridgenet

Network ovs-bridgenet marked as autostarted

$ sudo virsh net-info ovs-bridgenet
 Name: ovs-bridgenet
 UUID: e611f384-2e9a-4669-ac5f-447533edc3a0
 Active: yes
 Persistent: yes
 Autostart: yes
 Bridge: ovs-br0

6. Now we can install the virt-manager graphical interface for creating KVM virtual machines. For a local installation, use the following command:

$ sudo apt-get install -y virt-manager

For a remote installation, we need to install some additional packages:

$ sudo apt-get install -y virt-manager ssh-askpass-gnome --no-install-recommends

$ sudo systemctl restart virtlockd.service && sudo systemctl enable virtlockd.service
 $ sudo systemctl restart virtlockd.socket && sudo systemctl enable virtlockd.socket
 $ sudo systemctl restart virtlogd.service && sudo systemctl enable virtlogd.service
 $ sudo systemctl restart virtlogd.socket && sudo systemctl enable virtlogd.socket

Finally, add your user (here, sysop) to the libvirtd group so it can manage virtual machines:

$ sudo usermod -a -G libvirtd sysop

7. We now launch virt-manager from Applications -> System Tools -> Virtual Machine Manager, or from the command line: sudo virt-manager. We will use Ubuntu Core for our KVM guest for demonstration purposes.

$ nice wget http://cdimage.ubuntu.com/ubuntu-core/16/stable/current/ubuntu-core-16-amd64.img.xz
 $ unxz ubuntu-core-16-amd64.img.xz

8. Create a new KVM VM. In the network selection step of the new virtual machine creation wizard, select ovs-bridgenet as the network.

9. Select ‘Finish’ to complete the VM creation. Thereafter, proceed to complete the guest VM installation once the virtual machine has launched.

We will now set up static networking on the host and guest. For demonstration purposes, we will use the IPv4 address 10.0.0.1 with netmask 255.255.255.0 for the Open vSwitch host using the following command:

$ sudo ifconfig ovs-br0 10.0.0.1 netmask 255.255.255.0 up
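Note that an address set with ifconfig does not survive a reboot. A hedged sketch of making it persistent on Ubuntu 16.04 using the ifupdown integration shipped with the openvswitch-switch package (the file name is illustrative):

# /etc/network/interfaces.d/ovs-br0.cfg
allow-ovs ovs-br0
iface ovs-br0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    ovs_type OVSBridge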

For the KVM VM, we will need to configure the network adaptor using a similar command:

$ sudo ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up

10. We can now test connectivity between the guest and the host via Open vSwitch by pinging the host from within the KVM VM.

$ sudo ping -c 5 10.0.0.1
 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
 64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.118 ms
 64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.101 ms
 64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.121 ms
 64 bytes from 10.0.0.1: icmp_seq=5 ttl=64 time=0.134 ms
 --- 10.0.0.1 ping statistics ---
 5 packets transmitted, 5 received, 0% packet loss, time 4090ms
 rtt min/avg/max/mdev = 0.049/0.104/0.134/0.031 ms

Conclusion

OVS is a versatile SDN framework that provides not only switch-related functionality but also support for a wide range of industry-standard protocols and network features. The suite of development tools and related utilities provided by OVS makes it well suited to today’s demanding cloud computing challenges.

References and Links

https://www.ubuntu.com/containers/lxc

http://www.openvswitch.org/features/

http://www.openvswitch.org//support/dist-docs/ovn-architecture.7.html

http://www.openvswitch.org/support/dist-docs/

http://docs.openvswitch.org/en/latest/faq/issues/

https://enigma.usenix.org/sites/default/files/nsdi15_full_proceedings_interior.pdf#page=125

https://software.intel.com/en-us/articles/dpdk-performance-optimization-guidelines-white-paper

https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_mobile_networks_using_network_functions_virtualization/performance_and_optimization#figure16_caption

http://www.openvswitch.org/support/boston2017/0900-ovn-on-windows.pdf

https://linuxcontainers.org/lxc/getting-started/

Please refer to the “Talk & Presentations” section at http://www.openvswitch.org/ for more conference talks.

April 17, 2018
