Multinode OpenStack with CentOS in VirtualBox with GNS3, Part 1

In my last post, I did a simple All In One installation. Now we are moving to the next step and creating a multinode installation where I have the following three nodes.

  • Controller hosting neutron, cinder, glance, horizon and keystone
  • Compute1 hosting nova compute
  • Compute2 hosting nova compute

The steps to put all this together are long, so I am splitting this post into part 1 where I put together the infrastructure and part 2 where we will install OpenStack on the nodes.

As an aside, QEMU images in GNS3 run on the same fundamental KVM virtualization that tools such as OpenStack and oVirt are built on, and OpenStack itself is open source. So who knows, there could well be an opportunity for GNS3 to add VIRL routers to GNS3 topologies via OpenStack, but that will require lots of investigation.

The following instructions are for installing a multinode OpenStack Icehouse on CentOS 6.6 in VirtualBox and GNS3. For CentOS, I started with the CentOS 6.6 Minimal DVD to keep the footprint small. I am also using GNS3 version 1.3.2.

I created three Microsoft Loopback interfaces on my host PC and renamed them Loopback1, Loopback2, and Loopback3. To give the nodes access to my network/Internet, I created a bridge on my PC containing my PC’s Ethernet port, Loopback1, and Loopback2. Loopback3 will be my management access from my PC into the OpenStack management network, so in Windows, configure Loopback3 with IP address 192.168.0.100/24.

Using VirtualBox, create 3 VMs.

Controller:

Compute1:

Compute2:

Once you have finished creating the VMs and installing CentOS, eth0 on each VM will carry that server’s public IP. Record these addresses, as you will SSH into the nodes to configure them.

The following are the interfaces that will be used on each node in the multinode setup.

Controller: (4 interfaces)

Compute1: (3 interfaces)

Compute2: (3 interfaces)

Using GNS3 and some GNS3 Ethernet switches, we will create the following topology.

I could have done all this with fewer switches, but I used a separate switch for each segment to make it clearer what is happening, and also because GNS3 isn’t very good at drawing the link lines in a tidy pattern. Here are the switches that will be used.

  • Tenant VLAN: This will be the switch that connects to the trunk ports on the nodes that use VLANs for tenant traffic isolation.
  • Mgmt: Ethernet network for OpenStack to use for managing all the nodes.
  • Tenant Public: Interface used by the tenants for internet access with floating IP addresses.
  • Server Public: Interface on each node used to manage that node (software installation, SSH access, and so on).

Add the switches to the GNS3 canvas. Next, configure each switch as follows.

  • Tenant VLAN: Configure all ports as type ‘dot1q’.
  • Mgmt: Configure all ports as type ‘access’ with VLAN 1.
  • Tenant Public: Configure all ports as type ‘access’ with VLAN 1.
  • Server Public: Configure all ports as type ‘access’ with VLAN 1.

In GNS3, import the three VirtualBox VMs. In the GNS3 preferences, under the VirtualBox VMs page, click the New button and add each of the VMs that you created in VirtualBox. For each VM, also configure its settings as follows.
Controller:

Compute1:

Compute2:

Add the three OpenStack nodes to the GNS3 canvas.

Add two clouds to the GNS3 canvas.

  • Name one cloud Internet and add Loopback1 and Loopback2 to its NIO Ethernet list.
  • Name the other cloud Mgmt Server and add Loopback3 to the NIO Ethernet list.

Now, using the GNS3 link tool, create the links from the switches to the Ethernet ports on the nodes.

Tenant VLAN switch:

Mgmt switch:

Tenant Public switch:

Server Public switch:

At this point, all the nodes, switches, and links should be created, and the topology should look something like the diagram above.

Before we install OpenStack, we need to prepare the VMs. Start all the VMs from GNS3 (press the Play button) so that all the interfaces get added and we have connectivity between the VMs.

On the controller, do the following.

Make sure the system is up to date.
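For example, on CentOS 6.6 (run as root; this assumes the node already reaches the Internet through its public interface):

```
yum -y update
```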

Set up the Mgmt interface
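A hedged sketch of the interface file, assuming eth1 is the controller's management NIC and 192.168.0.10 is the address you want on the 192.168.0.0/24 management network (both the device name and the IP are examples; use whatever matches your own layout):

```
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
# Management NIC -- device name and address are examples only
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.10
NETMASK=255.255.255.0
EOF
```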

Update /etc/hosts with the hostnames of all the nodes.
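For example, using the illustrative management addresses from the previous step (adjust the IPs and hostnames to match your own nodes):

```
cat >> /etc/hosts <<'EOF'
192.168.0.10  controller
192.168.0.11  compute1
192.168.0.12  compute2
EOF
```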

Restart networking to pick up all the settings.
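On CentOS 6 this is simply:

```
service network restart
```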

Add OpenStack repo to yum
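A sketch using the RDO release package for Icehouse; the exact URL has moved over the years, so verify it against the RDO project before running this:

```
# Illustrative URL for the Icehouse RDO release RPM; confirm the current location first.
yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
```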

Disable SELinux
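For example, to switch SELinux off immediately and keep it off across reboots:

```
setenforce 0                                                   # permissive for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # disabled after the next reboot
```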

Now we will do a similar setup on Compute node 1.

Make sure the system is up to date.

Set up the Mgmt interface

Update /etc/hosts with the hostnames of all the nodes.

Restart networking to pick up all the settings.

Add OpenStack repo to yum

Disable SELinux
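The commands mirror the controller's; only the management address differs. A condensed, hedged sketch, again assuming eth1 is the management NIC and using 192.168.0.11 as an example address for Compute1:

```
yum -y update

cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
# Management NIC -- device name and address are examples only
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.11
NETMASK=255.255.255.0
EOF

# Add the same controller/compute1/compute2 entries to /etc/hosts as on the controller,
# then restart networking and add the RDO Icehouse repo (URL illustrative).
service network restart
yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

# Disable SELinux now and across reboots.
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```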

And finally, we will do a similar setup on Compute node 2.

Make sure the system is up to date.

Set up the Mgmt interface

Update /etc/hosts with the hostnames of all the nodes.

Restart networking to pick up all the settings.

Add OpenStack repo to yum

Disable SELinux
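Compute2 is prepared exactly like Compute1 (update, hosts file, RDO repo, SELinux); the only difference in this sketch is the example management address, 192.168.0.12:

```
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
# Management NIC -- device name and address are examples only
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.12
NETMASK=255.255.255.0
EOF
service network restart
```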

At this point, all the nodes are ready to have OpenStack installed. To verify, you should be able to ping each node’s management address from every other node, using both the IP address and the hostname.
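For example, from the controller, using the illustrative hostnames and addresses from the /etc/hosts entries above:

```
ping -c 3 compute1
ping -c 3 compute2
ping -c 3 192.168.0.11
```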

That’s it for this post. In part 2 of the post we’ll install OpenStack on all the nodes.

This post ‘Multinode OpenStack with CentOS in Virtual Box with GNS3 Part 1’ first appeared on https://techandtrains.com/.

Published on 7 May 2013 · Filed in Tutorial · 1411 words (estimated 7 minutes to read)

I’m back with another “how to” article on Open vSwitch (OVS), this time taking a look at using GRE (Generic Routing Encapsulation) tunnels with OVS. OVS can use GRE tunnels between hosts as a way of encapsulating traffic and creating an overlay network. OpenStack Quantum can (and does) leverage this functionality, in fact, to help separate different “tenant networks” from one another. In this write-up, I’ll walk you through the process of configuring OVS to use a GRE tunnel to build an overlay network between two hypervisors running KVM.

Naturally, any sort of “how to” such as this always builds upon the work of others. In particular, I found a couple of Brent Salisbury’s articles (here and here) especially useful.

This process has 3 basic steps:

  1. Create an isolated bridge for VM connectivity.

  2. Create a GRE tunnel endpoint on each hypervisor.

  3. Add a GRE interface and establish the GRE tunnel.

These steps assume that you’ve already installed OVS on your Linux distribution of choice. I haven’t explicitly done a write-up on this, but there are numerous posts from a variety of authors (in this regard, Google is your friend).

We’ll start with an overview of the topology, then we’ll jump into the specific configuration steps.

Reviewing the Topology

The graphic below shows the basic topology of what we have going on here:

We have two hypervisors (CentOS 6.3 with KVM, in my case), both running OVS (an older version, 1.7.1). Each hypervisor has one OVS bridge with at least one physical interface attached (shown as br0 connected to eth0 in the diagram). As part of this process, you’ll create the other internal interfaces (the tep and gre interfaces), as well as the second, isolated bridge to which the VMs will connect. You’ll then create a GRE tunnel between the hypervisors and test VM-to-VM connectivity.

Creating an Isolated Bridge

The first step is to create the isolated OVS bridge to which the VMs will connect. I call this an “isolated bridge” because the bridge has no physical interfaces attached. (Side note: this idea of an isolated bridge is fairly common in OpenStack and NVP environments, where it’s usually called the integration bridge. The concept is the same.)

The command is very simple, actually:
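(A sketch of that command, with br2 as the bridge name used in the rest of this article.)

```
ovs-vsctl add-br br2
```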

Yes, that’s it. Feel free to substitute a different name for br2 in the command above, if you like, but just make note of the name as you’ll need it later.

To make things easier for myself, once I’d created the isolated bridge I then created a libvirt network for it so that it was dead-easy to attach VMs to this new isolated bridge.
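A hedged sketch of such a libvirt network definition, assuming the bridge is br2 and using ovs-isolated as a purely hypothetical network name; the openvswitch virtualport type is what tells libvirt the bridge is an OVS bridge:

```
cat > /tmp/ovs-isolated.xml <<'EOF'
<network>
  <name>ovs-isolated</name>
  <forward mode='bridge'/>
  <bridge name='br2'/>
  <virtualport type='openvswitch'/>
</network>
EOF
virsh net-define /tmp/ovs-isolated.xml
virsh net-start ovs-isolated
virsh net-autostart ovs-isolated
```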

Configuring the GRE Tunnel Endpoint

The GRE tunnel endpoint is an interface on each hypervisor that will, as the name implies, serve as the endpoint for the GRE tunnel. My purpose in creating a separate GRE tunnel endpoint is to separate hypervisor management traffic from GRE traffic, thus allowing for an architecture that might leverage a separate management network (which is typically considered a recommended practice).

To create the GRE tunnel endpoint, I’m going to use the same technique I described in my post on running host management traffic through OVS. Specifically, we’ll create an internal interface and assign it an IP address.

To create the internal interface, use this command:
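(Shown with br2 and tep0, the names discussed just below.)

```
ovs-vsctl add-port br2 tep0 -- set interface tep0 type=internal
```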

In your environment, you’ll substitute br2 with the name of the isolated bridge you created earlier. You could also use a different name than tep0. Since this name is essentially for human consumption only, use what makes sense to you. Since this is a tunnel endpoint, tep0 made sense to me.

Once the internal interface is established, assign it an IP address using ifconfig or ip, whichever you prefer. I’m still getting used to using ip (more on that in a future post, most likely), so I tend to use ifconfig, like this:
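(Using 192.168.200.20/24 as a purely illustrative tunnel endpoint address; see the note on addressing that follows.)

```
ifconfig tep0 192.168.200.20 netmask 255.255.255.0 up
```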

Obviously, you’ll want to use an IP addressing scheme that makes sense for your environment. One important note: don’t use the same subnet as you’ve assigned to other interfaces on the hypervisor, or else you can’t control which interface the GRE tunnel will originate from (or terminate on). This is because the Linux routing table on the hypervisor controls how the traffic is routed. (You could use source routing, a topic I plan to discuss in a future post, but that’s beyond the scope of this article.)

Repeat this process on the other hypervisor, and be sure to make note of the IP addresses assigned to the GRE tunnel endpoint on each hypervisor; you’ll need those addresses shortly. Once you’ve established the GRE tunnel endpoint on each hypervisor, test connectivity between the endpoints using ping or a similar tool. If connectivity is good, you’re clear to proceed; if not, you’ll need to resolve that before moving on.

Establishing the GRE Tunnel

By this point, you’ve created the isolated bridge, established the GRE tunnel endpoints, and tested connectivity between those endpoints. You’re now ready to establish the GRE tunnel.

Use this command to add a GRE interface to the isolated bridge on each hypervisor:
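(Again with br2 and gre0; remote_ip should be the tep0 address of the other hypervisor, so 192.168.200.21 below is just an example.)

```
ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.200.21
```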

Substitute the name of the isolated bridge you created earlier here for br2 and feel free to use something other than gre0 for the interface name. I think using gre as the base name for the GRE interfaces makes sense, but run with what makes sense to you.

Once you repeat this command on both hypervisors, the GRE tunnel should be up and running. (Troubleshooting the GRE tunnel is one area where my knowledge is weak; anyone have any suggestions or commands that we can use here?)

Testing VM Connectivity

As part of this process, I spun up an Ubuntu 12.04 server image on each hypervisor (using virt-install as I outlined here), attached each VM to the isolated bridge created earlier on that hypervisor, and assigned each VM an IP address from an entirely different subnet than the physical network was using (in this case, 10.10.10.x).
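As a hedged illustration, a guest can be attached to that network with virt-install roughly like this; the image path, sizes, ISO location, and the ovs-isolated network name are all assumptions carried over from the earlier sketch:

```
# Create an Ubuntu guest attached to the hypothetical ovs-isolated libvirt network.
virt-install --name web01 --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/web01.img,size=8 \
  --network network=ovs-isolated,model=virtio \
  --cdrom /path/to/ubuntu-12.04-server-amd64.iso \
  --graphics vnc
```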

Here’s the output of the route -n command on the Ubuntu guest, to show that it has no knowledge of the “external” IP subnet—it knows only about its own interfaces:

Similarly, here’s the output of the route -n command on the CentOS host, showing that it has no knowledge of the guest’s IP subnet:

In my case, VM1 (named web01) was given 10.10.10.1; VM2 (named web02) was given 10.10.10.2. Once I went through the steps outlined above, I was able to successfully ping VM2 from VM1, as you can see in this screenshot:

(Although it’s not shown here, connectivity from VM2 to VM1 was obviously successful as well.)

“OK, that’s cool, but why do I care?” you might ask.

In this particular context, it’s a bit of a science experiment. However, if you take a step back and begin to look at the bigger picture, then (hopefully) something starts to emerge:

  • We can use an encapsulation protocol (GRE in this case, but it could have just as easily been STT or VXLAN) to isolate VM traffic from the physical network and from other VM traffic. (Think multi-tenancy.)

  • While this process was manual, think about some sort of controller (an OpenFlow controller, perhaps?) that could help automate this process based on its knowledge of the VM topology.

  • Using a virtualized router or virtualized firewall, I could easily provide connectivity into or out of this isolated (encapsulated) private network. (This is probably something I’ll experiment with later.)

  • What if we wrapped some sort of orchestration framework around this, to help deploy VMs, create networks, add routers/firewalls automatically, all based on the customer’s needs? (OpenStack Networking, anyone?)

Anyway, I hope this is helpful to someone. As always, I welcome feedback and suggestions for improvement, so feel free to speak up in the comments below. Vendor disclosures, where appropriate, are greatly appreciated. Thanks!
