OpenStack in a local Vagrant box

This post is about running OpenStack inside a Vagrant box, which can be useful if you need to develop provisioning tools, like Terraform scripts, that use the OpenStack API.

I have a bunch of Terraform scripts that create resources on the AWS public cloud, and now I'd like to provision the same resources on an OpenStack private cloud. Of course, I don't want to touch any production environment just to run some experiments, so I came up with the idea of installing OpenStack locally in a Vagrant box on my laptop. Let's see how I did it.

Creating a Virtual Machine

You can use this Vagrantfile to build a box:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|

  config.vm.box = "centos/7"

  machines = {
    'node1.example.dd'    => { :ip => '10.1.0.10'},
  #  'node2.example.dd'    => { :ip =>'10.1.0.12'},
  }

  config.hostmanager.enabled = true
  config.hostmanager.manage_host = true
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.hostmanager.include_offline = true

  config.ssh.pty = true

  machines.each do |hostname, attrs|
    config.vm.define hostname do |machine|
      machine.vm.hostname = hostname
      machine.vm.network :private_network, :ip => attrs[:ip]

      machine.vm.provider "virtualbox" do |v|
        v.memory = "4096"
        v.cpus = "2"
      end

    end
  end
end

As you can see, we use the official CentOS 7 box to create a virtual machine.

Note that in this tutorial I'm using only one OpenStack node, but if you need more you can add a line like the following:

'node2.example.dd'    => { :ip =>'10.1.0.12'},

Install Vagrant Host Manager plugin to manage /etc/hosts file:

$ vagrant plugin install vagrant-hostmanager

Now bring it up with:

$ vagrant up

Once the virtual machine is up you can ssh to it:

$ vagrant ssh node1.example.dd

Network Settings

Before installing OpenStack, you need to set the network adapter of the virtual machine to allow all traffic (promiscuous mode). First, stop the Vagrant machine with:

$ vagrant halt

Then open the VirtualBox GUI and select your virtual machine; its name is something like dd-openstack-vagrant_node1exampledd_1234567890123_123. Click on Settings -> Network -> Adapter 2 -> Advanced and set Promiscuous Mode to "Allow All".
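
If you prefer to skip the GUI, the same setting can be applied with VBoxManage while the VM is halted; the machine name below is just the example from above, so look up the real one first:

$ VBoxManage list vms
$ VBoxManage modifyvm "dd-openstack-vagrant_node1exampledd_1234567890123_123" \
  --nicpromisc2 allow-all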

Now start your VM again with vagrant up and, inside the guest, disable NetworkManager and firewalld in favor of the legacy network service:

$ sudo systemctl disable firewalld
$ sudo systemctl stop firewalld
$ sudo systemctl disable NetworkManager
$ sudo systemctl stop NetworkManager
$ sudo systemctl enable network
$ sudo systemctl start network
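
Before moving on, it's worth checking that the private interface defined in the Vagrantfile is up with the expected address; on the CentOS 7 box it is normally eth1:

$ ip addr show eth1

It should report 10.1.0.10.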

Packstack Installer

To install OpenStack we will use Packstack from the RDO project. Since the Extras repository is enabled by default on CentOS, we can simply install the RPM that sets up the OpenStack repository:

$ sudo yum install -y centos-release-openstack-mitaka

Now update your installed packages:

$ sudo yum update -y

and install the Packstack installer:

$ sudo yum install -y openstack-packstack

Install OpenStack

First generate an answers file:

$ sudo packstack \
--provision-demo=n \
--os-neutron-ml2-type-drivers=vxlan,flat,vlan \
--gen-answer-file=packstack-answers.txt

Here, I skipped the provisioning of the demo resources and enabled the vxlan, flat, and vlan type drivers for the Neutron ML2 plugin.

Now you need to edit the packstack-answers.txt file to replace the default NAT address (10.0.2.15) with the virtual machine IP address defined in the Vagrantfile, which in our case is 10.1.0.10:

$ sudo sed -i -e 's:10.0.2.15:10.1.0.10:' packstack-answers.txt
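
A quick grep confirms the substitution; the first command should return 0 matches and the second several:

$ grep -c 10.0.2.15 packstack-answers.txt
$ grep -c 10.1.0.10 packstack-answers.txt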

Next, set the public and private interface names in the packstack-answers.txt file:

$ sudo vi packstack-answers.txt
...
CONFIG_NOVA_COMPUTE_PRIVIF=eth0
CONFIG_NOVA_NETWORK_PUBIF=eth1
CONFIG_NOVA_NETWORK_PRIVIF=eth0
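
If you'd rather not edit the file by hand, the same three values can be set with sed; this is just a sketch that assumes those keys are present in the generated answers file:

$ sudo sed -i \
  -e 's/^CONFIG_NOVA_COMPUTE_PRIVIF=.*/CONFIG_NOVA_COMPUTE_PRIVIF=eth0/' \
  -e 's/^CONFIG_NOVA_NETWORK_PUBIF=.*/CONFIG_NOVA_NETWORK_PUBIF=eth1/' \
  -e 's/^CONFIG_NOVA_NETWORK_PRIVIF=.*/CONFIG_NOVA_NETWORK_PRIVIF=eth0/' \
  packstack-answers.txt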

Packstack connects to the nodes over SSH as root, so generate a new SSH key pair and copy the public key to authorized_keys:

$ sudo su -
# ssh-keygen 
# cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
# chmod 400 /root/.ssh/authorized_keys
# exit

If you decided to use more than one node, then you have to copy the public key to all nodes with:

# ssh-copy-id root@node2.example.dd

Finally, you can start the OpenStack installation with:

$ sudo packstack --answer-file=packstack-answers.txt

... be patient, this is going to take a while. Once the installation is complete, you can log in to the OpenStack dashboard at http://10.1.0.10/dashboard/. The username is admin and the password can be found in the keystonerc_admin file in the /root directory of the node:

$ sudo cat /root/keystonerc_admin
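
The file is a small shell script that exports the admin credentials; it should look roughly like this (the exact variables and auth URL depend on the Packstack version, and the password is generated during installation):

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='<generated password>'
export OS_AUTH_URL=http://10.1.0.10:5000/v2.0
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne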

Bridged Networking

Now it's time to set up bridged networking, which allows us to reach the instances running in OpenStack from our laptop.

For this, we are going to use the OpenStack CLI tools directly on the command line, so to make things easier we need to source the admin credentials first:

$ sudo su -
# source keystonerc_admin

Now test your credentials,

# openstack server list

... server list should be empty.

You can now create a new bridge interface by adding the file /etc/sysconfig/network-scripts/ifcfg-br-ex with the following contents:

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.1.0.10
NETMASK=255.255.255.0
GATEWAY=10.1.0.1
DNS1=8.8.8.8
ONBOOT=yes

Also set up the eth1 interface for OVS bridging by editing /etc/sysconfig/network-scripts/ifcfg-eth1 and adding the following content:

TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
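
Note that the 10.1.0.10 address moves to br-ex, so ifcfg-eth1 should end up without an IPADDR line; assuming the file keeps its existing DEVICE and ONBOOT entries, the end result looks roughly like this:

DEVICE=eth1
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex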

Define a logical name, "extnet", for the external physical network segment:

# mkdir /etc/neutron/plugins/openvswitch
# openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  ovs bridge_mappings extnet:br-ex

Restart the network services

# service network restart
# service neutron-openvswitch-agent restart
# service neutron-server restart
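
After the restart you can sanity-check the Open vSwitch setup: eth1 should show up as a port of br-ex, and br-ex should now carry the 10.1.0.10 address.

# ovs-vsctl show
# ip addr show br-ex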

Instances Network

Now that we have a bridge, it's time to set up a network for our instances. For this, you need to run the commands shown below.

External Public Network

# neutron net-create public_network --provider:network_type flat \
  --provider:physical_network extnet --router:external --shared

External Public Subnet

# neutron subnet-create --name public_subnet --enable_dhcp=False \
  --allocation-pool=start=10.1.0.128,end=10.1.0.160 \
  --gateway=10.1.0.1 public_network 10.1.0.0/24 \
  --dns-nameservers list=true 8.8.8.8 4.2.2.2

Private Network

# neutron net-create private_network

Private Subnet

# neutron subnet-create --name private_subnet private_network 10.10.10.0/24

Router

# neutron router-create router1

Set the external network gateway for the router

# neutron router-gateway-set router1 public_network

Add an internal network interface to the router

# neutron router-interface-add router1 private_subnet

You should now have an OpenStack network topology that looks like this:

 [public_network]<-->(Router1)<-->[private_network]

You can check it in the dashboard at Projects -> Network -> Network Topology.
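
You can also verify the same topology from the command line:

# neutron net-list
# neutron subnet-list
# neutron router-port-list router1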

Launch an Instance

Before launching your first instance, you have to add the SSH key pair that will give you SSH access to the instance. Log in to the OpenStack dashboard as admin, browse to Projects -> Compute -> Access & Security -> Key Pairs, click the Import Key Pair button, and add the following (a CLI alternative is shown below):

  • Key Pair Name: node1_key
  • Public Key: copy and paste the contents of the ssh public key

You can get node1's SSH public key with:

# cat /root/.ssh/id_rsa.pub
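
If you prefer to stay on the command line, the same key can be imported with the openstack client (with the keystonerc_admin credentials still sourced):

# openstack keypair create --public-key /root/.ssh/id_rsa.pub node1_key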

Also, you have to create a few security group rules. Browse to Projects -> Compute -> Access & Security -> Security Groups, select 'default' and click on Manage Rules, then add rules for ICMP (Ingress/Egress) and SSH (Ingress).
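
The same rules can also be added with the neutron client; this is a sketch that assumes a single 'default' security group in the admin tenant:

# neutron security-group-rule-create --protocol icmp --direction ingress default
# neutron security-group-rule-create --protocol icmp --direction egress default
# neutron security-group-rule-create --protocol tcp --direction ingress \
  --port-range-min 22 --port-range-max 22 default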

As we installed OpenStack without the demo images, we cannot launch a new instance directly from the dashboard web interface yet, so let's do it with the OpenStack CLI tools.

The following command downloads 'cirros', a minimal Linux distribution, and creates a new qcow2 image from it:

# curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | glance \
  image-create --name='cirros' --container-format=bare --disk-format=qcow2
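
Once the upload finishes, the new image should be listed as active:

# openstack image list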

To create a new instance, you also need the private_network ID:

# openstack network list
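
If you want to capture the ID in a shell variable instead of copying it by hand, something like this should work with the openstack client:

# NET_ID=$(openstack network show private_network -f value -c id)

and then pass --nic net-id="$NET_ID" in the command below.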

Now you can launch a new instance with the following command:

# openstack server create --image="cirros" --flavor=m1.tiny \
  --key-name=node1_key --nic net-id="YOUR-PRIVATE-NETWORK-ID" \
  my_instance

Check your new instance status with

# openstack server list

Once your instance is 'ACTIVE', you have to assign a floating IP to it. From the dashboard, browse to Projects -> Compute -> Instances -> Actions -> Associate Floating IP, or use the CLI as shown below.
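
Alternatively, the floating IP can be allocated and associated from the command line; the exact address you get from the allocation pool may differ:

# neutron floatingip-create public_network
# nova floating-ip-associate my_instance 10.1.0.129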

You should be able to ping this floating IP address from node1 and from your laptop as well:

$ ping 10.1.0.129

Now you can SSH into my_instance directly from your laptop:

$ ssh cirros@10.1.0.129

You can find the cirros password by looking at the instance console.

Now, ... where was I with those terraform scripts?

Danilo

Credits: I'd like to thank German Moya for his suggestion about bridged networking.