Vnx-labo-openstack-3nodes-basic-liberty


VNX Openstack Liberty 3-nodes-basic laboratory

Introduction

This is an Openstack tutorial scenario designed to experiment with Openstack, a free and open-source software platform for cloud computing.

The scenario is basically the one described in the Openstack Liberty installation guide for Ubuntu. It is made of three virtual machines: a controller with networking capabilities and two compute nodes, all of them based on KVM. Optionally, a third compute node can be added once the scenario is started.

All virtual machines use Ubuntu 14.04.3 LTS and Openstack Liberty.

The scenario has been inspired by the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration but, instead of using Devstack to configure the Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario, following the Openstack installation recipes in http://docs.openstack.org/liberty/install-guide-ubuntu/


Requirements

To use the scenario you need a Linux computer (Ubuntu 14.04 or later recommended) with the VNX software installed. At least 4 GB of memory is needed to execute the scenario.

See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install

If already installed, update VNX to the latest version with:

vnx_update

To make startup faster, enable one-pass-autoconfiguration for KVM virtual machines in /etc/vnx.conf:

[libvirt]
...
one_pass_autoconf=yes

Check that KVM nested virtualization is enabled:

cat /sys/module/kvm_intel/parameters/nested
Y

If it is not enabled, see, for example, http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html for instructions on how to enable it.
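
For reference, on Intel hosts nested virtualization can usually be enabled by setting the kvm_intel module parameter and reloading the module (a minimal sketch; the file name is just an example, and all running KVM virtual machines must be stopped before unloading the module):

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel-nested.conf
modprobe -r kvm_intel
modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should now print Y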

Installation

Download the scenario with the virtual machine images included and unpack it:

wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_tutorial-liberty_3nodes_basic-v010-with-rootfs.tgz
vnx --unpack openstack_tutorial-liberty_3nodes_basic-v010-with-rootfs.tgz

Alternatively, you can download the much lighter version without the images and create the root filesystems from scratch on your computer:

wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_tutorial-liberty_3nodes_basic-v010.tgz
vnx --unpack openstack_tutorial-liberty_3nodes_basic-v010.tgz
cd openstack_tutorial-liberty_3nodes_basic-v010/filesystems
./create-kvm_ubuntu64-ostack-compute
./create-lxc_ubuntu64-ostack-controller

Starting the scenario

Start the scenario, configure it and load an example CirrOS image with:

cd openstack_tutorial-liberty_3nodes_basic-v010
vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v -t
vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v -x start-all
vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v -x load-img
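
As a quick sanity check once the scenario is up, the controller should answer from the host on both its management and external addresses (the addresses come from the XML specification below):

ping -c 2 10.0.0.11     # controller address on MgmtNet
ping -c 2 10.0.10.11    # controller address on ExtNet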


Figure 2: Openstack tutorial detailed topology


Once started, you can connect to the Openstack Dashboard (admin/xxxx) by starting a browser and pointing it to the controller's Horizon page. For example:

firefox 10.0.10.11/horizon

Access the Dashboard page "Project|Network|Network topology" and create a simple demo scenario inside Openstack with:

vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v -x create-demo-scenario

You should see the simple scenario as it is being created through the Dashboard.
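
You can also inspect the created resources from a shell on the controller; a minimal sketch, assuming the admin credentials script installed in step2:

source /root/bin/admin-openrc.sh
openstack server list
neutron net-list
neutron router-list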

Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 and vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).
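
For example, from the host (the floating IP may differ; the CirrOS 0.3.4 image uses the default user "cirros" with password "cubswin:)"):

ping -c 3 10.0.10.102
ssh cirros@10.0.10.102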

Finally, to allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the vnx_config_nat command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:

vnx_config_nat ExtNet eth0
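
If you prefer to set up the NAT manually instead, a minimal sketch using standard Linux tools (assuming eth0 is the host's public interface and the 10.0.10.0/24 ExtNet prefix used in the XML) would be:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE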

Stopping or releasing the scenario

To stop the scenario preserving the configuration and the changes made:

vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v --shutdown

To start it again use:

vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v --start

To stop the scenario destroying all the configuration and changes made:

vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v --destroy

To unconfigure the NAT, just execute:

vnx_config_nat -d ExtNet eth0

Adding a third compute node (compute3)

To add a third compute node to the scenario once it is started, you can use the VNX modify capability:

vnx -s openstack_tutorial-liberty_3nodes_basic --modify others/add-compute3.xml -v
vnx -s openstack_tutorial-liberty_3nodes_basic -v -x start-all -M compute3
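
To check that the new node has registered correctly, a quick check from a shell on the controller (using the admin credentials created in step2) could be:

source /root/bin/admin-openrc.sh
nova service-list      # compute3 should appear as an additional nova-compute service
neutron agent-list     # and its linuxbridge agent should be listed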

Once the new node has joined the scenario, you must use the "-s" option instead of "-f" to manage it (otherwise, the compute3 node will not be taken into account). For example:

vnx -s openstack_tutorial-liberty_3nodes_basic -v --destroy

Other useful information

To pack the scenario in a tgz file including the root filesystems use:

bin/pack-scenario --include-rootfs

To pack the scenario without the root filesystems, just delete the "--include-rootfs" parameter.
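
Note also that the start-all sequence used when starting the scenario is defined in the XML below as a <cmd-seq> that simply chains step1 to step6. If needed, an individual step can be re-executed, optionally restricted to a single node with the "-M" option; for example, to re-run the Nova step only on compute1:

vnx -f openstack_tutorial-liberty_3nodes_basic.xml -v -x step4 -M compute1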

XML specification of Openstack tutorial scenario

<?xml version="1.0" encoding="UTF-8"?>

<!--
~~~~~~~~~~~~~~~~~~
VNX Sample scenarios
~~~~~~~~~~~~~~~~~~

Name:        openstack_tutorial-liberty_3nodes_basic
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source 
             software platform for cloud-computing. The scenario is made of three virtual machines: a controller 
             and two compute nodes, all based on KVM. Optionally, a third compute 
             node can be added once the scenario is started.
             Openstack version used: Liberty.

Author:      David Fernandez (david@dit.upm.es)

This file is part of the Virtual Networks over LinuX (VNX) Project distribution. 
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) 

Departamento de Ingenieria de Sistemas Telematicos (DIT)
Universidad Politecnica de Madrid
SPAIN
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>openstack_tutorial-liberty_3nodes_basic</scenario_name>
    <automac/>
    <!--vm_mgmt type="none" /-->
    <vm_mgmt type="private" network="10.250.0.0" mask="24" offset="12">
       <host_mapping />
    </vm_mgmt> 
    <vm_defaults>
        <console id="0" display="no"/>
        <console id="1" display="yes"/>
    </vm_defaults>
    <cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq>
  </global>

  <net name="MgmtNet" mode="virtual_bridge" />
  <!--net name="TunnNet" mode="virtual_bridge" /!-->
  <net name="ExtNet"  mode="virtual_bridge" />
  <net name="virbr0"  mode="virtual_bridge" managed="no"/>

  <vm name="controller" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-controller</filesystem>
    <mem>4G</mem>
  <!--vm name="controller" type="lxc" arch="x86_64"-->
    <!--filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem!-->
    <!--console id="0" display="yes"/!-->
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.11/24</ipv4>
    </if>
    <if id="2" net="ExtNet">
      <ipv4>10.0.10.11/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec>

    <filetree seq="on_boot" root="/root/">conf/controller/bin</filetree>
    <exec seq="on_boot" type="verbatim">
        chmod +x /root/bin/*
    </exec>

    <!-- STEP 1: Basic services -->
    <filetree seq="step1" root="/etc/mysql/conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree>
    <filetree seq="step1" root="/etc/mysql/conf.d/">conf/controller/mongodb/mongodb.conf</filetree>
    <exec seq="step1" type="verbatim">
        service mysql restart
        service mongodb stop
        rm /var/lib/mongodb/journal/prealloc.*
        service mongodb start
        rabbitmqctl add_user openstack xxxx
        rabbitmqctl set_permissions openstack ".*" ".*" ".*" 
    </exec>

    <!-- STEP 2: Identity service -->
    <filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree>
    <filetree seq="step2" root="/etc/apache2/sites-available/">conf/controller/apache2/wsgi-keystone.conf</filetree>
    <filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree>
    <filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree>
    <exec seq="step2" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"
        su -s /bin/sh -c "keystone-manage db_sync" keystone
        #bash -c "su -s /bin/sh -c 'keystone-manage db_sync' keystone > /dev/null 2>&1"
        #keystone-manage db_sync

        echo "ServerName controller" >> /etc/apache2/apache2.conf
        ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
        service apache2 restart
        rm -f /var/lib/keystone/keystone.db

        export OS_TOKEN=ee173fc22384618b472e
        export OS_URL=http://controller:35357/v3
        export OS_IDENTITY_API_VERSION=3
        # Create endpoints
        openstack service create --name keystone --description "OpenStack Identity" identity
        openstack endpoint create --region RegionOne identity public http://controller:5000/v2.0
        openstack endpoint create --region RegionOne identity internal http://controller:5000/v2.0
        openstack endpoint create --region RegionOne identity admin http://controller:35357/v2.0
        # Create users and projects
        openstack project create --domain default --description "Admin Project" admin
        openstack user create --domain default --password=xxxx admin
        openstack role create admin
        openstack role add --project admin --user admin admin
        openstack project create --domain default --description "Service Project" service
        openstack project create --domain default --description "Demo Project" demo
        openstack user create --domain default --password=xxxx demo
        openstack role create user
        openstack role add --project demo --user demo user
    </exec>

    <!-- STEP 3: Image service (Glance) -->
    <filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree>
    <filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree>
    <exec seq="step3" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"
        source /root/bin/admin-openrc.sh
        openstack user create --domain default --password=xxxx glance
        openstack role add --project service --user glance admin
        openstack service create --name glance --description "OpenStack Image service" image
        openstack endpoint create --region RegionOne image public http://controller:9292
        openstack endpoint create --region RegionOne image internal http://controller:9292
        openstack endpoint create --region RegionOne image admin http://controller:9292

        su -s /bin/sh -c "glance-manage db_sync" glance
        service glance-registry restart
        service glance-api restart
        rm -f /var/lib/glance/glance.sqlite
    </exec>

    <!-- STEP 4: Compute service (Nova) -->
    <filetree seq="step4" root="/etc/nova/">conf/controller/nova/nova.conf</filetree>
    <exec seq="step4" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"
        source /root/bin/admin-openrc.sh

        openstack user create --domain default --password=xxxx nova
        openstack role add --project service --user nova admin
        openstack service create --name nova --description "OpenStack Compute" compute

        openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s
        openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s
        openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s

        su -s /bin/sh -c "nova-manage db sync" nova
        service nova-api restart
        service nova-cert restart
        service nova-consoleauth restart
        service nova-scheduler restart
        service nova-conductor restart
        service nova-novncproxy restart
        rm -f /var/lib/nova/nova.sqlite
    </exec>

    <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
    <filetree seq="step5" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree>
    <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree>
    <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/linuxbridge_agent.ini</filetree>
    <filetree seq="step5" root="/etc/neutron/">conf/controller/neutron/l3_agent.ini</filetree>
    <filetree seq="step5" root="/etc/neutron/">conf/controller/neutron/dhcp_agent.ini</filetree>
    <filetree seq="step5" root="/etc/neutron/">conf/controller/neutron/dnsmasq-neutron.conf</filetree>
    <filetree seq="step5" root="/etc/neutron/">conf/controller/neutron/metadata_agent.ini</filetree>
    <exec seq="step5" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"
        source /root/bin/admin-openrc.sh
        openstack user create --domain default --password=xxxx neutron
        openstack role add --project service --user neutron admin
        openstack service create --name neutron --description "OpenStack Networking" network
        openstack endpoint create --region RegionOne network public http://controller:9696
        openstack endpoint create --region RegionOne network internal http://controller:9696
        openstack endpoint create --region RegionOne network admin http://controller:9696
        su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
        service nova-api restart
        service neutron-server restart
        service neutron-plugin-linuxbridge-agent restart
        service neutron-dhcp-agent restart
        service neutron-metadata-agent restart
        service neutron-l3-agent restart
        rm -f /var/lib/neutron/neutron.sqlite
    </exec>

    <!-- STEP 6: Dashboard service -->
    <filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree>
    <exec seq="step6" type="verbatim">
        service apache2 reload
    </exec>

    <exec seq="load-img" type="verbatim">
        source /root/bin/admin-openrc.sh
        
        # Cirros image  
        #wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
        wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.img
        glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.img --disk-format qcow2 --container-format bare --visibility public --progress
        rm /tmp/images/cirros-0.3.4-x86_64-disk*.img
        
        # Ubuntu image
        #wget -P /tmp/images http://138.4.7.228/download/cnvr/ostack-images/trusty-server-cloudimg-amd64-disk1-cnvr.img
        #glance image-create --name "trusty-server-cloudimg-amd64-cnvr" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-cnvr.img --disk-format qcow2 --container-format bare --visibility public --progress
        #rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.img

        # CentOS image
        #wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
        #glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
        #rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2
    </exec>

    <exec seq="create-demo-scenario" type="verbatim">
        source /root/bin/admin-openrc.sh

        # Create internal network
        neutron net-create net0
        neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8

        # Create virtual machine
        mkdir tmp
        openstack keypair create vm1 > tmp/vm1
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0

        # Create external network
        #neutron net-create ExtNet --provider:physical_network external --provider:network_type flat --router:external --shared
        neutron net-create ExtNet --provider:physical_network public --provider:network_type flat --router:external --shared
        neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24
        neutron router-create r0
        neutron router-gateway-set r0 ExtNet
        neutron router-interface-add r0 subnet0

        # Assign floating IP address to vm
        openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1

        # Create security group rules to allow ICMP, SSH and WWW access
        openstack security group rule create --proto icmp --dst-port 0  default
        openstack security group rule create --proto tcp  --dst-port 80 default
        openstack security group rule create --proto tcp  --dst-port 22 default

    </exec>

    <exec seq="create-demo-vm2" type="verbatim">
        source /root/bin/admin-openrc.sh
        # Create virtual machine
        mkdir tmp
        openstack keypair create vm2 > tmp/vm2
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0
        #nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=0290474c-fdd7-4b31-8e1e-021b3c04a470   --security-group default --key-name demo-key demo-instance1
    </exec>

  </vm>

  <vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem>
    <mem>2G</mem>
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.31/24</ipv4>
    </if>
    <if id="2" net="ExtNet">
      <ipv4>10.0.10.31/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-compute.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec>

    <!-- STEP 4: Compute service (Nova) -->
    <filetree seq="step4" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree>
    <filetree seq="step4" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree>
    <exec seq="step4" type="verbatim">
        service nova-compute restart
        rm -f /var/lib/nova/nova.sqlite
    </exec>

    <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
    <filetree seq="step5" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree>
    <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree>
    <exec seq="step5" type="verbatim">
        service nova-compute restart
        service neutron-plugin-linuxbridge-agent restart
    </exec>

  </vm>

  <vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem>
    <mem>2G</mem>
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.32/24</ipv4>
    </if>
    <if id="2" net="ExtNet">
      <ipv4>10.0.10.32/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-compute.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec>

    <!-- STEP 4: Compute service (Nova) -->
    <filetree seq="step4" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree>
    <filetree seq="step4" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree>
    <exec seq="step4" type="verbatim">
        service nova-compute restart
        rm -f /var/lib/nova/nova.sqlite
    </exec>

    <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
    <filetree seq="step5" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree>
    <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree>
    <exec seq="step5" type="verbatim">
        service nova-compute restart
        service neutron-plugin-linuxbridge-agent restart
    </exec>

  </vm>


  <host>
    <hostif net="ExtNet">
       <ipv4>10.0.10.1/24</ipv4>
    </hostif>
    <hostif net="MgmtNet">
      <ipv4>10.0.0.1/24</ipv4>
    </hostif>
  </host>

</vnx>