 
         openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0
Line 444: Line 513:
 
   <vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64">
 
   <vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64">
 
     <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem>
 
     <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem>
     <mem>512M</mem>
+
     <mem>1G</mem>
 
     <if id="1" net="MgmtNet">
 
     <if id="1" net="MgmtNet">
 
       <ipv4>10.0.0.21/24</ipv4>
 
       <ipv4>10.0.0.21/24</ipv4>
Line 469: Line 538:
  
 
     <!-- Copy ntp config and restart service -->
 
     <!-- Copy ntp config and restart service -->
     <filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
+
     <!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized
 +
        between the vms/containers and the host -->
 +
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
 
     <exec seq="on_boot" type="verbatim">
 
     <exec seq="on_boot" type="verbatim">
 
         service chrony restart
 
         service chrony restart
     </exec>
+
     </exec-->
  
 
     <filetree seq="on_boot" root="/root/">conf/network/bin</filetree>
 
     <filetree seq="on_boot" root="/root/">conf/network/bin</filetree>
Line 480: Line 551:
  
 
     <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
 
     <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
     <filetree seq="step5" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree>
+
     <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree>
     <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/network/neutron/ml2_conf.ini</filetree>
+
     <filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/ml2_conf.ini</filetree>
     <filetree seq="step5" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree>
+
 
     <filetree seq="step5" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree>
+
    <filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree>
     <filetree seq="step5" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree>
+
     <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree>
     <filetree seq="step5" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree>
+
     <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree>
     <exec seq="step5" type="verbatim">
+
     <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree>
 +
     <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree>
 +
     <exec seq="step52" type="verbatim">
 
         ovs-vsctl add-br br-vlan
 
         ovs-vsctl add-br br-vlan
 
         ovs-vsctl add-port br-vlan eth3
 
         ovs-vsctl add-port br-vlan eth3
Line 492: Line 565:
 
         ovs-vsctl add-port br-ex eth4
 
         ovs-vsctl add-port br-ex eth4
 
         service openvswitch-switch restart
 
         service openvswitch-switch restart
         service neutron-plugin-openvswitch-agent restart
+
         service neutron-openvswitch-agent restart
 
         service neutron-l3-agent restart
 
         service neutron-l3-agent restart
 
         service neutron-dhcp-agent restart
 
         service neutron-dhcp-agent restart
Line 525: Line 598:
  
 
     <!-- Copy ntp config and restart service -->
 
     <!-- Copy ntp config and restart service -->
     <filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
+
     <!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized
 +
        between the vms/containers and the host -->
 +
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
 
     <exec seq="on_boot" type="verbatim">
 
     <exec seq="on_boot" type="verbatim">
 
         service chrony restart
 
         service chrony restart
     </exec>
+
     </exec-->
  
     <!-- STEP 4: Compute service (Nova) -->
+
     <!-- STEP 42: Compute service (Nova) -->
     <filetree seq="step4" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree>
+
     <filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree>
     <filetree seq="step4" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree>
+
     <filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree>
     <exec seq="step4" type="verbatim">
+
     <exec seq="step42" type="verbatim">
 
         service nova-compute restart
 
         service nova-compute restart
         rm -f /var/lib/nova/nova.sqlite
+
         #rm -f /var/lib/nova/nova.sqlite
 
     </exec>
 
     </exec>
  
 
     <!-- STEP 5: Network service (Neutron) -->
 
     <!-- STEP 5: Network service (Neutron) -->
     <filetree seq="step5" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree>
+
     <filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree>
     <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree>
+
     <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree>
     <exec seq="step5" type="verbatim">
+
    <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree>
 +
     <exec seq="step53" type="verbatim">
 
         ovs-vsctl add-br br-vlan
 
         ovs-vsctl add-br br-vlan
 
         ovs-vsctl add-port br-vlan eth3
 
         ovs-vsctl add-port br-vlan eth3
 
         service openvswitch-switch restart
 
         service openvswitch-switch restart
 
         service nova-compute restart
 
         service nova-compute restart
         service neutron-plugin-openvswitch-agent restart
+
         service neutron-openvswitch-agent restart
 
     </exec>
 
     </exec>
  
Line 574: Line 650:
  
 
     <!-- Copy ntp config and restart service -->
 
     <!-- Copy ntp config and restart service -->
     <filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
+
     <!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized
 +
        between the vms/containers and the host -->
 +
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
 
     <exec seq="on_boot" type="verbatim">
 
     <exec seq="on_boot" type="verbatim">
 
         service chrony restart
 
         service chrony restart
     </exec>
+
     </exec-->
  
     <!-- STEP 4: Compute service (Nova) -->
+
     <!-- STEP 42: Compute service (Nova) -->
     <filetree seq="step4" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree>
+
     <filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree>
     <filetree seq="step4" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree>
+
     <filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree>
     <exec seq="step4" type="verbatim">
+
     <exec seq="step42" type="verbatim">
 
         service nova-compute restart
 
         service nova-compute restart
         rm -f /var/lib/nova/nova.sqlite
+
         #rm -f /var/lib/nova/nova.sqlite
 
     </exec>
 
     </exec>
  
 
     <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
 
     <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
     <filetree seq="step5" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree>
+
     <filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree>
     <filetree seq="step5" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree>
+
     <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree>
     <exec seq="step5" type="verbatim">
+
    <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree>
 +
     <exec seq="step53" type="verbatim">
 
         ovs-vsctl add-br br-vlan
 
         ovs-vsctl add-br br-vlan
 
         ovs-vsctl add-port br-vlan eth3
 
         ovs-vsctl add-port br-vlan eth3
 
         service openvswitch-switch restart
 
         service openvswitch-switch restart
 
         service nova-compute restart
 
         service nova-compute restart
         service neutron-plugin-openvswitch-agent restart
+
         service neutron-openvswitch-agent restart
 
     </exec>
 
     </exec>
  

Latest revision as of 02:04, 29 November 2016

VNX Openstack Mitaka openstack_tutorial-mitaka_4nodes_classic_openvswitch

Introduction

This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.

The scenario is made of four virtual machines: a controller based on LXC, and a network node and two compute nodes based on KVM. Optionally, a third compute node can be added once the scenario is started.

The Openstack version used is Mitaka, and there are two versions of the scenario, based on Ubuntu 14.04 LTS and 16.04 LTS. The deployment scenario is the one named "Classic with Open vSwitch" described in http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html

The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration but, instead of using Devstack to configure the Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario, following the Openstack installation recipes in http://docs.openstack.org/mitaka/install-guide-ubuntu/

Figure 1: Openstack tutorial scenario

Requirements

To use the scenario you need a Linux computer (Ubuntu 14.04 or later recommended) with the VNX software installed. At least 4 GB of memory are needed to execute the scenario.

See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install

If already installed, update VNX to the latest version with:

vnx_update

To make startup faster, enable one-pass-autoconfiguration and virtio for KVM virtual machines in /etc/vnx.conf:

[libvirt]
...
one_pass_autoconf=yes
virtio=yes

Check that KVM nested virtualization is enabled:

cat /sys/module/kvm_intel/parameters/nested
Y

If it is not enabled, see, for example, http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html for instructions on enabling it.
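
On Intel hosts, nested virtualization can typically be enabled by setting the nested option of the kvm_intel module and reloading it. A minimal sketch (the exact steps depend on your distribution, and all running VMs must be stopped before reloading the module):

echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-intel.conf   # make the setting persistent
modprobe -r kvm_intel                                                # unload the module
modprobe kvm_intel                                                   # reload it with nesting enabled
cat /sys/module/kvm_intel/parameters/nested                          # should now print Y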

Installation

Download the scenario with the virtual machine images included and unpack it:

For Ubuntu 16.04:

wget http://idefix.dit.upm.es/download/vnx/examples/openstack/openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01-with-rootfs-u16.04.tgz
vnx --unpack openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01-with-rootfs-u16.04.tgz

For Ubuntu 14.04:

wget http://idefix.dit.upm.es/download/vnx/examples/openstack/openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01-with-rootfs-u14.04.tgz
vnx --unpack openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01-with-rootfs-u14.04.tgz

Alternatively, you can download the much lighter version without the images and create the root filesystems from scratch on your computer:

wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01.tgz
vnx --unpack openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01.tgz
cd openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01/filesystems

For the Ubuntu 16.04 version:

./create-lxc_ubuntu64-16.04-ostack-controller
./create-kvm_ubuntu64-16.04-ostack-network
./create-kvm_ubuntu64-16.04-ostack-compute

For the Ubuntu 14.04 version:

./create-lxc_ubuntu64-14.04-ostack-controller
./create-kvm_ubuntu64-14.04-ostack-network
./create-kvm_ubuntu64-14.04-ostack-compute

Note: for KVM root filesystems, if you want to see the installation progress, just access the virtual machine console (root/xxxx) and execute:

tail -f /var/log/cloud-init-output.log

Starting the scenario

Start the scenario, configure it and load example cirros and ubuntu images with:

cd openstack_tutorial-mitaka_4nodes_classic_openvswitch-v01
# Start the scenario
vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v -t
# Configure all Openstack services
vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v -x start-all
# Load vm images
vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v -x load-img


Figure 2: Openstack tutorial detailed topology

Once started, you can connect to the Openstack Dashboard (domain/user/password: default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:

firefox 10.0.10.11/horizon

Access Dashboard page "Project|Network|Network topology" and create a simple demo scenario inside Openstack:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v -x create-demo-scenario

You should see the simple scenario as it is being created through the Dashboard.

Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the opposite (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).
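
For example, assuming vm1 got the floating IP 10.0.10.102 (check the actual value in the Dashboard), you could test it from the host with:

ping -c 3 10.0.10.102    # basic reachability test
ssh cirros@10.0.10.102   # "cirros" is the default user of the cirros image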

You can create a second virtual machine (vm2) to test connectivity among virtual machines with:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v -x create-demo-vm2

To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the vnx_config_nat command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:

vnx_config_nat ExtNet eth0
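
If you are not sure which interface is the public one, the default route usually reveals it:

ip route show default    # the interface after "dev" is normally the public one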

In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:

slogin root@controller     # root/xxxx
source bin/admin-openrc.sh # Load admin credentials
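
The admin-openrc.sh script simply exports the environment variables used by the openstack client to authenticate. As a sketch, following the Mitaka install guide conventions (the actual values are in conf/controller/keystone/admin-openrc.sh), it contains something like:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=xxxx
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3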

For example, to show the virtual machines started:

openstack server list
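
Other standard openstack client commands can be used in the same session, for instance:

openstack network list    # networks defined in Openstack
openstack image list      # images loaded with load-img
openstack flavor list     # available flavors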

Connecting Openstack VMs to external systems using VLAN network interfaces

Compute nodes in this scenario have two network interfaces for internal and external connections:

  • eth2, connected to the Tunnel network and used to connect with VMs in other compute nodes or routers in the network node
  • eth3, connected to the VLAN network and used for the same purpose, and also to connect to external systems through the VLAN-based network infrastructure.

To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v -x create-vlan-demo-scenario

That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vm3 and vm4, connected to those networks. You can see the scenario created through the Openstack Dashboard.

The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):

# Networks
neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan   --provider:segmentation_id 1000
neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan   --provider:segmentation_id 1001
neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8
neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8

# VMs
mkdir -p tmp
openstack keypair create vm3 > tmp/vm3
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000
openstack keypair create vm4 > tmp/vm4
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001

To demonstrate the connectivity of vm3 and vm4 with external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA (vlan 1000), vmB (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch-vms-vlan.xml -v -t

Once the scenario is started, you should be able to ping and ssh among vm3, vm4, vmA and vmB.

You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command in the host:

ovs-vsctl show
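
To inspect one of the bridges reported there in more detail, two standard Open vSwitch commands can be used (replace BRIDGE with one of the bridge names shown by ovs-vsctl show):

ovs-vsctl list-ports BRIDGE    # ports attached to the bridge
ovs-ofctl dump-flows BRIDGE    # OpenFlow rules installed on the bridge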


Figure 3: Openstack Dashboard view of the demo virtual scenarios created

Stopping or releasing the scenario

To stop the scenario preserving the configuration and the changes made:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v --shutdown

To start it again use:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v --start

To stop the scenario destroying all the configuration and changes made:

vnx -f openstack_tutorial-mitaka_4nodes_classic_openvswitch.xml -v --destroy

To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):

vnx_config_nat -d ExtNet eth0

Adding a third compute node (compute3)

To add a third compute node to the scenario once it is started, you can use the VNX modify capability:

vnx -s openstack_tutorial-mitaka_4nodes_classic_openvswitch --modify others/add-compute3.xml -v
vnx -s openstack_tutorial-mitaka_4nodes_classic_openvswitch -v -x start-all -M compute3

Once the new node has joined the scenario, you must use the "-s" option instead of "-f" to manage it (if not, the compute3 node will not be considered). For example:

vnx -s openstack_tutorial-mitaka_4nodes_classic_openvswitch -v --destroy

Other useful information

To pack the scenario in a tgz file including the root filesystems use:

bin/pack-scenario --include-rootfs

To pack the scenario without the root filesystems, just omit the "--include-rootfs" parameter.

Other Openstack Dashboard screen captures

Figure 4: Openstack Dashboard compute overview
Figure 5: Openstack Dashboard view of the demo virtual machines created

XML specification of Openstack tutorial scenario

<?xml version="1.0" encoding="UTF-8"?>

<!--
~~~~~~~~~~~~~~~~~~
VNX Sample scenarios
~~~~~~~~~~~~~~~~~~

Name:        openstack_tutorial-mitaka_4nodes_classic_openvswitch
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source 
             software platform for cloud-computing. The scenario is made of four virtual machines: a controller 
             based on LXC and a network node and two compute nodes based on KVM. Optionally, a third compute 
             node can be added once the scenario is started.
             Openstack version used: Mitaka.
             The network configuration is the one named "Classic with Open vSwitch" described here:
                  http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html

Author:      David Fernandez (david@dit.upm.es)

This file is part of the Virtual Networks over LinuX (VNX) Project distribution. 
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) 

Departamento de Ingenieria de Sistemas Telematicos (DIT)
Universidad Politecnica de Madrid
SPAIN
-->

<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
  <global>
    <version>2.0</version>
    <scenario_name>openstack_tutorial-mitaka_4nodes_classic_openvswitch</scenario_name>
    <ssh_key>/root/.ssh/id_dsa.pub</ssh_key>
    <automac/>
    <!--vm_mgmt type="none" /-->
    <vm_mgmt type="private" network="10.250.0.0" mask="24" offset="12">
       <host_mapping />
    </vm_mgmt> 
    <vm_defaults>
        <console id="0" display="no"/>
        <console id="1" display="yes"/>
    </vm_defaults>
    <cmd-seq seq="step4">step41,step42</cmd-seq>
    <cmd-seq seq="step5">step51,step52,step53</cmd-seq>
    <cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq>
  </global>

  <net name="MgmtNet" mode="virtual_bridge" />
  <net name="TunnNet" mode="virtual_bridge" />
  <net name="ExtNet"  mode="virtual_bridge" />
  <net name="VlanNet" mode="openvswitch" />
  <net name="virbr0"  mode="virtual_bridge" managed="no"/>

  <vm name="controller" type="lxc" arch="x86_64">
    <filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem>
    <console id="0" display="yes"/>
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.11/24</ipv4>
    </if>
    <if id="2" net="ExtNet">
      <ipv4>10.0.10.11/24</ipv4>
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized
         between the vms/containers and the host --> 
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec-->

    <filetree seq="on_boot" root="/root/">conf/controller/bin</filetree>
    <exec seq="on_boot" type="verbatim">
        chmod +x /root/bin/*
    </exec>

    <!-- STEP 1: Basic services -->
    <filetree seq="step1" root="/etc/mysql/conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree>
    <!--
        # mariadb in ubuntu 16.04 does not seem to manage the "bind-address" setting correctly, and a
        # "DBAPIError exception ... 'Specified key was too long; max key length is 767 bytes'" error appears
        # when executing su -s /bin/sh -c "keystone-manage db_sync" keystone.
        # The bind problem can be solved by changing the default value of bind-address to listen on all
        # interfaces:
        #     sed -i -e 's/^bind-address\s*=.*/bind-address = 0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
        # But both problems can be solved with this 60-openstack.cnf file taken from
        #     https://bugs.launchpad.net/openstack-manuals/+bug/1575688
    -->
    <filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/60-openstack.cnf</filetree>
    <filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree>
    <exec seq="step1" type="verbatim">
        # Change all occurrences of utf8mb4 to utf8. See the comment above
        for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done
        service mysql restart
        #mysql_secure_installation # to be run manually

        service mongodb stop
        rm -f /var/lib/mongodb/journal/prealloc.*
        service mongodb start
        rabbitmqctl add_user openstack xxxx
        rabbitmqctl set_permissions openstack ".*" ".*" ".*" 
    </exec>

    <!-- STEP 2: Identity service -->
    <filetree seq="step2" root="/etc/">conf/controller/memcached/memcached.conf</filetree>
    <filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree>
    <filetree seq="step2" root="/etc/apache2/sites-available/">conf/controller/apache2/wsgi-keystone.conf</filetree>
    <filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree>
    <filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree>
    <exec seq="step2" type="verbatim">
        service memcached restart
        mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx'; flush privileges;"

        su -s /bin/sh -c "keystone-manage db_sync" keystone
        keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

        echo "ServerName controller" >> /etc/apache2/apache2.conf
        ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
        service apache2 restart
        rm -f /var/lib/keystone/keystone.db
        sleep 3

        export OS_TOKEN=ee173fc22384618b472e
        export OS_URL=http://controller:35357/v3
        export OS_IDENTITY_API_VERSION=3
        # Create endpoints
        openstack service create --name keystone --description "OpenStack Identity" identity
        openstack endpoint create --region RegionOne identity public http://controller:5000/v3
        openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
        openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
        # Create users and projects
        openstack domain create --description "Default Domain" default
        openstack project create --domain default --description "Admin Project" admin
        openstack user create --domain default --password=xxxx admin
        openstack role create admin
        openstack role add --project admin --user admin admin
        openstack project create --domain default --description "Service Project" service
        openstack project create --domain default --description "Demo Project" demo
        openstack user create --domain default --password=xxxx demo
        openstack role create user
        openstack role add --project demo --user demo user
    </exec>

    <!-- STEP 3: Image service (Glance) -->
    <filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree>
    <filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree>
    <exec seq="step3" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"
        source /root/bin/admin-openrc.sh
        openstack user create --domain default --password=xxxx glance
        openstack role add --project service --user glance admin
        openstack service create --name glance --description "OpenStack Image service" image
        openstack endpoint create --region RegionOne image public http://controller:9292
        openstack endpoint create --region RegionOne image internal http://controller:9292
        openstack endpoint create --region RegionOne image admin http://controller:9292

        su -s /bin/sh -c "glance-manage db_sync" glance
        service glance-registry restart
        service glance-api restart
        rm -f /var/lib/glance/glance.sqlite
    </exec>

    <!-- STEP 4: Compute service (Nova) -->
    <filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree>
    <exec seq="step41" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"
        mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"

        source /root/bin/admin-openrc.sh

        openstack user create --domain default --password=xxxx nova
        openstack role add --project service --user nova admin
        openstack service create --name nova --description "OpenStack Compute" compute

        openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
        openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
        openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

        su -s /bin/sh -c "nova-manage api_db sync" nova
        su -s /bin/sh -c "nova-manage db sync" nova
        service nova-api restart
        service nova-consoleauth restart
        service nova-scheduler restart
        service nova-conductor restart
        service nova-novncproxy restart
        rm -f /var/lib/nova/nova.sqlite
    </exec>
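
    <!-- Optional verification sketch (ours; seq name "verify-step41" is an addition):
         nova-consoleauth, nova-scheduler and nova-conductor should report state "up"
         once the restarts above complete. -->
    <exec seq="verify-step41" type="verbatim">
        source /root/bin/admin-openrc.sh
        nova service-list
    </exec>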

    <!-- STEP 5: Network service (Neutron) -->
    <filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree>
    <filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree>
    <exec seq="step51" type="verbatim">
        mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"
        source /root/bin/admin-openrc.sh
        openstack user create --domain default --password=xxxx neutron
        openstack role add --project service --user neutron admin
        openstack service create --name neutron --description "OpenStack Networking" network
        openstack endpoint create --region RegionOne network public http://controller:9696
        openstack endpoint create --region RegionOne network internal http://controller:9696
        openstack endpoint create --region RegionOne network admin http://controller:9696
        su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
        service nova-api restart
        service neutron-server restart
    </exec>
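
    <!-- Optional verification sketch (ours; seq name "verify-step51" is an addition):
         checks that neutron-server answers; the agents appear in the list as the
         network and compute nodes are configured in steps 52 and 53. -->
    <exec seq="verify-step51" type="verbatim">
        source /root/bin/admin-openrc.sh
        neutron ext-list
        neutron agent-list
    </exec>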

    <!-- STEP 6: Dashboard service -->
    <filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree>
    <exec seq="step6" type="verbatim">
        service apache2 reload
    </exec>
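
    <!-- Once apache2 is reloaded, the dashboard should be reachable from the host
         at http://controller/horizon (log in as admin or demo with the password
         configured above); the exact URL depends on local_settings.py. -->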

    <exec seq="load-img" type="verbatim">
        source /root/bin/admin-openrc.sh
        
        # Cirros image  
        #wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
        wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.img
        glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.img --disk-format qcow2 --container-format bare --visibility public --progress
        rm /tmp/images/cirros-0.3.4-x86_64-disk*.img
        
        # Ubuntu image
        wget -P /tmp/images http://138.4.7.228/download/cnvr/ostack-images/trusty-server-cloudimg-amd64-disk1-cnvr.img
        glance image-create --name "trusty-server-cloudimg-amd64-cnvr" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-cnvr.img --disk-format qcow2 --container-format bare --visibility public --progress
        rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.img
        # Create a smaller flavor suitable for the Ubuntu image
        openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6

        # CentOS image
        #wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
        #glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
        #rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2
    </exec>
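
    <!-- Optional verification sketch (ours; seq name "verify-load-img" is an addition):
         both images loaded above should be listed as "active". -->
    <exec seq="verify-load-img" type="verbatim">
        source /root/bin/admin-openrc.sh
        glance image-list
        openstack flavor list   # should include the m1.smaller flavor
    </exec>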

    <exec seq="create-demo-scenario" type="verbatim">
        source /root/bin/admin-openrc.sh

        # Create internal network
        neutron net-create net0
        neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8

        # Create virtual machine
        mkdir -p tmp
        openstack keypair create vm1 > tmp/vm1
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0

        # Create external network
        neutron net-create ExtNet --provider:physical_network external --provider:network_type flat --router:external --shared
        neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24
        neutron router-create r0
        neutron router-gateway-set r0 ExtNet
        neutron router-interface-add r0 subnet0

        # Assign floating IP address to vm
        openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1

        # Create security group rules to allow ICMP, SSH and WWW access
        openstack security group rule create --proto icmp default
        openstack security group rule create --proto tcp  --dst-port 22 default
        openstack security group rule create --proto tcp  --dst-port 80 default

    </exec>
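
    <!-- Optional verification sketch (ours; seq name "verify-demo" is an addition):
         vm1 should reach ACTIVE with a fixed address on net0 plus a floating address
         from ExtNet; once booted it should answer pings on the floating IP from the host. -->
    <exec seq="verify-demo" type="verbatim">
        source /root/bin/admin-openrc.sh
        openstack server list
        neutron floatingip-list
    </exec>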

    <exec seq="create-demo-vm2" type="verbatim">
        source /root/bin/admin-openrc.sh
        # Create virtual machine
        mkdir -p tmp
        openstack keypair create vm2 > tmp/vm2
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0
    </exec>

    <exec seq="create-vlan-demo-scenario" type="verbatim">
        source /root/bin/admin-openrc.sh

        # Create vlan based networks and subnetworks
        neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan   --provider:segmentation_id 1000
        neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan   --provider:segmentation_id 1001
        neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8
        neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8

        # Create virtual machine
        mkdir -p tmp
        openstack keypair create vm3 > tmp/vm3
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000
        openstack keypair create vm4 > tmp/vm4
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001

    </exec>
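
    <!-- Optional verification sketch (ours; seq name "verify-vlan-demo" is an addition):
         shows the segmentation ids of the two provider networks and the state of vm3 and vm4. -->
    <exec seq="verify-vlan-demo" type="verbatim">
        source /root/bin/admin-openrc.sh
        neutron net-show vlan1000
        neutron net-show vlan1001
        openstack server list
    </exec>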


  </vm>

  <vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64">
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem>
    <mem>1G</mem>
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.21/24</ipv4>
    </if>
    <if id="2" net="TunnNet">
      <ipv4>10.0.1.21/24</ipv4>
    </if>
    <if id="3" net="VlanNet">
    </if>
    <if id="4" net="ExtNet">
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>
    <forwarding type="ip" />
    <forwarding type="ipv6" />

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <!-- Note: not used because NTP cannot run inside the LXC controller container. Clocks
         are instead assumed to be kept synchronized between the VMs/containers and the host -->
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec-->

    <filetree seq="on_boot" root="/root/">conf/network/bin</filetree>
    <exec seq="on_boot" type="verbatim">
        chmod +x /root/bin/*
    </exec>

    <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
    <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree>
    <filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/ml2_conf.ini</filetree>

    <filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree>
    <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree>
    <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree>
    <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree>
    <filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree>
    <exec seq="step52" type="verbatim">
        ovs-vsctl add-br br-vlan
        ovs-vsctl add-port br-vlan eth3
        ovs-vsctl add-br br-ex
        ovs-vsctl add-port br-ex eth4
        service openvswitch-switch restart
        service neutron-openvswitch-agent restart
        service neutron-l3-agent restart
        service neutron-dhcp-agent restart
        service neutron-metadata-agent restart
        rm -f /var/lib/neutron/neutron.sqlite
    </exec>
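
    <!-- Optional verification sketch (ours; seq name "verify-step52" is an addition):
         br-vlan and br-ex should show eth3 and eth4 as ports, and the openvswitch
         agent should have created br-int and br-tun. -->
    <exec seq="verify-step52" type="verbatim">
        ovs-vsctl show
    </exec>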

  </vm>


  <vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem>
    <mem>2G</mem>
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.31/24</ipv4>
    </if>
    <if id="2" net="TunnNet">
      <ipv4>10.0.1.31/24</ipv4>
    </if>
    <if id="3" net="VlanNet">
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <!-- Note: not used because NTP cannot run inside the LXC controller container. Clocks
         are instead assumed to be kept synchronized between the VMs/containers and the host -->
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec-->

    <!-- STEP 42: Compute service (Nova) -->
    <filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree>
    <filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree>
    <exec seq="step42" type="verbatim">
        service nova-compute restart
        #rm -f /var/lib/nova/nova.sqlite
    </exec>

    <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
    <filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree>
    <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree>
    <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree>
    <exec seq="step53" type="verbatim">
        ovs-vsctl add-br br-vlan
        ovs-vsctl add-port br-vlan eth3
        service openvswitch-switch restart
        service nova-compute restart
        service neutron-openvswitch-agent restart
    </exec>
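
    <!-- Optional verification sketch (ours; seq name "verify-step53" is an addition):
         br-vlan should show eth3 as a port; once the agent is up, this node also
         appears in "neutron agent-list" run on the controller. -->
    <exec seq="verify-step53" type="verbatim">
        ovs-vsctl show
    </exec>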

  </vm>

  <vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem>
    <mem>2G</mem>
    <if id="1" net="MgmtNet">
      <ipv4>10.0.0.32/24</ipv4>
    </if>
    <if id="2" net="TunnNet">
      <ipv4>10.0.1.32/24</ipv4>
    </if>
    <if id="3" net="VlanNet">
    </if>
    <if id="9" net="virbr0">
      <ipv4>dhcp</ipv4>
    </if>

    <!-- Copy /etc/hosts file -->
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
    <exec seq="on_boot" type="verbatim">
        cat /root/hosts >> /etc/hosts;
        rm /root/hosts;
    </exec>

    <!-- Copy ntp config and restart service -->
    <!-- Note: not used because NTP cannot run inside the LXC controller container. Clocks
         are instead assumed to be kept synchronized between the VMs/containers and the host -->
    <!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree>
    <exec seq="on_boot" type="verbatim">
        service chrony restart
    </exec-->

    <!-- STEP 42: Compute service (Nova) -->
    <filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree>
    <filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree>
    <exec seq="step42" type="verbatim">
        service nova-compute restart
        #rm -f /var/lib/nova/nova.sqlite
    </exec>

    <!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) -->
    <filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree>
    <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree>
    <filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree>
    <exec seq="step53" type="verbatim">
        ovs-vsctl add-br br-vlan
        ovs-vsctl add-port br-vlan eth3
        service openvswitch-switch restart
        service nova-compute restart
        service neutron-openvswitch-agent restart
    </exec>

  </vm>


  <host>
    <hostif net="ExtNet">
       <ipv4>10.0.10.1/24</ipv4>
    </hostif>
    <hostif net="MgmtNet">
      <ipv4>10.0.0.1/24</ipv4>
    </hostif>
  </host>

</vnx>