{{Title|VNX Openstack laboratories}}

== Introduction ==

This is a set of Openstack tutorial scenarios designed to experiment with [http://openstack.org Openstack], a free and open-source software platform for cloud computing.

Several tutorial scenarios are available, covering the Antelope, Stein, Ocata, Mitaka, Liberty and Kilo Openstack versions and several deployment configurations:

<ul>
<li>'''Openstack Antelope:'''
<ul>
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-antelope|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Antelope (2023.1) made of four virtual machines: a controller, a network node and two compute nodes, all based on LXC.</li>
</ul>
</li>
<li>'''Openstack Stein:'''
<ul>
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-stein|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Stein (April 2019) made of four virtual machines: a controller, a network node and two compute nodes, all based on LXC.</li>
</ul>
</li>
<li>'''Openstack Ocata:'''
<ul>
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-ocata|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Ocata made of four virtual machines: a controller, a network node and two compute nodes, all based on LXC. The deployment scenario used is [http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html Classic with Open vSwitch].</li>
</ul>
</li>
<li>'''Openstack Mitaka:'''
<ul>
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Mitaka made of four virtual machines: a controller based on LXC, and a network node and two compute nodes based on KVM. The deployment scenario used is [http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html Classic with Open vSwitch].</li>
</ul>
</li>
<li>'''Openstack Liberty:'''
<ul>
<li>[[Vnx-labo-openstack-3nodes-basic-liberty|Liberty 3-nodes-basic]]. A basic scenario using Openstack Liberty made of three KVM virtual machines: a controller with networking capabilities and two compute nodes.</li>
<li>[[Vnx-labo-openstack-4nodes-basic-liberty|Liberty 4-nodes-legacy-openvswitch]]. A basic scenario using Openstack Liberty made of four virtual machines: a controller based on LXC, and a network node and two compute nodes based on KVM. The deployment scenario used is [http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html Legacy with Open vSwitch].</li>
</ul>
</li>
<li>'''Openstack Kilo:'''
<ul>
<li>[[Vnx-labo-openstack-4nodes-basic-kilo|Kilo 4-nodes-basic]]. A basic scenario using Openstack Kilo made of four virtual machines: a controller based on LXC, and a network node and two compute nodes based on KVM.</li>
</ul>
</li>
</ul>

The rest of this page describes the Kilo tutorial scenario in detail. The scenario is made of four virtual machines: a controller based on LXC, and a network node and two compute nodes based on KVM. Optionally, a third compute node can be added once the scenario is started.

All virtual machines use Ubuntu 14.04.3 LTS and Openstack Kilo.

The scenario has been inspired by the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration but, instead of using Devstack to configure the Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario, following the Openstack installation recipes in http://docs.openstack.org/kilo/install-guide/install/apt/content/

[[File:Openstack_tutorial.png|center|thumb|600px|<div align=center>'''Figure 1: Openstack tutorial scenario'''</div>]]

== Requirements ==

To use the scenario you need a Linux computer (Ubuntu 14.04 or later recommended) with the VNX software installed. At least 4 GB of memory are needed to execute the scenario.

See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install
 
 
If already installed, update VNX to the latest version with:
 
 
 
<pre>
vnx_update
</pre>
 
 
 
To make startup faster, enable one-pass-autoconfiguration for KVM virtual machines in /etc/vnx.conf:
 
 
 
<pre>
[libvirt]
...
one_pass_autoconf=yes
</pre>
 
 
 
Check that KVM nested virtualization is enabled:
 
 
 
<pre>
cat /sys/module/kvm_intel/parameters/nested
Y
</pre>
 
 
 
If it is not enabled, see, for example, http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html for instructions on how to enable it.
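For reference, on Intel hosts nested virtualization can usually be enabled by reloading the kvm_intel module with the nested option and making the setting persistent in the modprobe configuration (a minimal sketch assuming an Intel CPU; on AMD the module and option are kvm_amd/nested):

<pre>
# Reload the KVM module with nested virtualization enabled (Intel CPUs)
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1

# Persist the setting across reboots
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
</pre>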
 
 
 
== Installation ==
 
 
 
Download the scenario with the virtual machine images included and unpack it:
 
 
 
<pre>
wget http://idefix.dit.upm.es/cnvr/openstack_tutorial-v013-with-rootfs.tgz
vnx --unpack openstack_tutorial-v013-with-rootfs.tgz
</pre>
 
 
 
Alternatively, you can download the much lighter version without the images and create the root filesystems from scratch on your computer:
 
 
 
<pre>
wget http://idefix.dit.upm.es/cnvr/openstack_tutorial-v013.tgz
vnx --unpack openstack_tutorial-v013.tgz
cd openstack_tutorial-v013/filesystems
./create-kvm_ubuntu64-ostack-compute
./create-kvm_ubuntu64-ostack-network
./create-lxc_ubuntu64-ostack-controller
</pre>
 
 
 
== Starting the scenario ==
 
 
 
Start the scenario, configure it and load an example cirros image with:
 
<pre>
cd openstack_tutorial-v013
vnx -f openstack_tutorial-4nodes.xml -v -t
vnx -f openstack_tutorial-4nodes.xml -v -x start-all
vnx -f openstack_tutorial-4nodes.xml -v -x load-img
</pre>
 
 
 
[[File:Openstack_tutorial2.png|center|thumb|600px|<div align=center>'''Figure 2: Openstack tutorial detailed topology'''</div>]]
 
 
 
Once started, you can connect to the Openstack Dashboard (admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:
 
 
 
<pre>
firefox 10.0.10.11/horizon
</pre>
 
 
 
Access the Dashboard page "Project|Network|Network topology" and create a simple demo scenario inside Openstack:
 
 
 
<pre>
vnx -f openstack_tutorial-4nodes.xml -v -x create-demo-scenario
</pre>
 
 
 
You should see the simple scenario as it is being created through the Dashboard.
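You can also inspect what has been created from the command line on the controller (a quick check using the Kilo-era clients and the admin credentials file installed by the scenario):

<pre>
# On the controller node, load the admin credentials
source /root/bin/admin-openrc.sh

# List the networks and instances created by create-demo-scenario
neutron net-list
nova list
</pre>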
 
 
 
Once created, you should be able to access the vm1 console and to ping or ssh from the host to vm1 or the other way around (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).
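For example, assuming vm1 has been assigned the floating IP 10.0.10.102 (check the actual value in the Dashboard):

<pre>
# Ping vm1 from the host through its floating IP
ping -c 3 10.0.10.102

# Log in to vm1 over ssh (cirros 0.3.4 default user: cirros, password: cubswin:))
ssh cirros@10.0.10.102
</pre>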
 
 
 
Finally, to allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the vnx_config_nat command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:
 
 
 
<pre>
vnx_config_nat ExtNet eth0
</pre>
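If you are not sure which one is the public interface, the interface holding the default route is usually a good guess (a quick check assuming the iproute2 tools are available):

<pre>
# Show the interface used by the default route
ip route | awk '/^default/ {print $5}'
</pre>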
 
 
 
== Stopping or releasing the scenario ==
 
 
 
To stop the scenario preserving the configuration and the changes made:
 
 
 
<pre>
vnx -f openstack_tutorial-4nodes.xml -v --shutdown
</pre>
 
 
 
To start it again use:
 
 
 
<pre>
vnx -f openstack_tutorial-4nodes.xml -v --start
</pre>
 
 
 
To stop the scenario destroying all the configuration and changes made:
 
 
 
<pre>
vnx -f openstack_tutorial-4nodes.xml -v --destroy
</pre>
 
 
 
To unconfigure the NAT, just execute:
 
 
 
<pre>
vnx_config_nat -d ExtNet eth0
</pre>
 
 
 
== Adding a third compute node (compute3) ==
 
 
 
To add a third compute node to the scenario once it is started, you can use the VNX modify capability:
 
 
 
<pre>
vnx -s openstack_tutorial-4nodes --modify others/add-compute3.xml -v
vnx -s openstack_tutorial-4nodes -v -x start-all -M compute3
</pre>
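To verify that the new node has registered with Nova, you can list the compute services from the controller (a quick check using the Kilo-era client; the list should now include a nova-compute service on compute3):

<pre>
# On the controller node
source /root/bin/admin-openrc.sh
nova service-list
</pre>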
 
 
 
Once the new node has joined the scenario, you must use the "-s" option instead of "-f" to manage it (otherwise, the compute3 node will not be taken into account). For example:
 
 
 
<pre>
vnx -s openstack_tutorial-4nodes -v --destroy
</pre>
 
 
 
== Other useful information ==
 
 
 
To pack the scenario in a tgz file including the root filesystems use:
 
 
 
<pre>
bin/pack-scenario --include-rootfs
</pre>
 
 
 
To pack the scenario without the root filesystems, just omit the "--include-rootfs" parameter.
 
 
 
== XML specification of Openstack tutorial scenario ==
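The complete VNX specification of the scenario is shown below. The configuration is organized as a set of command sequences (step2 to step10, grouped by the start-all sequence defined in the global section). If needed, an individual step can be re-executed on a single node with the usual VNX options, for example:

<pre>
# Re-run the Glance configuration step (step5) on the controller only
vnx -f openstack_tutorial-4nodes.xml -v -x step5 -M controller
</pre>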
 
 
 
<pre>
 
<?xml version="1.0" encoding="UTF-8"?>
 
 
 
<!--
 
~~~~~~~~~~~~~~~~~~
 
VNX Sample scenarios
 
~~~~~~~~~~~~~~~~~~
 
 
 
Name:        openstack_tutorial-4nodes
 
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source
 
            software platform for cloud-computing. The scenario is made of four virtual machines: a controller
 
            based on LXC and a network node and two compute nodes based on KVM. Optionally, a third compute
 
            node can be added once the scenario is started.
 
 
 
Author:      David Fernandez (david@dit.upm.es)
 
 
 
This file is part of the Virtual Networks over LinuX (VNX) Project distribution.
 
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es)
 
 
 
Departamento de Ingenieria de Sistemas Telematicos (DIT)
 
Universidad Politecnica de Madrid
 
SPAIN
 
-->
 
 
 
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 
  xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd">
 
  <global>
 
    <version>2.0</version>
 
    <scenario_name>openstack_tutorial-4nodes</scenario_name>
 
    <automac/>
 
    <!--vm_mgmt type="none" /-->
 
    <vm_mgmt type="private" network="10.250.0.0" mask="24" offset="12">
 
      <host_mapping />
 
    </vm_mgmt>
 
    <vm_defaults>
 
        <console id="0" display="no"/>
 
        <console id="1" display="yes"/>
 
    </vm_defaults>
 
    <cmd-seq seq="start-all">step2,step3,step4,step5,step6,step7,step8,step9,step10</cmd-seq>
 
  </global>
 
 
 
  <net name="MgmtNet" mode="virtual_bridge" />
 
  <net name="TunnNet" mode="virtual_bridge" />
 
  <net name="ExtNet"  mode="virtual_bridge" />
 
  <net name="virbr0"  mode="virtual_bridge" managed="no"/>
 
 
 
  <vm name="controller" type="lxc" arch="x86_64">
 
    <console id="0" display="yes"/>
 
    <filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem>
 
    <!--mem>2G</mem-->
 
    <if id="1" net="MgmtNet">
 
      <ipv4>10.0.0.11/24</ipv4>
 
    </if>
 
    <if id="2" net="ExtNet">
 
      <ipv4>10.0.10.11/24</ipv4>
 
    </if>
 
    <if id="9" net="virbr0">
 
      <ipv4>dhcp</ipv4>
 
    </if>
 
 
 
    <!-- Copy /etc/hosts file -->
 
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        cat /root/hosts >> /etc/hosts;
 
        rm /root/hosts;
 
    </exec>
 
 
 
    <filetree seq="on_boot" root="/root/">conf/controller/bin</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        chmod +x /root/bin/*
 
    </exec>
 
 
 
    <!--exec seq="step1" type="verbatim">
 
        apt-get -y install ubuntu-cloud-keyring
 
        echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
 
        apt-get update
 
        apt-get -y dist-upgrade
 
    </exec-->
 
 
 
    <filetree seq="step2" root="/etc/mysql/conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree>
 
    <exec seq="step2" type="verbatim">
 
        #export DEBIAN_FRONTEND=noninteractive
 
        #debconf-set-selections &lt;&lt;&lt; 'mariadb-server-5.5 mysql-server/root_password password xxxx'
 
        #debconf-set-selections &lt;&lt;&lt; 'mariadb-server-5.5 mysql-server/root_password_again password xxxx'
 
        #apt-get -y install mariadb-server python-mysqldb
 
        #apt-get -y install rabbitmq-server
 
        rabbitmqctl add_user openstack xxxx
 
        rabbitmqctl set_permissions openstack ".*" ".*" ".*"
 
    </exec>
 
 
 
    <filetree seq="step3" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree>
 
    <exec seq="step3" type="verbatim">
 
        service mysql restart
 
        mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"
 
        #echo "manual" > /etc/init/keystone.override
 
        #export DEBIAN_FRONTEND=noninteractive
 
        #apt-get install -y -o Dpkg::Options::="--force-confold" keystone python-openstackclient apache2 libapache2-mod-wsgi memcached python-memcache
 
        keystone-manage db_sync
 
    </exec>
 
 
 
    <filetree seq="step4" root="/etc/apache2/sites-enabled/">conf/controller/apache2/wsgi-keystone.conf</filetree>
 
    <exec seq="step4" type="verbatim">
 
        echo "ServerName controller" >> /etc/apache2/apache2.conf
 
        mkdir -p /var/www/cgi-bin/keystone
 
        curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin
 
        chown -R keystone:keystone /var/www/cgi-bin/keystone
 
        chmod 755 /var/www/cgi-bin/keystone/*
 
 
 
        service apache2 restart
 
        rm -f /var/lib/keystone/keystone.db
 
 
 
        export OS_TOKEN=ee173fc22384618b472e
 
        export OS_URL=http://controller:35357/v2.0
 
        openstack service create --name keystone --description "OpenStack Identity" identity
 
 
 
        openstack endpoint create --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region RegionOne identity
 
        openstack project create --description "Admin Project" admin
 
        openstack user create --password=xxxx admin
 
        openstack role create admin
 
        openstack role add --project admin --user admin admin
 
        openstack project create --description "Service Project" service
 
        openstack project create --description "Demo Project" demo
 
        openstack user create --password=xxxx demo
 
        openstack role create user
 
        openstack role add --project demo --user demo user
 
    </exec>
 
 
 
    <filetree seq="step5" root="/root/bin/">conf/controller/glance/admin-openrc.sh</filetree>
 
    <filetree seq="step5" root="/root/bin/">conf/controller/glance/demo-openrc.sh</filetree>
 
    <filetree seq="step5" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree>
 
    <filetree seq="step5" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree>
 
    <exec seq="step5" type="verbatim">
 
        mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"
 
        source /root/bin/admin-openrc.sh
 
        openstack user create --password=xxxx glance
 
        openstack role add --project service --user glance admin
 
        openstack service create --name glance --description "OpenStack Image service" image
 
        openstack endpoint create --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region RegionOne image
 
        #apt-get  install glance python-glanceclient
 
        su -s /bin/sh -c "glance-manage db_sync" glance
 
        service glance-registry restart
 
        service glance-api restart
 
        rm -f /var/lib/glance/glance.sqlite
 
    </exec>
 
 
 
    <!-- Install Nova compute in controller -->
 
    <filetree seq="step6" root="/etc/nova/">conf/controller/nova/nova.conf</filetree>
 
    <exec seq="step6" type="verbatim">
 
        mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"
 
        source /root/bin/admin-openrc.sh
 
        openstack user create --password=xxxx nova
 
        openstack role add --project service --user nova admin
 
        openstack service create --name nova --description "OpenStack Compute" compute
 
        openstack endpoint create --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region RegionOne compute
 
 
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
 
        su -s /bin/sh -c "nova-manage db sync" nova
 
        service nova-api restart
 
        service nova-cert restart
 
        service nova-consoleauth restart
 
        service nova-scheduler restart
 
        service nova-conductor restart
 
        service nova-novncproxy restart
 
        rm -f /var/lib/nova/nova.sqlite
 
    </exec>
 
 
 
    <!-- Install Neutron in controller -->
 
    <filetree seq="step7" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree>
 
    <filetree seq="step7" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree>
 
    <exec seq="step7" type="verbatim">
 
        mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"
 
        mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"
 
        source /root/bin/admin-openrc.sh
 
        openstack user create --password=xxxx neutron
 
        openstack role add --project service --user neutron admin
 
        openstack service create --name neutron --description "OpenStack Networking" network
 
        openstack endpoint create --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region RegionOne network
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install neutron-server neutron-plugin-ml2 python-neutronclient
 
        su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
 
        service nova-api restart
 
        service neutron-server restart
 
    </exec>
 
 
 
    <exec seq="step9" type="verbatim">
 
        source /root/bin/admin-openrc.sh
 
        #neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
 
        #neutron subnet-create ext-net 192.168.0.0/24 --name ext-subnet --allocation-pool start=192.168.0.200,end=192.168.0.250 --disable-dhcp --gateway 192.168.0.1
 
        #neutron net-create demo-net
 
        #neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet --gateway 192.168.1.1
 
        #neutron router-create demo-router
 
        #neutron router-interface-add demo-router demo-subnet 
 
        #neutron router-gateway-set demo-router ext-net
 
 
 
        #neutron net-create int-net
 
        #neutron subnet-create int-net 192.168.100.0/24 --name int-subnet --gateway 192.168.100.1
 
        #neutron router-create int-router
 
        #neutron router-interface-add int-router int-subnet
 
        #neutron router-gateway-set int-router ext-net
 
    </exec>
 
 
 
    <!-- Install Dashboard -->
 
    <filetree seq="step10" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree>
 
    <exec seq="step10" type="verbatim">
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install openstack-dashboard
 
        service apache2 restart
 
    </exec>
 
 
 
    <exec seq="load-img" type="verbatim">
 
        source /root/bin/admin-openrc.sh
 
       
 
        # Cirros image 
 
        #wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
 
        wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.img
 
        glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.img --disk-format qcow2 --container-format bare --visibility public --progress
 
        rm /tmp/images/cirros-0.3.4-x86_64-disk*.img
 
       
 
        # Ubuntu image
 
        #wget -P /tmp/images http://138.4.7.228/download/cnvr/ostack-images/trusty-server-cloudimg-amd64-disk1-cnvr.img
 
        #glance image-create --name "trusty-server-cloudimg-amd64-cnvr" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-cnvr.img --disk-format qcow2 --container-format bare --visibility public --progress
 
        #rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.img
 
 
 
        # CentOS image
 
        #wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
 
        #glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
 
        #rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2
 
    </exec>
 
 
 
    <exec seq="create-demo-instance" type="verbatim">
 
        source /root/bin/admin-openrc.sh
 
        nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=0290474c-fdd7-4b31-8e1e-021b3c04a470  --security-group default --key-name demo-key demo-instance1
 
    </exec>
 
 
 
    <exec seq="create-demo-scenario" type="verbatim">
 
        source /root/bin/admin-openrc.sh
 
 
 
        # Create internal network
 
        neutron net-create net0
 
        neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8
 
 
 
        # Create virtual machine
 
        mkdir tmp
 
        openstack keypair create vm1 > tmp/vm1
 
        openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0
 
 
 
        # Create external network
 
        neutron net-create ExtNet --provider:physical_network external --provider:network_type flat --router:external --shared
 
        neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24
 
        neutron router-create r0
 
        neutron router-gateway-set r0 ExtNet
 
        neutron router-interface-add r0 subnet0
 
 
 
        # Assign floating IP address to vm
 
        openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1

        # Create security group rules to allow ICMP, SSH and WWW access
        openstack security group rule create --proto icmp --dst-port 0  default
        openstack security group rule create --proto tcp  --dst-port 80 default
        openstack security group rule create --proto tcp  --dst-port 22 default
 
 
 
    </exec>
 
 
 
  </vm>
 
 
 
  <vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64">
 
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem>
 
    <mem>512M</mem>
 
    <if id="1" net="MgmtNet">
 
      <ipv4>10.0.0.21/24</ipv4>
 
    </if>
 
    <if id="2" net="TunnNet">
 
      <ipv4>10.0.1.21/24</ipv4>
 
    </if>
 
    <if id="3" net="ExtNet">
 
      <ipv4>10.0.10.2/24</ipv4>
 
    </if>
 
    <if id="9" net="virbr0">
 
      <ipv4>dhcp</ipv4>
 
    </if>
 
    <forwarding type="ip" />
 
    <forwarding type="ipv6" />
 
 
 
    <!-- Copy /etc/hosts file -->
 
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        cat /root/hosts >> /etc/hosts;
 
        rm /root/hosts;
 
    </exec>
 
 
 
    <!-- Copy ntp config and restart service -->
 
    <filetree seq="on_boot" root="/etc/">conf/ntp/ntp.conf</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        service ntp restart
 
        ifconfig eth3 0.0.0.0
 
        ifconfig br-ex 10.0.10.2/24
 
    </exec>
 
 
 
    <!--exec seq="step1" type="verbatim">
 
    apt-get -y install ubuntu-cloud-keyring
 
    echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
 
    apt-get update
 
    apt-get -y  dist-upgrade
 
    </exec-->
 
 
 
    <!-- Install and configure Neutron -->
 
    <filetree seq="step8" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree>
 
    <filetree seq="step8" root="/etc/neutron/plugins/ml2/">conf/network/neutron/ml2_conf.ini</filetree>
 
    <filetree seq="step8" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree>
 
    <filetree seq="step8" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree>
 
    <filetree seq="step8" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree>
 
    <filetree seq="step8" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree>
 
    <exec seq="step8" type="verbatim">
 
        #sed -i -e '/net\.ipv4\.ip_forward/d' /etc/sysctl.conf
 
        #sed -i -e '/net\.ipv4\.conf\.all\.rp_filter/d' /etc/sysctl.conf
 
        #sed -i -e '/net\.ipv4\.conf\.default\.rp_filter/d' /etc/sysctl.conf
 
        #echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
 
        #echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
 
        #echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
 
        #sysctl -p
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
 
        service openvswitch-switch restart
 
        ovs-vsctl add-br br-ex
 
        ovs-vsctl add-port br-ex eth3
 
        service neutron-plugin-openvswitch-agent restart
 
        service neutron-l3-agent restart
 
        service neutron-dhcp-agent restart
 
        service neutron-metadata-agent restart
 
    </exec>
 
 
 
  </vm>
 
 
 
  <vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
 
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem>
 
    <mem>2G</mem>
 
    <if id="1" net="MgmtNet">
 
      <ipv4>10.0.0.31/24</ipv4>
 
    </if>
 
    <if id="2" net="TunnNet">
 
      <ipv4>10.0.1.31/24</ipv4>
 
    </if>
 
    <if id="9" net="virbr0">
 
      <ipv4>dhcp</ipv4>
 
    </if>
 
 
 
    <!-- Copy /etc/hosts file -->
 
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        cat /root/hosts >> /etc/hosts;
 
        rm /root/hosts;
 
    </exec>
 
 
 
    <!-- Copy ntp config and restart service -->
 
    <filetree seq="on_boot" root="/etc/">conf/ntp/ntp.conf</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        service ntp restart
 
    </exec>
 
 
 
    <!--exec seq="step1" type="verbatim">
 
        apt-get -y install ubuntu-cloud-keyring
 
        echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
 
        apt-get update
 
        apt-get -y dist-upgrade
 
    </exec-->
 
 
 
    <!-- Install Nova  -->
 
    <filetree seq="step6" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree>
 
    <filetree seq="step6" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree>
 
    <exec seq="step6" type="verbatim">
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install nova-compute sysfsutils
 
        service nova-compute restart
 
        rm -f /var/lib/nova/nova.sqlite
 
    </exec>
 
 
 
    <filetree seq="step8" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree>
 
    <filetree seq="step8" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree>
 
    <exec seq="step8" type="verbatim">
 
        sed -i -e '/net\.ipv4\.conf\.all\.rp_filter/d' /etc/sysctl.conf
 
        sed -i -e '/net\.ipv4\.conf\.default\.rp_filter/d' /etc/sysctl.conf
 
        sed -i -e '/net\.bridge\.bridge-nf-call-iptables/d' /etc/sysctl.conf
 
        sed -i -e '/net\.bridge\.bridge-nf-call-ip6tables/d' /etc/sysctl.conf
 
        echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
 
        echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
 
        echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
 
        echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.conf
 
        sysctl -p
 
 
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install neutron-plugin-ml2 neutron-plugin-openvswitch-agent
 
        service openvswitch-switch restart
 
        service nova-compute restart
 
        service neutron-plugin-openvswitch-agent restart
 
    </exec>
 
 
 
  </vm>
 
 
 
  <vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2">
 
    <filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem>
 
    <mem>2G</mem>
 
    <if id="1" net="MgmtNet">
 
      <ipv4>10.0.0.32/24</ipv4>
 
    </if>
 
    <if id="2" net="TunnNet">
 
      <ipv4>10.0.1.32/24</ipv4>
 
    </if>
 
    <if id="9" net="virbr0">
 
      <ipv4>dhcp</ipv4>
 
    </if>
 
 
 
    <!-- Copy /etc/hosts file -->
 
    <filetree seq="on_boot" root="/root/">conf/hosts</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        cat /root/hosts >> /etc/hosts;
 
        rm /root/hosts;
 
    </exec>
 
 
 
    <!-- Copy ntp config and restart service -->
 
    <filetree seq="on_boot" root="/etc/">conf/ntp/ntp.conf</filetree>
 
    <exec seq="on_boot" type="verbatim">
 
        service ntp restart
 
    </exec>
 
 
 
    <!--exec seq="step1" type="verbatim">
 
        apt-get -y install ubuntu-cloud-keyring
 
        echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
 
        apt-get update
 
        apt-get -y dist-upgrade
 
    </exec-->
 
 
 
    <!-- Install Nova  -->
 
    <filetree seq="step6" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree>
 
    <filetree seq="step6" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree>
 
    <exec seq="step6" type="verbatim">
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install nova-compute sysfsutils
 
        service nova-compute restart
 
        #rm -f /var/lib/nova/nova.sqlite
 
    </exec>
 
 
 
    <filetree seq="step8" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree>
 
    <filetree seq="step8" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree>
 
    <exec seq="step8" type="verbatim">
 
        #sed -i -e '/net\.ipv4\.conf\.all\.rp_filter/d' /etc/sysctl.conf
 
        #sed -i -e '/net\.ipv4\.conf\.default\.rp_filter/d' /etc/sysctl.conf
 
        #sed -i -e '/net\.bridge\.bridge-nf-call-iptables/d' /etc/sysctl.conf
 
        #sed -i -e '/net\.bridge\.bridge-nf-call-ip6tables/d' /etc/sysctl.conf
 
        #echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
 
        #echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
 
        #echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
 
        #echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.conf
 
        #sysctl -p
 
 
 
        #apt-get -y -o Dpkg::Options::="--force-confold" install neutron-plugin-ml2 neutron-plugin-openvswitch-agent
 
        service openvswitch-switch restart
 
        service nova-compute restart
 
        service neutron-plugin-openvswitch-agent restart
 
    </exec>
 
 
 
  </vm>
 
 
 
 
 
  <host>
 
    <hostif net="ExtNet">
 
      <ipv4>10.0.10.1/24</ipv4>
 
    </hostif>
 
    <hostif net="MgmtNet">
 
      <ipv4>10.0.0.1/24</ipv4>
 
    </hostif>
 
  </host>
 
 
 
</vnx>
 
</pre>
 
