http://web.dit.upm.es/vnxwiki/api.php?action=feedcontributions&user=David&feedformat=atom
VNX - User contributions [en]
2024-03-19T13:21:18Z
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsubuntu&diff=2673
Vnx-rootfsubuntu
2023-12-09T00:41:18Z
<div>{{Title|How to create a KVM Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a KVM Ubuntu based root filesystem for VNX. The procedure has been tested with Ubuntu 9.10, 10.04, 10.10, 11.04, 12.04, 13.04, 13.10, 14.04, 14.10, 15.04, 15.10 and 16.04.<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_ubuntu.qcow2 20G<br />
<li>Get Ubuntu installation CD. For example:</li><br />
wget ftp://ftp.rediris.es/mirror/ubuntu-releases/16.04/ubuntu-16.04-server-i386.iso<br />
cp ubuntu-16.04-server-i386.iso /almacen/iso<br />
Note: use 'server' or 'desktop' CD versions depending on the system you want to create.<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --install-media /almacen/iso/ubuntu-16.04-server-i386.iso --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
<li>Follow Ubuntu installation menus to install a basic system with ssh server.</li><br />
<li>Configure a serial console on ttyS0 (skip this step for 15.04 or later releases):</li><br />
cd /etc/init<br />
cp tty1.conf ttyS0.conf<br />
sed -i -e 's/tty1/ttyS0/' ttyS0.conf<br />
<li>Activate startup traces on the serial console by editing the /etc/default/grub file and setting the GRUB_CMDLINE_LINUX_DEFAULT variable to "console=ttyS0". Also change the boot menu timeout to 0 (virtual machines sometimes get stuck at the boot menu when starting on heavily loaded systems):</li><br />
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
<!--li>Only for Ubuntu 15.10 or later releases (do not include it in 20.04, it conflicts with udev):</li><br />
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"<br />
--><br />
<li>Make grub process the previous changes:</li><br />
update-grub<br />
<li>Add a timeout to systemd-networkd-wait-online service to avoid long waits at startup. Edit /lib/systemd/system/systemd-networkd-wait-online.service and change the following line:</li><br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
<li>Finally, delete the net udev rules file and halt the system:</li><br />
rm /etc/udev/rules.d/70-persistent-net.rules<br />
halt -p<br />
</ul><br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Restart the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
Note: ignore the "timeout waiting for response on VM socket" errors. 768M of memory are needed if you are installing a root filesystem with a desktop interface.<br />
<li>Access the system through the text console to ease copy-pasting commands:</li><br />
virsh console vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Access the console and become root:</li><br />
sudo su<br />
<li>Update the system</li><br />
apt-get update<br />
apt-get dist-upgrade<br />
<li>Install XML::DOM perl package and ACPI daemon:</li><br />
apt-get install libxml-libxml-perl libnetaddr-ip-perl acpid<br />
<li>For 17.10 or newer, install ifupdown:</li><br />
apt-get install ifupdown<br />
<!--li>Only for Ubuntu 10.04:</li><br />
<ul><br />
<li>create /media/cdrom* directories:</li><br />
mkdir /media/cdrom0<br />
mkdir /media/cdrom1<br />
ln -s /media/cdrom0 /media/cdrom<br />
ln -s /cdrom /media/cdrom<br />
<li>add the following lines to /etc/fstab:</li><br />
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0<br />
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto,exec,utf8 0 0<br />
</ul--><br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
Change 'sdb' to 'vdb' if virtio drivers are being used.<br />
<li>Edit the /etc/network/interfaces file and comment out all lines related to the eth0, eth1, etc. interfaces. Leave only the loopback (lo) interface.</li><br />
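For reference, a minimal /etc/network/interfaces after this change could look roughly like this (assuming no other stanzas are needed; VNX configures the ethX interfaces at scenario startup):<br />
<pre><br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# eth0, eth1, ... lines commented out<br />
#auto eth0<br />
#iface eth0 inet dhcp<br />
</pre><br />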
<li>For Ubuntu 22.04, uninstall cloud-init:</li><br />
apt remove --purge cloud-init cloud-initramfs-copymods cloud-initramfs-dyn-netconf cloud-guest-utils netplan.io<br />
<li>Reduce networkd-wait timeout. Edit the systemd-networkd-wait-online.service:</li> <br />
sudo vim /etc/systemd/system/network-online.target.wants/systemd-networkd-wait-online.service<br />
And in the [Service] block, add the following line:<br />
TimeoutStartSec=5sec<br />
<li>Optional: install graphical user interface.</li><br />
<ul><br />
<li>Minimal:</li><br />
# recommended option<br />
sudo apt-get install lubuntu-desktop<br />
<br />
# old recipe not tested in later versions<br />
sudo apt-get install xorg gnome-core gksu gdm gnome-system-tools gnome-nettool firefox-gnome-support<br />
<li>Complete:</li><br />
sudo apt-get install ubuntu-desktop<br />
Note: to avoid nautilus being launched every time you remotely execute a command on the virtual machine using VNX (which interferes with the normal execution of commands), you should disable the automatic start of programs on media insertion. Go to "System settings->System->Details->Removable Media" and select the checkbox "Never prompt or start programs on media insertion".<br />
<!--<br />
nautilus automount feature. Just execute gconf-editor and create a variable "/apps/nautilus/preferences/media_automount" and set it to 0. <br />
This does not seem to work:<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount" "false"<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount_open" "false"<br />
--><br />
</ul><br />
<li>Optional: install other services:</li><br />
<ul><br />
<li>Apache server:</li><br />
sudo apt-get install apache2<br />
update-rc.d -f apache2 remove # to avoid automatic start in old versions<br />
systemctl disable apache2.service # to avoid automatic start in new versions<br />
<br />
<li>Other tools</li><br />
sudo apt-get install traceroute<br />
sudo apt-get install xterm # needed to have the 'resize' tool to resize consoles <br />
</ul><br />
<br />
<li>Create a file /etc/vnx_rootfs_version to store the version number and information about modifications:</li><br />
<pre><br />
VER=v0.25<br />
OS=Ubuntu 16.04 32 bits<br />
DESC=Basic Ubuntu 16.04 root filesystem without GUI<br />
</pre><br />
<br />
<li>Zero the image empty space to allow reducing the size of the image:</li><br />
dd if=/dev/zero of=/mytempfile<br />
rm -f /mytempfile<br />
<br />
<li>Stop the machine with vnx_halt:</li><br />
sudo vnx_halt<br />
<br />
<li>Reduce the size of the image:</li><br />
mv vnx_rootfs_kvm_ubuntu.qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak<br />
qemu-img convert -O qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak vnx_rootfs_kvm_ubuntu.qcow2<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_ubuntu.xml scenario distributed with VNX.<br />
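For example, assuming the VNX examples are installed under /usr/share/vnx/examples (the path may differ on your installation), a quick test could be:<br />
sudo vnx -f /usr/share/vnx/examples/simple_ubuntu.xml -v --create<br />
# ... interact with the virtual machine console ...<br />
sudo vnx -f /usr/share/vnx/examples/simple_ubuntu.xml -v --destroy<br />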
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to:<br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient eth0<br />
Note: use "ip link show" to know which network interface to use.<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
==== Examples ====<br />
<br />
<ul><br />
<li>dhcp server and relay:</li><br />
<ul><br />
<li>Install the DHCP server and relay packages (dhcp3-* in older Ubuntu releases, isc-dhcp-* in newer ones); a minimal example server configuration is shown after this list:</li><br />
apt-get install isc-dhcp-server isc-dhcp-relay<br />
<li>Disable autostart (optional):</li><br />
update-rc.d -f isc-dhcp-server remove<br />
update-rc.d -f isc-dhcp-relay remove<br />
</ul><br />
<br />
<br />
</ul><br />
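A minimal example of the DHCP server configuration mentioned in the previous item (/etc/dhcp/dhcpd.conf on recent releases); the subnet and addresses are placeholders to adapt to your scenario:<br />
<pre><br />
subnet 10.0.0.0 netmask 255.255.255.0 {<br />
  range 10.0.0.100 10.0.0.200;<br />
  option routers 10.0.0.1;<br />
  option domain-name-servers 8.8.8.8;<br />
}<br />
</pre><br />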
<br />
== Updating VNXACED ==<br />
<br />
You can automatically update the VNXACE daemon with the following command:<br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced -y<br />
If the VNXACE daemon is not updated automatically, you can do it manually by accessing the virtual machine console and typing:<br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
<br />
== Known problems ==<br />
<br />
<ul><br />
<li>Sometimes after restarting, the virtual machines stop at the grub menu and do not start until you manually choose an option. To avoid it, follow the instructions here: http://www.linuxquestions.org/questions/linux-server-73/how-to-disable-grub-2-menu-even-after-server-crash-796562/. Beware that the changes you make to the grub.cfg file are lost after executing the "update-grub" command (see the persistent alternative below).<br />
</li><br />
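A persistent alternative, already used in the installation steps above, is to set the timeouts in /etc/default/grub and regenerate the configuration:<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
update-grub<br />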
<li>In Ubuntu 12.04 Desktop, execution of graphical commands does not work. Command execution fails with "ERROR: no user logged on display :0.0" (see /var/log/vnxaced.log). If you just open a "terminal" window, commands work correctly (it does not work if you open other applications; only when you start a terminal).</li><br />
<li>Each time a cdrom is mounted (for example, whenever a command is executed on the virtual machine) the following error appears in the console:</li><br />
<pre><br />
Jul 27 22:33:31 vnx kernel: [ 4384.875886] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6<br />
Jul 27 22:33:31 vnx kernel: [ 4385.291374] ata1.01: BMDMA stat 0x5<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493411] sr 0:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 18 00 00 01 00<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493460] ata1.01: cmd a0/01:00:00:00:08/00:00:00:00:00/b0 tag 0 dma 2048 in<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493461] res 01/60:00:00:00:08/00:00:00:00:00/b0 Emask 0x3 (HSM violation)<br />
Jul 27 22:33:31 vnx kernel: [ 4386.263553] ata1.01: status: { ERR }<br />
</pre><br />
Despite the error trace, the commands are executed correctly. This error does not appear on Ubuntu 9.10 filesystems.<br />
<br />
</ul></div>
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsopenbsd&diff=2672
Vnx-rootfsopenbsd
2023-12-02T12:47:53Z
<div>{{Title|How to create a KVM OpenBSD root filesystem for VNX}}<br />
<br />
Follow this procedure to create a KVM OpenBSD based root filesystem for VNX. The procedure has been tested with OpenBSD 7.2; the commands below use the 7.4 release as an example.<br />
<br />
== Basic installation ==<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_openbsd64-7.4.qcow2 20G<br />
<li>Get OpenBSD installation CD. For example:</li><br />
wget http://ftp.eu.openbsd.org/pub/OpenBSD/7.4/amd64/install74.iso<br />
mv install74.iso /almacen/iso/openbsd-install74.iso<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_openbsd64-7.4.qcow2 --install-media /almacen/iso/openbsd-install74.iso --mem 2G --arch=x86_64 --vcpu 2<br />
<li>Follow OpenBSD installation menus to install a basic system:</li><br />
<ul><br />
<li>When asked about the network interface, answer "done" to not configure the network now.</li><br />
<li>Answer 'yes' to the question "Change the default console to com0" to enable serial console.</li><br />
<li>Add a user named "vnx".</li><br />
<li>Use the whole "wd0" or "sd0" disk and "Auto layout".</li><br />
<li>Choose cd0 for the "location of sets". Choose the default "sets".</li><br />
</ul><br />
<li>After finishing the installation, but before shutting down the virtual machine, you have to disable mpbios, as follows:</li><br />
chroot /mnt<br />
config -ef /bsd<br />
disable mpbios<br />
quit<br />
<li>Finally, halt the system:</li><br />
halt -p<br />
</ul><br />
<br />
The OS installer will offer to reboot, but do not do that. Instead, close the VM console window and then, from the host OS, destroy the virtual machine:<br />
# virsh list<br />
Id Name State<br />
----------------------------------------------------<br />
33 vnx_rootfs_kvm_openbsd64-7.4.qcow2-7440 running<br />
<br />
# virsh destroy 33<br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Start the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64-7.4.qcow2 --update-aced --mem 2G --arch x86_64 --vcpu 2<br />
Note: ignore the errors "timeout waiting for response on VM socket".<br />
<li>Access the system through the text console to ease copy-pasting commands:</li><br />
# virsh list<br />
Id Name State<br />
----------------------------------------------------<br />
31 vnx_rootfs_kvm_openbsd64-7.4.qcow2-912 running<br />
<br />
# virsh console 31<br />
<li>In case you do not have access to the serial console, you can configure it manually by editing /etc/ttys file and changing the line:</li><br />
tty00 "/usr/libexec/getty std.9600" dialup off secure<br />
to:<br />
tty00 "/usr/libexec/getty std.9600" vt100 on secure<br />
Reboot the system after modifying the ttys file.<br />
<li>Log in as root in the console and configure the network with DHCP:</li><br />
dhclient re0<br />
<li>Configure the environment variable with network repository:</li><br />
export PKG_PATH=http://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/<br />
<li>Install bash, set it as the default shell, and make the package repository setting persistent (change ftp.eu.openbsd.org to your nearest mirror):</li><br />
pkg_add -r bash <br />
usermod -s /usr/local/bin/bash root<br />
usermod -s /usr/local/bin/bash vnx<br />
echo "export PKG_PATH=http://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/" > ~/.bash_profile<br />
Note: if pkg_add does not make progress, try using another OpenBSD mirror in the PKG_PATH variable.<br />
<li>Install XML::LibXML and NetAddr-IP perl libraries:</li><br />
pkg_add -r p5-XML-LibXML p5-NetAddr-IP <br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount_msdos /dev/wd1i /mnt # if virtio=no in vnx.conf<br />
mount_msdos /dev/sd1c /mnt # if virtio=yes in vnx.conf<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
sed -i 's#bin/sh#bin/ksh#' /etc/rc.d/vnxaced<br />
<li>Create a file /etc/vnx_rootfs_version to store the version number and information about modifications:</li><br />
VER=v0.25<br />
OS=OpenBSD 7.4<br />
DESC=Basic OpenBSD 7.4 root filesystem without GUI<br />
<li>Configure interface em0 so that it does not get configured with DHCP. To do that, if file /etc/hostname.em0 exists, edit it and delete or comment the line with "dhcp".</li><br />
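For example, the resulting /etc/hostname.em0 could simply contain the dhcp line commented out (any other lines are left untouched):<br />
#dhcp<br />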
<li>Stop the machine with the vnx_halt script:</li><br />
vnx_halt<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_openbsd64.xml scenario distributed with VNX.<br />
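For example, assuming the examples are installed under /usr/share/vnx/examples (adapt the path to your installation):<br />
sudo vnx -f /usr/share/vnx/examples/simple_openbsd64.xml -v --create<br />
sudo vnx -f /usr/share/vnx/examples/simple_openbsd64.xml -v --destroy<br />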
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to: <br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64-7.4.qcow2 --arch x86_64<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient re0<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
== Known problems ==<br />
<br />
== OpenBSD tips ==<br />
<br />
To upgrade OpenBSD to the next release, the OpenBSD site provides useful hints. For instance, to upgrade from 7.1 to 7.2, you can follow the instructions provided in http://www.openbsd.org/faq/upgrade72.html</div>
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack&diff=2668
Vnx-labo-openstack
2023-09-18T08:42:12Z
<div>{{Title|VNX Openstack laboratories}}<br />
<br />
This is a set of Openstack tutorial scenarios designed to experiment with the [http://openstack.org Openstack] free and open-source cloud-computing software platform.<br />
<br />
Several tutorial scenarios are available covering the Antelope, Stein, Ocata, Mitaka, Liberty and Kilo Openstack versions and several deployment configurations:<br />
<br />
<ul><br />
<li>'''Openstack Antelope:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-antelope|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Antelope (2023.1) made of four virtual machines: a controller, a network node and two compute nodes all based on LXC. </li><br />
</ul><br />
<br />
<li>'''Openstack Stein:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-stein|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Stein (April 2019) made of four virtual machines: a controller, a network node and two compute nodes all based on LXC. </li><br />
</ul><br />
<br />
<li>'''Openstack Ocata:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-ocata|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Ocata made of four virtual machines: a controller, a network node and two compute nodes all based on LXC. The deployment scenario used is [http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html Classic with Open vSwitch]</li><br />
</ul><br />
<br />
<li>'''Openstack Mitaka:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Mitaka made of four virtual machines: a controller based on LXC and a network and two compute nodes based on KVM. The deployment scenario used is [http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html Classic with Open vSwitch]</li><br />
</ul><br />
<br />
<li>'''Openstack Liberty:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-3nodes-basic-liberty|Liberty 3-nodes-basic]]. A basic scenario using Openstack Liberty made of three KVM virtual machines: a controller with networking capabilities and two compute nodes.</li><br />
<li>[[Vnx-labo-openstack-4nodes-basic-liberty|Liberty 4-nodes-legacy-openvswitch]]: a basic scenario using Openstack Liberty made of four virtual machines: a controller based on LXC and a network and two compute nodes based on KVM. The deployment scenario used is [http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html Legacy with Open vSwitch]</li><br />
</ul><br />
<br />
<li>'''Openstack Kilo:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-basic-kilo|Kilo 4-nodes-basic]]. A basic scenario using Openstack Kilo made of four virtual machines: a controller based on LXC and a network node and two compute nodes based on KVM.</li><br />
</ul><br />
<br />
</ul></div>
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-antelope&diff=2667
Vnx-labo-openstack-4nodes-classic-ovs-antelope
2023-09-18T08:40:39Z
<div>Being edited...<br />
<br />
{{Title|VNX Openstack Antelope four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with the Openstack free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Antelope (2023.1) over Ubuntu 22.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The configuration has been developed integrating into the VNX scenario all the installation and configuration commands described in [https://docs.openstack.org/2023.1/install/ Openstack Antelope installation recipes.]<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 20.04 or later recommended) with VNX software installed. At least 12GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machines images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-antelope_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-antelope_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example cirros and ubuntu images with:<br />
cd openstack_lab-antelope_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-antelope-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Skyline Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller dashboard page. For example:<br />
<br />
firefox 10.0.10.11/<br />
<br />
Note: the classic Horizon Dashboard is accessible on port 8080 (http://10.0.10.11:8080).<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the other way around (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
Besides, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24. You can consult the addresses from Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack (see the example rules below).<br />
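If you need those pings for testing, a sketch of the rules to add to the project's default security group (standard openstack CLI syntax; run with the demo project credentials loaded, and assuming the group is named "default"):<br />
openstack security group rule create --proto icmp default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />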
<br />
You can have a look at the virtual switch that supports the Openstack VLAN Network executing the following command in the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes added <br />
by going to "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not added yet to the list of Hypervisors in "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
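For reference, this command sequence presumably runs Nova's cell discovery on the controller; the equivalent manual command, executed on the controller node, would be:<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />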
<br />
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (change eth0 to the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_lab-antelope<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source<br />
software platform for cloud-computing. It is made of four LXC containers:<br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Antelope<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david.fernandez@upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution.<br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es)<br />
<br />
Copyright(C) 2023 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_lab-antelope</scenario_name><br />
<!--ssh_key>~/.ssh/id_rsa.pub</ssh_key--><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt><br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step00,step1,step2,step3,step3b,step4,step5,step54,step6</cmd-seq><br />
<cmd-seq seq="step1-8">step1-6,step8</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<cmd-seq seq="step10">step100,step101,step102</cmd-seq><br />
<cmd-seq seq="step11">step111,step112,step113</cmd-seq><br />
<cmd-seq seq="step12">step121,step122,step123,step124</cmd-seq><br />
<cmd-seq seq="step13">step130,step131</cmd-seq><br />
<!--cmd-seq seq="start-all-from-scratch">step1-8,step10,step12,step11</cmd-seq--><br />
<cmd-seq seq="start-all-from-scratch">step00,step1,step2,step3,step3b,step41,step51,step6,step8,step10,step121,step11</cmd-seq><br />
<cmd-seq seq="start-all">step01,step42,step43,step44,step52,step53,step54,step122,step123,step124,step999</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" mtu="1450"/><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<!--<br />
~~<br />
~~ C O N T R O L L E R N O D E<br />
~~<br />
--><br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<!--console id="0" display="yes"/--><br />
<br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<filetree seq="on_boot" root="/root/.ssh/">conf/controller/ssh/id_rsa</filetree><br />
<filetree seq="on_boot" root="/root/.ssh/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown -f horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Add an html redirection to openstack page from index.html<br />
echo '&lt;meta http-equiv="refresh" content="0; url=/horizon" /&gt;' > /var/www/html/index.html<br />
<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
sed -i '/^network/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa network >> /root/.ssh/known_hosts<br />
sed -i '/^compute1/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute1 >> /root/.ssh/known_hosts<br />
sed -i '/^compute2/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute2 >> /root/.ssh/known_hosts<br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<exec seq="step01" type="verbatim"><br />
sed -i '/^network/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa network >> /root/.ssh/known_hosts<br />
sed -i '/^compute1/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute1 >> /root/.ssh/known_hosts<br />
sed -i '/^compute2/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute2 >> /root/.ssh/known_hosts<br />
# Restart nova services<br />
systemctl restart nova-scheduler<br />
systemctl restart nova-api<br />
systemctl restart nova-conductor<br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
#systemctl restart memcached<br />
</exec><br />
<br />
<!--<br />
STEP 1: Basic services<br />
--><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/99-openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# mariadb<br />
systemctl enable mariadb<br />
systemctl start mariadb<br />
<br />
# rabbitmqctl<br />
systemctl enable rabbitmq-server<br />
systemctl start rabbitmq-server<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*"<br />
<br />
# memcached<br />
sed -i -e 's/-l 127.0.0.1/-l 10.0.0.11/' /etc/memcached.conf<br />
systemctl enable memcached<br />
systemctl start memcached<br />
<br />
# etcd<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
echo "Services status"<br />
echo "etcd " $( systemctl show -p SubState etcd )<br />
echo "mariadb " $( systemctl show -p SubState mariadb )<br />
echo "memcached " $( systemctl show -p SubState memcached )<br />
echo "rabbitmq-server " $( systemctl show -p SubState rabbitmq-server )<br />
</exec><br />
<br />
<!--<br />
STEP 2: Identity service<br />
--><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/octavia-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
systemctl restart apache2<br />
#rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!--<br />
STEP 3: Image service (Glance)<br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<!--filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree--><br />
<exec seq="step3" type="verbatim"><br />
systemctl enable glance-api<br />
systemctl start glance-api<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
systemctl restart glance-api<br />
</exec><br />
<br />
<!--<br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
systemctl restart apache2<br />
</exec><br />
<br />
<!--<br />
STEP 4: Compute service (Nova)<br />
--><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
# Enable and start services<br />
systemctl enable nova-api<br />
systemctl enable nova-scheduler<br />
systemctl enable nova-conductor<br />
systemctl enable nova-novncproxy<br />
systemctl start nova-api<br />
systemctl start nova-scheduler<br />
systemctl start nova-conductor<br />
systemctl start nova-novncproxy<br />
<br />
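# Create the Nova API, main and cell0 databases and grant access to the 'nova' user<br />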
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
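# Sync the API database, map cell0, create cell1 and sync the main Nova database<br />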
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
HOST=compute1<br />
i=5; while ! openstack host list | grep -q "$HOST"; do echo "$i - waiting for $HOST to be registered..."; i=$(( i - 1 )); if ((i == 0)); then echo "ERROR: timeout waiting for $HOST"; break; else sleep 5; fi; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
HOST=compute2<br />
i=5; while ! openstack host list | grep -q "$HOST"; do echo "$i - waiting for $HOST to be registered..."; i=$(( i - 1 )); if ((i == 0)); then echo "ERROR: timeout waiting for $HOST"; break; else sleep 5; fi; done<br />
</exec><br />
<exec seq="step44,discover-hosts" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
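# Register any compute nodes started since the last discovery in the cell database<br />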
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
openstack hypervisor list<br />
</exec><br />
<br />
<!--<br />
STEP 5: Network service (Neutron)<br />
--><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/metadata_agent.ini</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/lbaas_agent.ini</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
systemctl enable neutron-server<br />
systemctl restart neutron-server<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
# Installation based on recipe:<br />
# - Configure Neutron LBaaS (Load-Balancer-as-a-Service) V2 in www.server-world.info.<br />
#neutron-db-manage --subproject neutron-lbaas upgrade head<br />
#su -s /bin/bash neutron -c "neutron-db-manage --subproject neutron-lbaas --config-file /etc/neutron/neutron.conf upgrade head"<br />
<br />
# FwaaS v2<br />
# https://tinyurl.com/2qk7729b<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# Octavia Dashboard panels<br />
# Based on https://opendev.org/openstack/octavia-dashboard<br />
git clone -b stable/2023.1 https://opendev.org/openstack/octavia-dashboard.git<br />
cd octavia-dashboard/<br />
python setup.py sdist<br />
cp -a octavia_dashboard/enabled/_1482_project_load_balancer_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
pip3 install octavia-dashboard<br />
chmod +x manage.py<br />
./manage.py collectstatic --noinput<br />
./manage.py compress<br />
systemctl restart apache2<br />
<br />
systemctl restart nova-api<br />
systemctl restart neutron-server<br />
</exec><br />
<br />
<exec seq="step54" type="verbatim"><br />
# Create external network<br />
source /root/bin/admin-openrc.sh<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
</exec><br />
<br />
<!--<br />
STEP 6: Dashboard service<br />
--><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
# FWaaS Dashboard<br />
# https://docs.openstack.org/neutron-fwaas-dashboard/latest/doc-neutron-fwaas-dashboard.pdf<br />
git clone https://opendev.org/openstack/neutron-fwaas-dashboard<br />
cd neutron-fwaas-dashboard<br />
sudo pip install .<br />
cp neutron_fwaas_dashboard/enabled/_701* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
./manage.py compilemessages<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py compress --force<br />
<br />
systemctl enable apache2<br />
systemctl restart apache2<br />
</exec><br />
<br />
<!--<br />
STEP 7: Trove service<br />
--><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://vnx.dit.upm.es/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!--<br />
STEP 8: Heat service<br />
--><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
systemctl enable heat-api<br />
systemctl enable heat-api-cfn<br />
systemctl enable heat-engine<br />
systemctl restart heat-api<br />
systemctl restart heat-api-cfn<br />
systemctl restart heat-engine<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
#rm -f /var/lib/openstack-dashboard/secret_key<br />
systemctl restart apache2<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
#source /root/bin/demo-openrc.sh<br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!--<br />
STEP 9: Tacker service<br />
--><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# This fails with the error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!--<br />
STEP 10: Ceilometer service<br />
Based on https://www.server-world.info/en/note?os=Ubuntu_22.04&p=openstack_antelope4&f=8<br />
--><br />
<br />
<exec seq="step100" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
# moved to the rootfs creation script<br />
#apt-get -y install gnocchi-api gnocchi-metricd python3-gnocchiclient<br />
#apt-get -y install ceilometer-agent-central ceilometer-agent-notification<br />
</exec><br />
<br />
<filetree seq="step101" root="/etc/gnocchi/">conf/controller/gnocchi/gnocchi.conf</filetree><br />
<filetree seq="step101" root="/etc/gnocchi/">conf/controller/gnocchi/policy.json</filetree><br />
<exec seq="step101" type="verbatim"><br />
# Install Gnocchi<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --project service --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchi;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
chmod 640 /etc/gnocchi/gnocchi.conf<br />
chgrp gnocchi /etc/gnocchi/gnocchi.conf<br />
<br />
su -s /bin/bash gnocchi -c "gnocchi-upgrade"<br />
a2enmod wsgi<br />
a2ensite gnocchi-api<br />
systemctl restart gnocchi-metricd apache2<br />
systemctl enable gnocchi-metricd<br />
systemctl status gnocchi-metricd<br />
export OS_AUTH_TYPE=password<br />
gnocchi resource list<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/pipeline.yaml</filetree--><br />
<!--filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree--><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Install Ceilometer<br />
source /root/bin/admin-openrc.sh<br />
# Ceilometer<br />
# Following https://tinyurl.com/22w6xgm4<br />
openstack user create --domain default --project service --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "OpenStack Telemetry Service" metering<br />
<br />
chmod 640 /etc/ceilometer/ceilometer.conf<br />
chgrp ceilometer /etc/ceilometer/ceilometer.conf<br />
su -s /bin/bash ceilometer -c "ceilometer-upgrade"<br />
systemctl restart ceilometer-agent-central ceilometer-agent-notification<br />
systemctl enable ceilometer-agent-central ceilometer-agent-notification<br />
<br />
#ceilometer-upgrade<br />
#systemctl restart ceilometer-agent-central<br />
#service restart ceilometer-agent-notification<br />
<br />
<br />
# Enable Glance service meters<br />
# https://tinyurl.com/274oe82n<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications transport_url rabbit://openstack:xxxx@controller<br />
systemctl restart glance-api<br />
openstack metric resource list<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart heat-api<br />
systemctl restart heat-api-cfn<br />
systemctl restart heat-engine<br />
<br />
</exec><br />
<br />
<!-- STEP 11: SKYLINE --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step111" type="verbatim"><br />
#pip3 install skyline-apiserver<br />
#apt-get -y install npm python-is-python3 nginx<br />
#npm install -g yarn<br />
</exec><br />
<br />
<filetree seq="step112" root="/etc/systemd/system/">conf/controller/skyline/skyline-apiserver.service</filetree><br />
<exec seq="step112" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --project service --password xxxx skyline<br />
openstack role add --project service --user skyline admin<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE skyline;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
#groupadd -g 64080 skyline<br />
#useradd -u 64080 -g skyline -d /var/lib/skyline -s /sbin/nologin skyline<br />
pip3 install skyline-apiserver<br />
#mkdir -p /etc/skyline /var/lib/skyline /var/log/skyline<br />
mkdir -p /etc/skyline /var/log/skyline<br />
#chmod 750 /etc/skyline /var/lib/skyline /var/log/skyline<br />
cd /root<br />
git clone -b stable/2023.1 https://opendev.org/openstack/skyline-apiserver.git<br />
#cp ./skyline-apiserver/etc/gunicorn.py /etc/skyline/gunicorn.py<br />
#cp ./skyline-apiserver/etc/skyline.yaml.sample /etc/skyline/skyline.yaml<br />
</exec><br />
<br />
<filetree seq="step113" root="/etc/skyline/">conf/controller/skyline/gunicorn.py</filetree><br />
<filetree seq="step113" root="/etc/skyline/">conf/controller/skyline/skyline.yaml</filetree><br />
<filetree seq="step113" root="/etc/systemd/system/">conf/controller/skyline/skyline-apiserver.service</filetree><br />
<exec seq="step113" type="verbatim"><br />
cd /root/skyline-apiserver<br />
make db_sync<br />
cd ..<br />
#chown -R skyline. /etc/skyline /var/lib/skyline /var/log/skyline<br />
systemctl daemon-reload<br />
systemctl enable --now skyline-apiserver<br />
apt-get -y install npm python-is-python3 nginx<br />
rm -rf /usr/local/lib/node_modules/yarn/<br />
npm install -g yarn<br />
git clone -b stable/2023.1 https://opendev.org/openstack/skyline-console.git<br />
cd ./skyline-console<br />
make package<br />
pip3 install --force-reinstall ./dist/skyline_console-*.whl<br />
cd ..<br />
skyline-nginx-generator -o /etc/nginx/nginx.conf<br />
sudo sed -i "s/server .* fail_timeout=0;/server 0.0.0.0:28000 fail_timeout=0;/g" /etc/nginx/nginx.conf<br />
sudo systemctl restart skyline-apiserver.service<br />
sudo systemctl enable nginx.service<br />
sudo systemctl restart nginx.service<br />
</exec><br />
<br />
<!-- STEP 12: LOAD BALANCER OCTAVIA --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step121" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE octavia;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
#openstack user create --domain default --project service --password xxxx octavia<br />
openstack user create --domain default --password xxxx octavia<br />
openstack role add --project service --user octavia admin<br />
openstack service create --name octavia --description "OpenStack LBaaS" load-balancer<br />
export octavia_api=network<br />
openstack endpoint create --region RegionOne load-balancer public http://$octavia_api:9876<br />
openstack endpoint create --region RegionOne load-balancer internal http://$octavia_api:9876<br />
openstack endpoint create --region RegionOne load-balancer admin http://$octavia_api:9876<br />
<br />
source /root/bin/octavia-openrc.sh<br />
# Load Balancer (Octavia)<br />
#openstack flavor show m1.octavia >/dev/null 2>&amp;1 || openstack flavor create --id 100 --vcpus 1 --ram 1024 --disk 5 m1.octavia --private --project service<br />
openstack flavor show amphora >/dev/null 2>&amp;1 || openstack flavor create --id 200 --vcpus 1 --ram 1024 --disk 5 amphora --private<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/ubuntu-amphora-haproxy-amd64.qcow2<br />
#openstack image create "Amphora" --tag "Amphora" --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 --disk-format qcow2 --container-format bare --private --project service<br />
openstack image create --disk-format qcow2 --container-format bare --private --tag amphora --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 amphora-x64-haproxy<br />
rm /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2<br />
<br />
</exec><br />
<br />
<!-- STEP 13: TELEMETRY ALARM SERVICE --><br />
<!-- See: https://docs.openstack.org/aodh/latest/install/install-ubuntu.html --><br />
<exec seq="step130" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y aodh-api aodh-evaluator aodh-notifier aodh-listener aodh-expirer python3-aodhclient<br />
</exec><br />
<br />
<filetree seq="step131" root="/etc/aodh/">conf/controller/aodh/aodh.conf</filetree><br />
<exec seq="step131" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE aodh;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx aodh<br />
openstack role add --project service --user aodh admin<br />
openstack service create --name aodh --description "Telemetry" alarming<br />
openstack endpoint create --region RegionOne alarming public http://controller:8042<br />
openstack endpoint create --region RegionOne alarming internal http://controller:8042<br />
openstack endpoint create --region RegionOne alarming admin http://controller:8042<br />
<br />
aodh-dbsync<br />
<br />
# aodh-api does not work when run under wsgi; it has to be started manually<br />
rm /etc/apache2/sites-enabled/aodh-api.conf<br />
systemctl restart apache2<br />
#service aodh-api restart<br />
nohup aodh-api --port 8042 -- --config-file /etc/aodh/aodh.conf &amp;<br />
systemctl restart aodh-evaluator<br />
systemctl restart aodh-notifier<br />
systemctl restart aodh-listener<br />
<br />
</exec><br />
<br />
<exec seq="step999" type="verbatim"><br />
# Change horizon port to 8080<br />
sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf<br />
sed -i 's/VirtualHost \*:80/VirtualHost *:8080/' /etc/apache2/sites-enabled/000-default.conf<br />
systemctl restart apache2<br />
# Change Skyline to port 80<br />
sed -i 's/0.0.0.0:9999/0.0.0.0:80/' /etc/nginx/nginx.conf<br />
systemctl restart nginx<br />
systemctl restart skyline-apiserver<br />
</exec><br />
<br />
<!--<br />
LOAD IMAGES TO GLANCE<br />
--><br />
<exec seq="load-img" type="verbatim"><br />
dhclient eth9 # just in case the Internet connection is not active...<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
#openstack flavor show m1.octavia >/dev/null 2>&amp;1 || openstack flavor create --id 100 --vcpus 1 --ram 1024 --disk 5 m1.octavia --private --project service<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (focal,20.04)<br />
rm -f /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/focal-server-cloudimg-amd64-vnx.qcow2<br />
openstack image create "focal-server-cloudimg-amd64-vnx" --file /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2 --disk-format qcow2 --container-format bare --public --progress<br />
rm /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2<br />
<br />
# Ubuntu image (jammy,22.04)<br />
rm -f /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
openstack image create "jammy-server-cloudimg-amd64-vnx" --file /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2 --disk-format qcow2 --container-format bare --public --progress<br />
rm /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
<br />
# CentOS-7<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
<br />
# Load Balancer (Octavia)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/ubuntu-amphora-haproxy-amd64.qcow2<br />
#openstack image create "Amphora" --tag "Amphora" --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 --disk-format qcow2 --container-format bare --private --project service<br />
#rm /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2<br />
<br />
</exec><br />
<br />
<!--<br />
CREATE DEMO SCENARIO<br />
--><br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
admin_project_id=$(openstack project show admin -c id -f value)<br />
default_secgroup_id=$(openstack security group list -f value | grep default | grep $admin_project_id | cut -d " " -f1)<br />
openstack security group rule create --proto icmp --dst-port 0 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 80 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 22 $default_secgroup_id<br />
<br />
# Create internal network<br />
openstack network create net0<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
openstack router create r0<br />
openstack router set r0 --external-gateway ExtNet<br />
openstack router add subnet r0 subnet0<br />
<br />
# Assign floating IP address to vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image focal-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm4" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm4 > /root/keys/vm4<br />
openstack server create --flavor m1.smaller --image jammy-server-cloudimg-amd64-vnx vm4 --nic net-id=net0 --key-name vm4 --property VAR1=2 --property VAR2=3<br />
# Assign floating IP address to vm4<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm4<br />
openstack server add floating ip vm4 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
admin_project_id=$(openstack project show admin -c id -f value)<br />
default_secgroup_id=$(openstack security group list -f value | grep default | grep $admin_project_id | cut -d " " -f1)<br />
openstack security group rule create --proto icmp --dst-port 0 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 80 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 22 $default_secgroup_id<br />
<br />
# Create vlan based networks and subnetworks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
<br />
</exec><br />
<br />
<!--<br />
VERIFY<br />
--><br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:5000/v3 \<br />
--os-project-domain-name Default --os-user-domain-name Default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor list"<br />
openstack hypervisor list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ N E T W O R K N O D E<br />
~~<br />
--><br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<mem>1G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/octavia-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet, TunnNet and VlanNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron_lbaas.conf</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
ovs-vsctl add-br br-provider<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
#service neutron-lbaasv2-agent restart<br />
#systemctl restart neutron-lbaasv2-agent<br />
#systemctl enable neutron-lbaasv2-agent<br />
#service openvswitch-switch restart<br />
<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable neutron-dhcp-agent<br />
systemctl enable neutron-metadata-agent<br />
systemctl enable neutron-l3-agent<br />
systemctl start neutron-openvswitch-agent<br />
systemctl start neutron-dhcp-agent<br />
systemctl start neutron-metadata-agent<br />
systemctl start neutron-l3-agent<br />
</exec><br />
<br />
<!-- STEP 12: LOAD BALANCER OCTAVIA --><br />
<!-- Official recipe in: https://github.com/openstack/octavia/blob/master/doc/source/install/install-ubuntu.rst --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step122" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#source /root/bin/admin-openrc.sh<br />
source /root/bin/octavia-openrc.sh<br />
#apt -y install octavia-api octavia-health-manager octavia-housekeeping octavia-worker python3-ovn-octavia-provider<br />
#apt -y install octavia-api octavia-health-manager octavia-housekeeping octavia-worker python3-octavia python3-octaviaclient<br />
mkdir -p /etc/octavia/certs/private<br />
sudo chmod 755 /etc/octavia -R<br />
mkdir ~/work<br />
cd ~/work<br />
git clone https://opendev.org/openstack/octavia.git<br />
cd octavia/bin<br />
sed -i 's/not-secure-passphrase/$1/' create_dual_intermediate_CA.sh<br />
source create_dual_intermediate_CA.sh 01234567890123456789012345678901<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca-chain.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca.key.pem /etc/octavia/certs/private<br />
#cp -p ./dual_ca/etc/octavia/certs/client_ca.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/client.cert-and-key.pem /etc/octavia/certs/private<br />
#chown -R octavia /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca-chain.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca.key.pem /etc/octavia/certs/private<br />
cp -p etc/octavia/certs/client_ca.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/client.cert-and-key.pem /etc/octavia/certs/private<br />
chown -R octavia.octavia /etc/octavia/certs<br />
</exec><br />
<br />
<filetree seq="step123" root="/etc/octavia/">conf/network/octavia/octavia.conf</filetree><br />
<filetree seq="step123" root="/etc/octavia/">conf/network/octavia/policy.yaml</filetree><br />
<exec seq="step123" type="verbatim"><br />
#chmod 640 /etc/octavia/{octavia.conf,policy.yaml}<br />
#chgrp octavia /etc/octavia/{octavia.conf,policy.yaml}<br />
#su -s /bin/bash octavia -c "octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head"<br />
#systemctl restart octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
#systemctl enable octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
<br />
#source /root/bin/admin-openrc.sh<br />
source /root/bin/octavia-openrc.sh<br />
#openstack security group create lb-mgmt-sec-group --project service<br />
#openstack security group rule create --protocol icmp --ingress lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 22:22 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 80:80 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 443:443 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 9443:9443 lb-mgmt-sec-group<br />
<br />
openstack security group create lb-mgmt-sec-grp<br />
openstack security group rule create --protocol icmp lb-mgmt-sec-grp<br />
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp<br />
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp<br />
openstack security group create lb-health-mgr-sec-grp<br />
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp<br />
<br />
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa<br />
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey<br />
<br />
mkdir -m755 -p /etc/dhcp/octavia<br />
cp ~/work/octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia<br />
</exec><br />
<br />
<br />
<br />
<exec seq="step124" type="verbatim"><br />
<br />
source /root/bin/octavia-openrc.sh<br />
<br />
OCTAVIA_MGMT_SUBNET=172.16.0.0/12<br />
OCTAVIA_MGMT_SUBNET_START=172.16.0.100<br />
OCTAVIA_MGMT_SUBNET_END=172.16.31.254<br />
OCTAVIA_MGMT_PORT_IP=172.16.0.2<br />
<br />
openstack network create lb-mgmt-net<br />
openstack subnet create --subnet-range $OCTAVIA_MGMT_SUBNET --allocation-pool \<br />
start=$OCTAVIA_MGMT_SUBNET_START,end=$OCTAVIA_MGMT_SUBNET_END \<br />
--network lb-mgmt-net lb-mgmt-subnet<br />
<br />
SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)<br />
PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"<br />
<br />
MGMT_PORT_ID=$(openstack port create --security-group \<br />
lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \<br />
--host=$(hostname) -c id -f value --network lb-mgmt-net \<br />
$PORT_FIXED_IP octavia-health-manager-listen-port)<br />
<br />
MGMT_PORT_MAC=$(openstack port show -c mac_address -f value \<br />
$MGMT_PORT_ID)<br />
<br />
#ip link add o-hm0 type veth peer name o-bhm0<br />
#ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \<br />
# set Interface o-hm0 type=internal -- \<br />
# set Interface o-hm0 external-ids:iface-status=active -- \<br />
# set Interface o-hm0 external-ids:attached-mac=fa:16:3e:51:e9:c3 -- \<br />
# set Interface o-hm0 external-ids:iface-id=6fb13c3f-469e-4a81-a504-a161c6848654 -- \<br />
# set Interface o-hm0 external-ids:skip_cleanup=true<br />
<br />
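# Plug the health-manager port into br-int as internal interface o-hm0, bound to the Neutron port created above<br />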
ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \<br />
set Interface o-hm0 type=internal -- \<br />
set Interface o-hm0 external-ids:iface-status=active -- \<br />
set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- \<br />
set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- \<br />
set Interface o-hm0 external-ids:skip_cleanup=true<br />
<br />
#NETID=$(openstack network show lb-mgmt-net -c id -f value)<br />
#BRNAME=brq$(echo $NETID|cut -c 1-11)<br />
#brctl addif $BRNAME o-bhm0<br />
ip link set o-bhm0 up<br />
<br />
ip link set dev o-hm0 address $MGMT_PORT_MAC<br />
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT<br />
dhclient -v o-hm0 -cf /etc/dhcp/octavia<br />
<br />
<br />
SECGRPID=$( openstack security group show lb-mgmt-sec-grp -c id -f value )<br />
LBMGMTNETID=$( openstack network show lb-mgmt-net -c id -f value )<br />
FLVRID=$( openstack flavor show amphora -c id -f value )<br />
#FLVRID=$( openstack flavor show m1.octavia -c id -f value )<br />
SERVICEPROJECTID=$( openstack project show service -c id -f value )<br />
<br />
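# Point octavia.conf at the amphora image, flavor, management network, security group and client CA certificate created in the previous steps<br />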
#crudini --set /etc/octavia/octavia.conf controller_worker amp_image_tag Amphora<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_image_owner_id $SERVICEPROJECTID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_image_tag amphora<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_ssh_key_name mykey<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_secgroup_list $SECGRPID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_boot_network_list $LBMGMTNETID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_flavor_id $FLVRID<br />
crudini --set /etc/octavia/octavia.conf controller_worker network_driver allowed_address_pairs_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker compute_driver compute_nova_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker amphora_driver amphora_haproxy_rest_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker client_ca /etc/octavia/certs/client_ca.cert.pem<br />
<br />
octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head<br />
systemctl restart octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ C O M P U T E 1 N O D E<br />
~~<br />
--><br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<mem>2G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device<br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
systemctl enable nova-compute<br />
systemctl start nova-compute<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
systemctl enable openvswitch-switch<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable libvirtd.service libvirt-guests.service<br />
systemctl enable nova-compute<br />
systemctl start openvswitch-switch<br />
systemctl start neutron-openvswitch-agent<br />
systemctl restart libvirtd.service libvirt-guests.service<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
#export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute1/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf notifications notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart ceilometer-agent-compute<br />
systemctl enable ceilometer-agent-compute<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~~<br />
~~~ C O M P U T E 2 N O D E<br />
~~~<br />
--><br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<mem>2G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device<br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
systemctl enable nova-compute<br />
systemctl start nova-compute<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
systemctl enable openvswitch-switch<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable libvirtd.service libvirt-guests.service<br />
systemctl enable nova-compute<br />
systemctl start openvswitch-switch<br />
systemctl start neutron-openvswitch-agent<br />
systemctl restart libvirtd.service libvirt-guests.service<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
#export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf notifications notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart ceilometer-agent-compute<br />
systemctl enable ceilometer-agent-compute<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ H O S T N O D E<br />
~~<br />
--><br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx><br />
</pre></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=File:Tutorial-openstack-antelope-4n-classic-ovs.png&diff=2666
File:Tutorial-openstack-antelope-4n-classic-ovs.png
2023-09-18T08:38:12Z
<p>David: </p>
<hr />
<div></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-antelope&diff=2665
Vnx-labo-openstack-4nodes-classic-ovs-antelope
2023-09-18T08:32:05Z
<p>David: </p>
<hr />
<div>Being edited...<br />
<br />
{{Title|VNX Openstack Antelope four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, a free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Antelope (2023.1) over Ubuntu 22.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The configuration has been developed integrating into the VNX scenario all the installation and configuration commands described in [https://docs.openstack.org/2023.1/install/ Openstack Antelope installation recipes.]<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 20.04 or later recommended) with VNX software installed. At least 12GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machines images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-antelope_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-antelope_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-antelope_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-antelope-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller Horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 (or the opposite) using the floating IP assigned to vm1, which you can check in the Dashboard (probably 10.0.10.102).<br />
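<br />
For example, assuming the Dashboard shows 10.0.10.102 as the floating IP and that vm1 uses the default Cirros image (user cirros), you could check connectivity from the host with something like:<br />
<br />
 ping -c 3 10.0.10.102<br />
 ssh cirros@10.0.10.102<br />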
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to communicate with VMs in other compute nodes or with the routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to communicate with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure. <br />
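<br />
For example, once the step53 sequence has configured the compute nodes, you can do a quick sanity check from the host that eth3 is attached to the provider bridge on compute1 (a minimal sketch, assuming the root/xxxx credentials used elsewhere in this scenario):<br />
<br />
 slogin root@compute1            # root/xxxx<br />
 ovs-vsctl list-ports br-vlan    # eth3 should appear among the ports<br />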
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (VLAN 1000), vmB2 (VLAN 1001) and vlan-router (connected to both VLANs). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24 and 10.1.3.0/24 respectively. You can consult the addresses from Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
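<br />
For example, from the vmA1 console (addresses assumed to be those listed above, with vmA1 having obtained an address in 10.1.2.0/24), you could check reachability of the external systems with something like:<br />
<br />
 ping -c 3 10.1.2.100    # vmA2, same VLAN (1000)<br />
 ping -c 3 10.1.3.100    # vmB2, reached through the vlan-router<br />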
<br />
Take into account that pings from the external virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
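<br />
If you need those inbound pings (or ssh) to work, a minimal sketch is to add ICMP and TCP/22 rules to the security group used by the instances, assuming they use the "default" group of the project whose credentials are loaded:<br />
<br />
 source bin/admin-openrc.sh<br />
 openstack security group rule create --protocol icmp default<br />
 openstack security group rule create --protocol tcp --dst-port 22 default<br />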
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes by going to the "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not yet added to the list of hypervisors in the "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
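<br />
To verify the result, you can list the registered hypervisors from the controller (or from the host, if the Openstack client is installed there); compute3 and compute4 should now appear:<br />
<br />
 slogin root@controller          # root/xxxx<br />
 source bin/admin-openrc.sh<br />
 openstack hypervisor list<br />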
<br />
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_lab-antelope<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source<br />
software platform for cloud-computing. It is made of four LXC containers:<br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Antelope<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david.fernandez@upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution.<br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es)<br />
<br />
Copyright(C) 2023 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_lab-antelope</scenario_name><br />
<!--ssh_key>~/.ssh/id_rsa.pub</ssh_key--><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt><br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step00,step1,step2,step3,step3b,step4,step5,step54,step6</cmd-seq><br />
<cmd-seq seq="step1-8">step1-6,step8</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<cmd-seq seq="step10">step100,step101,step102</cmd-seq><br />
<cmd-seq seq="step11">step111,step112,step113</cmd-seq><br />
<cmd-seq seq="step12">step121,step122,step123,step124</cmd-seq><br />
<cmd-seq seq="step13">step130,step131</cmd-seq><br />
<!--cmd-seq seq="start-all-from-scratch">step1-8,step10,step12,step11</cmd-seq--><br />
<cmd-seq seq="start-all-from-scratch">step00,step1,step2,step3,step3b,step41,step51,step6,step8,step10,step121,step11</cmd-seq><br />
<cmd-seq seq="start-all">step01,step42,step43,step44,step52,step53,step54,step122,step123,step124,step999</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" mtu="1450"/><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<!--<br />
~~<br />
~~ C O N T R O L L E R N O D E<br />
~~<br />
--><br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<!--console id="0" display="yes"/--><br />
<br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<filetree seq="on_boot" root="/root/.ssh/">conf/controller/ssh/id_rsa</filetree><br />
<filetree seq="on_boot" root="/root/.ssh/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown -f horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Add an html redirection to openstack page from index.html<br />
echo '&lt;meta http-equiv="refresh" content="0; url=/horizon" /&gt;' > /var/www/html/index.html<br />
<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
sed -i '/^network/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa network >> /root/.ssh/known_hosts<br />
sed -i '/^compute1/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute1 >> /root/.ssh/known_hosts<br />
sed -i '/^compute2/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute2 >> /root/.ssh/known_hosts<br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<exec seq="step01" type="verbatim"><br />
sed -i '/^network/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa network >> /root/.ssh/known_hosts<br />
sed -i '/^compute1/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute1 >> /root/.ssh/known_hosts<br />
sed -i '/^compute2/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute2 >> /root/.ssh/known_hosts<br />
# Restart nova services<br />
systemctl restart nova-scheduler<br />
systemctl restart nova-api<br />
systemctl restart nova-conductor<br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
#systemctl restart memcached<br />
</exec><br />
<br />
<!--<br />
STEP 1: Basic services<br />
--><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/99-openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# mariadb<br />
systemctl enable mariadb<br />
systemctl start mariadb<br />
<br />
# rabbitmqctl<br />
systemctl enable rabbitmq-server<br />
systemctl start rabbitmq-server<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*"<br />
<br />
# memcached<br />
sed -i -e 's/-l 127.0.0.1/-l 10.0.0.11/' /etc/memcached.conf<br />
systemctl enable memcached<br />
systemctl start memcached<br />
<br />
# etcd<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
echo "Services status"<br />
echo "etcd " $( systemctl show -p SubState etcd )<br />
echo "mariadb " $( systemctl show -p SubState mariadb )<br />
echo "memcached " $( systemctl show -p SubState memcached )<br />
echo "rabbitmq-server " $( systemctl show -p SubState rabbitmq-server )<br />
</exec><br />
<br />
<!--<br />
STEP 2: Identity service<br />
--><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/octavia-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
systemctl restart apache2<br />
#rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!--<br />
STEP 3: Image service (Glance)<br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<!--filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree--><br />
<exec seq="step3" type="verbatim"><br />
systemctl enable glance-api<br />
systemctl start glance-api<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
systemctl restart glance-api<br />
</exec><br />
<br />
<!--<br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
systemctl restart apache2<br />
</exec><br />
<br />
<!--<br />
STEP 4: Compute service (Nova)<br />
--><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
# Enable and start services<br />
systemctl enable nova-api<br />
systemctl enable nova-scheduler<br />
systemctl enable nova-conductor<br />
systemctl enable nova-novncproxy<br />
systemctl start nova-api<br />
systemctl start nova-scheduler<br />
systemctl start nova-conductor<br />
systemctl start nova-novncproxy<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
HOST=compute1<br />
i=5; while ! $( openstack host list | grep $HOST > /dev/null ); do echo "$i - waiting for $HOST to be registered..."; i=$(( i - 1 )); if ((i == 0)); then echo "ERROR: timeout waiting for $HOST"; break; else sleep 5; fi done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
HOST=compute2<br />
i=5; while ! $( openstack host list | grep $HOST > /dev/null ); do echo "$i - waiting for $HOST to be registered..."; i=$(( i - 1 )); if ((i == 0)); then echo "ERROR: timeout waiting for $HOST"; break; else sleep 5; fi done<br />
</exec><br />
<exec seq="step44,discover-hosts" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
openstack hypervisor list<br />
</exec><br />
<br />
<!--<br />
STEP 5: Network service (Neutron)<br />
--><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/metadata_agent.ini</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/lbaas_agent.ini</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
systemctl enable neutron-server<br />
systemctl restart neutron-server<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
# Installation based on recipe:<br />
# - Configure Neutron LBaaS (Load-Balancer-as-a-Service) V2 in www.server-world.info.<br />
#neutron-db-manage --subproject neutron-lbaas upgrade head<br />
#su -s /bin/bash neutron -c "neutron-db-manage --subproject neutron-lbaas --config-file /etc/neutron/neutron.conf upgrade head"<br />
<br />
# FwaaS v2<br />
# https://tinyurl.com/2qk7729b<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# Octavia Dashboard panels<br />
# Based on https://opendev.org/openstack/octavia-dashboard<br />
git clone -b stable/2023.1 https://opendev.org/openstack/octavia-dashboard.git<br />
cd octavia-dashboard/<br />
python setup.py sdist<br />
cp -a octavia_dashboard/enabled/_1482_project_load_balancer_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
pip3 install octavia-dashboard<br />
chmod +x manage.py<br />
./manage.py collectstatic --noinput<br />
./manage.py compress<br />
systemctl restart apache2<br />
<br />
systemctl restart nova-api<br />
systemctl restart neutron-server<br />
</exec><br />
<br />
<exec seq="step54" type="verbatim"><br />
# Create external network<br />
source /root/bin/admin-openrc.sh<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
</exec><br />
<br />
<!--<br />
STEP 6: Dashboard service<br />
--><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
# FWaaS Dashboard<br />
# https://docs.openstack.org/neutron-fwaas-dashboard/latest/doc-neutron-fwaas-dashboard.pdf<br />
git clone https://opendev.org/openstack/neutron-fwaas-dashboard<br />
cd neutron-fwaas-dashboard<br />
sudo pip install .<br />
cp neutron_fwaas_dashboard/enabled/_701* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
./manage.py compilemessages<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py compress --force<br />
<br />
systemctl enable apache2<br />
systemctl restart apache2<br />
</exec><br />
<br />
<!--<br />
STEP 7: Trove service<br />
--><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://vnx.dit.upm.es/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!--<br />
STEP 8: Heat service<br />
--><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
systemctl enable heat-api<br />
systemctl enable heat-api-cfn<br />
systemctl enable heat-engine<br />
systemctl restart heat-api<br />
systemctl restart heat-api-cfn<br />
systemctl restart heat-engine<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
#rm -f /var/lib/openstack-dashboard/secret_key<br />
systemctl restart apache2<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
#source /root/bin/demo-openrc.sh<br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!--<br />
STEP 9: Tacker service<br />
--><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!--<br />
STEP 10: Ceilometer service<br />
Based on https://www.server-world.info/en/note?os=Ubuntu_22.04&p=openstack_antelope4&f=8<br />
--><br />
<br />
<exec seq="step100" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
# moved to the rootfs creation script<br />
#apt-get -y install gnocchi-api gnocchi-metricd python3-gnocchiclient<br />
#apt-get -y install ceilometer-agent-central ceilometer-agent-notification<br />
</exec><br />
<br />
<filetree seq="step101" root="/etc/gnocchi/">conf/controller/gnocchi/gnocchi.conf</filetree><br />
<filetree seq="step101" root="/etc/gnocchi/">conf/controller/gnocchi/policy.json</filetree><br />
<exec seq="step101" type="verbatim"><br />
<!-- Install gnocchi --><br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --project service --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchi;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
chmod 640 /etc/gnocchi/gnocchi.conf<br />
chgrp gnocchi /etc/gnocchi/gnocchi.conf<br />
<br />
su -s /bin/bash gnocchi -c "gnocchi-upgrade"<br />
a2enmod wsgi<br />
a2ensite gnocchi-api<br />
systemctl restart gnocchi-metricd apache2<br />
systemctl enable gnocchi-metricd<br />
systemctl status gnocchi-metricd<br />
export OS_AUTH_TYPE=password<br />
gnocchi resource list<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/pipeline.yaml</filetree--><br />
<!--filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree--><br />
<exec seq="step102" type="verbatim"><br />
<br />
<!-- Install Ceilometer --><br />
source /root/bin/admin-openrc.sh<br />
# Ceilometer<br />
# Following https://tinyurl.com/22w6xgm4<br />
openstack user create --domain default --project service --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "OpenStack Telemetry Service" metering<br />
<br />
chmod 640 /etc/ceilometer/ceilometer.conf<br />
chgrp ceilometer /etc/ceilometer/ceilometer.conf<br />
su -s /bin/bash ceilometer -c "ceilometer-upgrade"<br />
systemctl restart ceilometer-agent-central ceilometer-agent-notification<br />
systemctl enable ceilometer-agent-central ceilometer-agent-notification<br />
<br />
#ceilometer-upgrade<br />
#systemctl restart ceilometer-agent-central<br />
#service restart ceilometer-agent-notification<br />
<br />
<br />
# Enable Glance service meters<br />
# https://tinyurl.com/274oe82n<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications transport_url rabbit://openstack:xxxx@controller<br />
systemctl restart glance-api<br />
openstack metric resource list<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart heat-api<br />
systemctl restart heat-api-cfn<br />
systemctl restart heat-engine<br />
<br />
# Enable Networking service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart neutron-server<br />
</exec><br />
<br />
<!-- STEP 11: SKYLINE --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step111" type="verbatim"><br />
#pip3 install skyline-apiserver<br />
#apt-get -y install npm python-is-python3 nginx<br />
#npm install -g yarn<br />
</exec><br />
<br />
<filetree seq="step112" root="/etc/systemd/system/">conf/controller/skyline/skyline-apiserver.service</filetree><br />
<exec seq="step112" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --project service --password xxxx skyline<br />
openstack role add --project service --user skyline admin<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE skyline;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
#groupadd -g 64080 skyline<br />
#useradd -u 64080 -g skyline -d /var/lib/skyline -s /sbin/nologin skyline<br />
pip3 install skyline-apiserver<br />
#mkdir -p /etc/skyline /var/lib/skyline /var/log/skyline<br />
mkdir -p /etc/skyline /var/log/skyline<br />
#chmod 750 /etc/skyline /var/lib/skyline /var/log/skyline<br />
cd /root<br />
git clone -b stable/2023.1 https://opendev.org/openstack/skyline-apiserver.git<br />
#cp ./skyline-apiserver/etc/gunicorn.py /etc/skyline/gunicorn.py<br />
#cp ./skyline-apiserver/etc/skyline.yaml.sample /etc/skyline/skyline.yaml<br />
</exec><br />
<br />
<filetree seq="step113" root="/etc/skyline/">conf/controller/skyline/gunicorn.py</filetree><br />
<filetree seq="step113" root="/etc/skyline/">conf/controller/skyline/skyline.yaml</filetree><br />
<filetree seq="step113" root="/etc/systemd/system/">conf/controller/skyline/skyline-apiserver.service</filetree><br />
<exec seq="step113" type="verbatim"><br />
cd /root/skyline-apiserver<br />
make db_sync<br />
cd ..<br />
#chown -R skyline. /etc/skyline /var/lib/skyline /var/log/skyline<br />
systemctl daemon-reload<br />
systemctl enable --now skyline-apiserver<br />
apt-get -y install npm python-is-python3 nginx<br />
rm -rf /usr/local/lib/node_modules/yarn/<br />
npm install -g yarn<br />
git clone -b stable/2023.1 https://opendev.org/openstack/skyline-console.git<br />
cd ./skyline-console<br />
make package<br />
pip3 install --force-reinstall ./dist/skyline_console-*.whl<br />
cd ..<br />
skyline-nginx-generator -o /etc/nginx/nginx.conf<br />
sudo sed -i "s/server .* fail_timeout=0;/server 0.0.0.0:28000 fail_timeout=0;/g" /etc/nginx/nginx.conf<br />
sudo systemctl restart skyline-apiserver.service<br />
sudo systemctl enable nginx.service<br />
sudo systemctl restart nginx.service<br />
</exec><br />
<br />
<!-- STEP 12: LOAD BALANCER OCTAVIA --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step121" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE octavia;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
#openstack user create --domain default --project service --password xxxx octavia<br />
openstack user create --domain default --password xxxx octavia<br />
openstack role add --project service --user octavia admin<br />
openstack service create --name octavia --description "OpenStack LBaaS" load-balancer<br />
export octavia_api=network<br />
openstack endpoint create --region RegionOne load-balancer public http://$octavia_api:9876<br />
openstack endpoint create --region RegionOne load-balancer internal http://$octavia_api:9876<br />
openstack endpoint create --region RegionOne load-balancer admin http://$octavia_api:9876<br />
<br />
source /root/bin/octavia-openrc.sh<br />
# Load Balancer (Octavia)<br />
#openstack flavor show m1.octavia >/dev/null 2>&amp;1 || openstack flavor create --id 100 --vcpus 1 --ram 1024 --disk 5 m1.octavia --private --project service<br />
openstack flavor show amphora >/dev/null 2>&amp;1 || openstack flavor create --id 200 --vcpus 1 --ram 1024 --disk 5 amphora --private<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/ubuntu-amphora-haproxy-amd64.qcow2<br />
#openstack image create "Amphora" --tag "Amphora" --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 --disk-format qcow2 --container-format bare --private --project service<br />
openstack image create --disk-format qcow2 --container-format bare --private --tag amphora --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 amphora-x64-haproxy<br />
rm /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2<br />
<br />
</exec><br />
<br />
<!-- STEP 13: TELEMETRY ALARM SERVICE --><br />
<!-- See: https://docs.openstack.org/aodh/latest/install/install-ubuntu.html --><br />
<exec seq="step130" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y aodh-api aodh-evaluator aodh-notifier aodh-listener aodh-expirer python3-aodhclient<br />
</exec><br />
<br />
<filetree seq="step131" root="/etc/aodh/">conf/controller/aodh/aodh.conf</filetree><br />
<exec seq="step131" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE aodh;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx aodh<br />
openstack role add --project service --user aodh admin<br />
openstack service create --name aodh --description "Telemetry" alarming<br />
openstack endpoint create --region RegionOne alarming public http://controller:8042<br />
openstack endpoint create --region RegionOne alarming internal http://controller:8042<br />
openstack endpoint create --region RegionOne alarming admin http://controller:8042<br />
<br />
aodh-dbsync<br />
<br />
# aodh-api does not work under wsgi; it has to be started manually<br />
rm /etc/apache2/sites-enabled/aodh-api.conf<br />
systemctl restart apache2<br />
#service aodh-api restart<br />
nohup aodh-api --port 8042 -- --config-file /etc/aodh/aodh.conf &amp;<br />
systemctl restart aodh-evaluator<br />
systemctl restart aodh-notifier<br />
systemctl restart aodh-listener<br />
<br />
</exec><br />
<br />
<exec seq="step999" type="verbatim"><br />
# Change horizon port to 8080<br />
sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf<br />
sed -i 's/VirtualHost \*:80/VirtualHost *:8080/' /etc/apache2/sites-enabled/000-default.conf<br />
systemctl restart apache2<br />
# Change Skyline to port 80<br />
sed -i 's/0.0.0.0:9999/0.0.0.0:80/' /etc/nginx/nginx.conf<br />
systemctl restart nginx<br />
systemctl restart skyline-apiserver<br />
</exec><br />
<br />
<!--<br />
LOAD IMAGES TO GLANCE<br />
--><br />
<exec seq="load-img" type="verbatim"><br />
dhclient eth9 # just in case the Internet connection is not active...<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
#openstack flavor show m1.octavia >/dev/null 2>&amp;1 || openstack flavor create --id 100 --vcpus 1 --ram 1024 --disk 5 m1.octavia --private --project service<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (focal,20.04)<br />
rm -f /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/focal-server-cloudimg-amd64-vnx.qcow2<br />
openstack image create "focal-server-cloudimg-amd64-vnx" --file /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2 --disk-format qcow2 --container-format bare --public --progress<br />
rm /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2<br />
<br />
# Ubuntu image (jammy,22.04)<br />
rm -f /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
openstack image create "jammy-server-cloudimg-amd64-vnx" --file /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2 --disk-format qcow2 --container-format bare --public --progress<br />
rm /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
<br />
# CentOS-7<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
<br />
# Load Balancer (Octavia)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/ubuntu-amphora-haproxy-amd64.qcow2<br />
#openstack image create "Amphora" --tag "Amphora" --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 --disk-format qcow2 --container-format bare --private --project service<br />
#rm /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2<br />
<br />
</exec><br />
<br />
<!--<br />
CREATE DEMO SCENARIO<br />
--><br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
admin_project_id=$(openstack project show admin -c id -f value)<br />
default_secgroup_id=$(openstack security group list -f value | grep default | grep $admin_project_id | cut -d " " -f1)<br />
openstack security group rule create --proto icmp --dst-port 0 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 80 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 22 $default_secgroup_id<br />
<br />
# Create internal network<br />
openstack network create net0<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
openstack router create r0<br />
openstack router set r0 --external-gateway ExtNet<br />
openstack router add subnet r0 subnet0<br />
<br />
# Assign floating IP address to vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image focal-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm4" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm4 > /root/keys/vm4<br />
openstack server create --flavor m1.smaller --image jammy-server-cloudimg-amd64-vnx vm4 --nic net-id=net0 --key-name vm4 --property VAR1=2 --property VAR2=3<br />
# Assign floating IP address to vm4<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm4<br />
openstack server add floating ip vm4 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
admin_project_id=$(openstack project show admin -c id -f value)<br />
default_secgroup_id=$(openstack security group list -f value | grep $admin_project_id | cut -d " " -f1)<br />
openstack security group rule create --proto icmp --dst-port 0 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 80 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 22 $default_secgroup_id<br />
<br />
# Create vlan based networks and subnetworks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
<br />
</exec><br />
<br />
<!--<br />
VERIFY<br />
--><br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:5000/v3 \<br />
--os-project-domain-name Default --os-user-domain-name Default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor list"<br />
openstack hypervisor list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ N E T W O R K N O D E<br />
~~<br />
--><br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<mem>1G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/octavia-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron_lbaas.conf</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
ovs-vsctl add-br br-provider<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
#service neutron-lbaasv2-agent restart<br />
#systemctl restart neutron-lbaasv2-agent<br />
#systemctl enable neutron-lbaasv2-agent<br />
#service openvswitch-switch restart<br />
<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable neutron-dhcp-agent<br />
systemctl enable neutron-metadata-agent<br />
systemctl enable neutron-l3-agent<br />
systemctl start neutron-openvswitch-agent<br />
systemctl start neutron-dhcp-agent<br />
systemctl start neutron-metadata-agent<br />
systemctl start neutron-l3-agent<br />
</exec><br />
<br />
<!-- STEP 12: LOAD BALANCER OCTAVIA --><br />
<!-- Official recipe in: https://github.com/openstack/octavia/blob/master/doc/source/install/install-ubuntu.rst --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step122" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#source /root/bin/admin-openrc.sh<br />
source /root/bin/octavia-openrc.sh<br />
#apt -y install octavia-api octavia-health-manager octavia-housekeeping octavia-worker python3-ovn-octavia-provider<br />
#apt -y install octavia-api octavia-health-manager octavia-housekeeping octavia-worker python3-octavia python3-octaviaclient<br />
mkdir -p /etc/octavia/certs/private<br />
sudo chmod 755 /etc/octavia -R<br />
mkdir ~/work<br />
cd ~/work<br />
git clone https://opendev.org/openstack/octavia.git<br />
cd octavia/bin<br />
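# Replace the sample passphrase placeholder in the CA script with its first argument ($1),<br />
# then source it with the real passphrase to generate the dual (server/client) CA certificates<br />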
sed -i 's/not-secure-passphrase/$1/' create_dual_intermediate_CA.sh<br />
source create_dual_intermediate_CA.sh 01234567890123456789012345678901<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca-chain.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca.key.pem /etc/octavia/certs/private<br />
#cp -p ./dual_ca/etc/octavia/certs/client_ca.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/client.cert-and-key.pem /etc/octavia/certs/private<br />
#chown -R octavia /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca-chain.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca.key.pem /etc/octavia/certs/private<br />
cp -p etc/octavia/certs/client_ca.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/client.cert-and-key.pem /etc/octavia/certs/private<br />
chown -R octavia.octavia /etc/octavia/certs<br />
</exec><br />
<br />
<filetree seq="step123" root="/etc/octavia/">conf/network/octavia/octavia.conf</filetree><br />
<filetree seq="step123" root="/etc/octavia/">conf/network/octavia/policy.yaml</filetree><br />
<exec seq="step123" type="verbatim"><br />
#chmod 640 /etc/octavia/{octavia.conf,policy.yaml}<br />
#chgrp octavia /etc/octavia/{octavia.conf,policy.yaml}<br />
#su -s /bin/bash octavia -c "octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head"<br />
#systemctl restart octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
#systemctl enable octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
<br />
#source /root/bin/admin-openrc.sh<br />
source /root/bin/octavia-openrc.sh<br />
#openstack security group create lb-mgmt-sec-group --project service<br />
#openstack security group rule create --protocol icmp --ingress lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 22:22 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 80:80 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 443:443 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 9443:9443 lb-mgmt-sec-group<br />
<br />
openstack security group create lb-mgmt-sec-grp<br />
openstack security group rule create --protocol icmp lb-mgmt-sec-grp<br />
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp<br />
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp<br />
openstack security group create lb-health-mgr-sec-grp<br />
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp<br />
<br />
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa<br />
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey<br />
<br />
mkdir -m755 -p /etc/dhcp/octavia<br />
cp ~/work/octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia<br />
</exec><br />
<br />
<br />
<br />
<exec seq="step124" type="verbatim"><br />
<br />
source /root/bin/octavia-openrc.sh<br />
<br />
OCTAVIA_MGMT_SUBNET=172.16.0.0/12<br />
OCTAVIA_MGMT_SUBNET_START=172.16.0.100<br />
OCTAVIA_MGMT_SUBNET_END=172.16.31.254<br />
OCTAVIA_MGMT_PORT_IP=172.16.0.2<br />
<br />
openstack network create lb-mgmt-net<br />
openstack subnet create --subnet-range $OCTAVIA_MGMT_SUBNET --allocation-pool \<br />
start=$OCTAVIA_MGMT_SUBNET_START,end=$OCTAVIA_MGMT_SUBNET_END \<br />
--network lb-mgmt-net lb-mgmt-subnet<br />
<br />
SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)<br />
PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"<br />
<br />
MGMT_PORT_ID=$(openstack port create --security-group \<br />
lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \<br />
--host=$(hostname) -c id -f value --network lb-mgmt-net \<br />
$PORT_FIXED_IP octavia-health-manager-listen-port)<br />
<br />
MGMT_PORT_MAC=$(openstack port show -c mac_address -f value \<br />
$MGMT_PORT_ID)<br />
<br />
#ip link add o-hm0 type veth peer name o-bhm0<br />
#ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \<br />
# set Interface o-hm0 type=internal -- \<br />
# set Interface o-hm0 external-ids:iface-status=active -- \<br />
# set Interface o-hm0 external-ids:attached-mac=fa:16:3e:51:e9:c3 -- \<br />
# set Interface o-hm0 external-ids:iface-id=6fb13c3f-469e-4a81-a504-a161c6848654 -- \<br />
# set Interface o-hm0 external-ids:skip_cleanup=true<br />
<br />
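# Plug the health-manager interface (o-hm0) into br-int as an OVS internal port bound to the Neutron port created above<br />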
ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \<br />
set Interface o-hm0 type=internal -- \<br />
set Interface o-hm0 external-ids:iface-status=active -- \<br />
set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- \<br />
set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- \<br />
set Interface o-hm0 external-ids:skip_cleanup=true<br />
<br />
#NETID=$(openstack network show lb-mgmt-net -c id -f value)<br />
#BRNAME=brq$(echo $NETID|cut -c 1-11)<br />
#brctl addif $BRNAME o-bhm0<br />
ip link set o-bhm0 up<br />
<br />
ip link set dev o-hm0 address $MGMT_PORT_MAC<br />
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT<br />
dhclient -v o-hm0 -cf /etc/dhcp/octavia<br />
<br />
<br />
SECGRPID=$( openstack security group show lb-mgmt-sec-grp -c id -f value )<br />
LBMGMTNETID=$( openstack network show lb-mgmt-net -c id -f value )<br />
FLVRID=$( openstack flavor show amphora -c id -f value )<br />
#FLVRID=$( openstack flavor show m1.octavia -c id -f value )<br />
SERVICEPROJECTID=$( openstack project show service -c id -f value )<br />
<br />
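# Point the Octavia controller worker to the amphora image, flavor, ssh key, management network and security group created above<br />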
#crudini --set /etc/octavia/octavia.conf controller_worker amp_image_tag Amphora<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_image_owner_id $SERVICEPROJECTID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_image_tag amphora<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_ssh_key_name mykey<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_secgroup_list $SECGRPID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_boot_network_list $LBMGMTNETID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_flavor_id $FLVRID<br />
crudini --set /etc/octavia/octavia.conf controller_worker network_driver allowed_address_pairs_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker compute_driver compute_nova_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker amphora_driver amphora_haproxy_rest_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker client_ca /etc/octavia/certs/client_ca.cert.pem<br />
<br />
octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head<br />
systemctl restart octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ C O M P U T E 1 N O D E<br />
~~<br />
--><br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<mem>2G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device<br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
systemctl enable nova-compute<br />
systemctl start nova-compute<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
systemctl enable openvswitch-switch<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable libvirtd.service libvirt-guests.service<br />
systemctl enable nova-compute<br />
systemctl start openvswitch-switch<br />
systemctl start neutron-openvswitch-agent<br />
systemctl restart libvirtd.service libvirt-guests.service<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
#export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute1/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf notifications notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart ceilometer-agent-compute<br />
systemctl enable ceilometer-agent-compute<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~~<br />
~~~ C O M P U T E 2 N O D E<br />
~~~<br />
--><br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<mem>2G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device<br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
systemctl enable nova-compute<br />
systemctl start nova-compute<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
systemctl enable openvswitch-switch<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable libvirtd.service libvirt-guests.service<br />
systemctl enable nova-compute<br />
systemctl start openvswitch-switch<br />
systemctl start neutron-openvswitch-agent<br />
systemctl restart libvirtd.service libvirt-guests.service<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
#export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf notifications notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart ceilometer-agent-compute<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ H O S T N O D E<br />
~~<br />
--><br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accesible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx><br />
</pre></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-antelope&diff=2664
Vnx-labo-openstack-4nodes-classic-ovs-antelope
2023-09-18T08:31:29Z
<p>David: </p>
<hr />
<div>Being edited...<br />
<br />
{{Title|VNX Openstack Antelope four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Antelope (2023.1) over Ubuntu 22.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The configuration has been developed integrating into the VNX scenario all the installation and configuration commands described in [https://docs.openstack.org/2023.1/install/ Openstack Antelope installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 20.04 or later recommended) with VNX software installed. At least 12GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-antelope_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-antelope_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-antelope_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-antelope-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (domain/user/password: default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the other way round (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
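<br />
For example, assuming vm1 got the floating IP 10.0.10.102 (check it in the Dashboard or with 'openstack server list'), from the host you could try:<br />
 ping 10.0.10.102<br />
 ssh cirros@10.0.10.102   # Cirros 0.3.4 images use user 'cirros', default password 'cubswin:)'<br />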
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
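<br />
vnx_config_nat essentially enables IP forwarding and masquerades the ExtNet addressing range out of the public interface; a rough manual equivalent (assuming eth0 and the 10.0.10.0/24 ExtNet range used in this scenario) would be:<br />
 sysctl -w net.ipv4.ip_forward=1<br />
 iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />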
<br />
Besides, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure.<br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three more systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24. You can consult the addresses from Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
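<br />
For example, from the vmA1 console (accessible through Horizon) both external machines should be reachable:<br />
 ping 10.1.2.100   # vmA2, on the same VLAN 1000<br />
 ping 10.1.3.100   # vmB2, reached through vlan-router<br />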
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes added by going to the "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not yet added to the list of Hypervisors in the "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
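<br />
This sequence makes Nova register the new hypervisors; on the controller it is roughly equivalent to running the standard cell discovery command:<br />
 su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />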
<br />
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_lab-antelope<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source<br />
software platform for cloud-computing. It is made of four LXC containers:<br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Antelope<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david.fernandez@upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution.<br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es)<br />
<br />
Copyright(C) 2023 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_lab-antelope</scenario_name><br />
<!--ssh_key>~/.ssh/id_rsa.pub</ssh_key--><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt><br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step00,step1,step2,step3,step3b,step4,step5,step54,step6</cmd-seq><br />
<cmd-seq seq="step1-8">step1-6,step8</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<cmd-seq seq="step10">step100,step101,step102</cmd-seq><br />
<cmd-seq seq="step11">step111,step112,step113</cmd-seq><br />
<cmd-seq seq="step12">step121,step122,step123,step124</cmd-seq><br />
<cmd-seq seq="step13">step130,step131</cmd-seq><br />
<!--cmd-seq seq="start-all-from-scratch">step1-8,step10,step12,step11</cmd-seq--><br />
<cmd-seq seq="start-all-from-scratch">step00,step1,step2,step3,step3b,step41,step51,step6,step8,step10,step121,step11</cmd-seq><br />
<cmd-seq seq="start-all">step01,step42,step43,step44,step52,step53,step54,step122,step123,step124,step999</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" mtu="1450"/><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<!--<br />
~~<br />
~~ C O N T R O L L E R N O D E<br />
~~<br />
--><br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<!--console id="0" display="yes"/--><br />
<br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<filetree seq="on_boot" root="/root/.ssh/">conf/controller/ssh/id_rsa</filetree><br />
<filetree seq="on_boot" root="/root/.ssh/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown -f horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Add an html redirection to openstack page from index.html<br />
echo '&lt;meta http-equiv="refresh" content="0; url=/horizon" /&gt;' > /var/www/html/index.html<br />
<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
sed -i '/^network/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa network >> /root/.ssh/known_hosts<br />
sed -i '/^compute1/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute1 >> /root/.ssh/known_hosts<br />
sed -i '/^compute2/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute2 >> /root/.ssh/known_hosts<br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<exec seq="step01" type="verbatim"><br />
sed -i '/^network/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa network >> /root/.ssh/known_hosts<br />
sed -i '/^compute1/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute1 >> /root/.ssh/known_hosts<br />
sed -i '/^compute2/d' /root/.ssh/known_hosts<br />
ssh-keyscan -t rsa compute2 >> /root/.ssh/known_hosts<br />
# Restart nova services<br />
systemctl restart nova-scheduler<br />
systemctl restart nova-api<br />
systemctl restart nova-conductor<br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
#systemctl restart memcached<br />
</exec><br />
<br />
<!--<br />
STEP 1: Basic services<br />
--><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/99-openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# mariadb<br />
systemctl enable mariadb<br />
systemctl start mariadb<br />
<br />
# rabbitmqctl<br />
systemctl enable rabbitmq-server<br />
systemctl start rabbitmq-server<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*"<br />
<br />
# memcached<br />
sed -i -e 's/-l 127.0.0.1/-l 10.0.0.11/' /etc/memcached.conf<br />
systemctl enable memcached<br />
systemctl start memcached<br />
<br />
# etcd<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
echo "Services status"<br />
echo "etcd " $( systemctl show -p SubState etcd )<br />
echo "mariadb " $( systemctl show -p SubState mariadb )<br />
echo "memcached " $( systemctl show -p SubState memcached )<br />
echo "rabbitmq-server " $( systemctl show -p SubState rabbitmq-server )<br />
</exec><br />
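<!-- Suggested sanity check (not part of the original recipe): before moving on to step2, confirm that<br />
     the basic services answer:<br />
       mysqladmin ping<br />
       rabbitmqctl list_users<br />
       systemctl is-active mariadb rabbitmq-server memcached etcd<br />
--><br />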
<br />
<!--<br />
STEP 2: Identity service<br />
--><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/octavia-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
systemctl restart apache2<br />
#rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
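<!-- Suggested check (not part of the original recipe): with the OS_* variables exported above, keystone<br />
     can be verified by requesting a token and listing the projects and users just created:<br />
       openstack token issue<br />
       openstack project list<br />
       openstack user list<br />
--><br />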
<br />
<!--<br />
STEP 3: Image service (Glance)<br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<!--filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree--><br />
<exec seq="step3" type="verbatim"><br />
systemctl enable glance-api<br />
systemctl start glance-api<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
systemctl restart glance-api<br />
</exec><br />
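<!-- Suggested check (not part of the original recipe): the Image service should now answer, although the<br />
     image list stays empty until the 'load-img' command further below is executed:<br />
       source /root/bin/admin-openrc.sh<br />
       openstack image list<br />
       openstack endpoint list<br />
--><br />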
<br />
<!--<br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
systemctl restart apache2<br />
</exec><br />
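<!-- Suggested check (not part of the original recipe): the placement package ships a status tool that can<br />
     be run after the database sync to confirm the API is reachable:<br />
       placement-status upgrade check<br />
--><br />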
<br />
<!--<br />
STEP 4: Compute service (Nova)<br />
--><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
# Enable and start services<br />
systemctl enable nova-api<br />
systemctl enable nova-scheduler<br />
systemctl enable nova-conductor<br />
systemctl enable nova-novncproxy<br />
systemctl start nova-api<br />
systemctl start nova-scheduler<br />
systemctl start nova-conductor<br />
systemctl start nova-novncproxy<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
</exec><br />
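<!-- Suggested check (not part of the original recipe): once the controller services are restarted, they<br />
     should show up as 'up' and the upgrade check should not report blocking issues:<br />
       source /root/bin/admin-openrc.sh<br />
       openstack compute service list<br />
       nova-status upgrade check<br />
--><br />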
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
HOST=compute1<br />
i=5; while ! $( openstack host list | grep $HOST > /dev/null ); do echo "$i - waiting for $HOST to be registered..."; i=$(( i - 1 )); if ((i == 0)); then echo "ERROR: timeout waiting for $HOST"; break; else sleep 5; fi done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
HOST=compute2<br />
i=5; while ! $( openstack host list | grep $HOST > /dev/null ); do echo "$i - waiting for $HOST to be registered..."; i=$(( i - 1 )); if ((i == 0)); then echo "ERROR: timeout waiting for $HOST"; break; else sleep 5; fi done<br />
</exec><br />
<exec seq="step44,discover-hosts" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
openstack hypervisor list<br />
</exec><br />
<br />
<!--<br />
STEP 5: Network service (Neutron)<br />
--><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/metadata_agent.ini</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/lbaas_agent.ini</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
systemctl enable neutron-server<br />
systemctl restart neutron-server<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
# Installation based on recipe:<br />
# - Configure Neutron LBaaS (Load-Balancer-as-a-Service) V2 in www.server-world.info.<br />
#neutron-db-manage --subproject neutron-lbaas upgrade head<br />
#su -s /bin/bash neutron -c "neutron-db-manage --subproject neutron-lbaas --config-file /etc/neutron/neutron.conf upgrade head"<br />
<br />
# FwaaS v2<br />
# https://tinyurl.com/2qk7729b<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# Octavia Dashboard panels<br />
# Based on https://opendev.org/openstack/octavia-dashboard<br />
git clone -b stable/2023.1 https://opendev.org/openstack/octavia-dashboard.git<br />
cd octavia-dashboard/<br />
python setup.py sdist<br />
cp -a octavia_dashboard/enabled/_1482_project_load_balancer_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
pip3 install octavia-dashboard<br />
chmod +x manage.py<br />
./manage.py collectstatic --noinput<br />
./manage.py compress<br />
systemctl restart apache2<br />
<br />
systemctl restart nova-api<br />
systemctl restart neutron-server<br />
</exec><br />
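<!-- Suggested check (not part of the original recipe): after the network node has executed step52, the<br />
     agents registered against neutron-server can be listed from here:<br />
       source /root/bin/admin-openrc.sh<br />
       openstack network agent list<br />
--><br />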
<br />
<exec seq="step54" type="verbatim"><br />
# Create external network<br />
source /root/bin/admin-openrc.sh<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
</exec><br />
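<!-- Suggested check (not part of the original recipe): list the newly created external network and subnet:<br />
       openstack network list<br />
       openstack subnet list<br />
--><br />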
<br />
<!--<br />
STEP 6: Dashboard service<br />
--><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
# FWaaS Dashboard<br />
# https://docs.openstack.org/neutron-fwaas-dashboard/latest/doc-neutron-fwaas-dashboard.pdf<br />
git clone https://opendev.org/openstack/neutron-fwaas-dashboard<br />
cd neutron-fwaas-dashboard<br />
sudo pip install .<br />
cp neutron_fwaas_dashboard/enabled/_701* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
./manage.py compilemessages<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py compress --force<br />
<br />
systemctl enable apache2<br />
systemctl restart apache2<br />
</exec><br />
<br />
<!--<br />
STEP 7: Trove service<br />
--><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://vnx.dit.upm.es/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!--<br />
STEP 8: Heat service<br />
--><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
systemctl enable heat-api<br />
systemctl enable heat-api-cfn<br />
systemctl enable heat-engine<br />
systemctl restart heat-api<br />
systemctl restart heat-api-cfn<br />
systemctl restart heat-engine<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
#rm -f /var/lib/openstack-dashboard/secret_key<br />
systemctl restart apache2<br />
<br />
</exec><br />
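<!-- Suggested check (not part of the original recipe): the heat engines should register once restarted:<br />
       source /root/bin/admin-openrc.sh<br />
       openstack orchestration service list<br />
--><br />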
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
#source /root/bin/demo-openrc.sh<br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
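<!-- Suggested check (not part of the original recipe): follow the stack creation and the server it spawns:<br />
       openstack stack list<br />
       openstack stack show stack<br />
       openstack server list<br />
--><br />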
<br />
<br />
<!--<br />
STEP 9: Tacker service<br />
--><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the following error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!--<br />
STEP 10: Ceilometer service<br />
Based on https://www.server-world.info/en/note?os=Ubuntu_22.04&p=openstack_antelope4&f=8<br />
--><br />
<br />
<exec seq="step100" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
# moved to the rootfs creation script<br />
#apt-get -y install gnocchi-api gnocchi-metricd python3-gnocchiclient<br />
#apt-get -y install ceilometer-agent-central ceilometer-agent-notification<br />
</exec><br />
<br />
<filetree seq="step101" root="/etc/gnocchi/">conf/controller/gnocchi/gnocchi.conf</filetree><br />
<filetree seq="step101" root="/etc/gnocchi/">conf/controller/gnocchi/policy.json</filetree><br />
<exec seq="step101" type="verbatim"><br />
# Install gnocchi<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --project service --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchi;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
chmod 640 /etc/gnocchi/gnocchi.conf<br />
chgrp gnocchi /etc/gnocchi/gnocchi.conf<br />
<br />
su -s /bin/bash gnocchi -c "gnocchi-upgrade"<br />
a2enmod wsgi<br />
a2ensite gnocchi-api<br />
systemctl restart gnocchi-metricd apache2<br />
systemctl enable gnocchi-metricd<br />
systemctl status gnocchi-metricd<br />
export OS_AUTH_TYPE=password<br />
gnocchi resource list<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/pipeline.yaml</filetree--><br />
<!--filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree--><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Install Ceilometer<br />
source /root/bin/admin-openrc.sh<br />
# Ceilometer<br />
# Following https://tinyurl.com/22w6xgm4<br />
openstack user create --domain default --project service --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "OpenStack Telemetry Service" metering<br />
<br />
chmod 640 /etc/ceilometer/ceilometer.conf<br />
chgrp ceilometer /etc/ceilometer/ceilometer.conf<br />
su -s /bin/bash ceilometer -c "ceilometer-upgrade"<br />
systemctl restart ceilometer-agent-central ceilometer-agent-notification<br />
systemctl enable ceilometer-agent-central ceilometer-agent-notification<br />
<br />
#ceilometer-upgrade<br />
#systemctl restart ceilometer-agent-central<br />
#service restart ceilometer-agent-notification<br />
<br />
<br />
# Enable Glance service meters<br />
# https://tinyurl.com/274oe82n<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications transport_url rabbit://openstack:xxxx@controller<br />
systemctl restart glance-api<br />
openstack metric resource list<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart heat-api<br />
systemctl restart heat-api-cfn<br />
systemctl restart heat-engine<br />
<br />
# Enable Networking service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart neutron-server<br />
</exec><br />
<br />
<!-- STEP 11: SKYLINE --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step111" type="verbatim"><br />
#pip3 install skyline-apiserver<br />
#apt-get -y install npm python-is-python3 nginx<br />
#npm install -g yarn<br />
</exec><br />
<br />
<filetree seq="step112" root="/etc/systemd/system/">conf/controller/skyline/skyline-apiserver.service</filetree><br />
<exec seq="step112" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --project service --password xxxx skyline<br />
openstack role add --project service --user skyline admin<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE skyline;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
#groupadd -g 64080 skyline<br />
#useradd -u 64080 -g skyline -d /var/lib/skyline -s /sbin/nologin skyline<br />
pip3 install skyline-apiserver<br />
#mkdir -p /etc/skyline /var/lib/skyline /var/log/skyline<br />
mkdir -p /etc/skyline /var/log/skyline<br />
#chmod 750 /etc/skyline /var/lib/skyline /var/log/skyline<br />
cd /root<br />
git clone -b stable/2023.1 https://opendev.org/openstack/skyline-apiserver.git<br />
#cp ./skyline-apiserver/etc/gunicorn.py /etc/skyline/gunicorn.py<br />
#cp ./skyline-apiserver/etc/skyline.yaml.sample /etc/skyline/skyline.yaml<br />
</exec><br />
<br />
<filetree seq="step113" root="/etc/skyline/">conf/controller/skyline/gunicorn.py</filetree><br />
<filetree seq="step113" root="/etc/skyline/">conf/controller/skyline/skyline.yaml</filetree><br />
<filetree seq="step113" root="/etc/systemd/system/">conf/controller/skyline/skyline-apiserver.service</filetree><br />
<exec seq="step113" type="verbatim"><br />
cd /root/skyline-apiserver<br />
make db_sync<br />
cd ..<br />
#chown -R skyline. /etc/skyline /var/lib/skyline /var/log/skyline<br />
systemctl daemon-reload<br />
systemctl enable --now skyline-apiserver<br />
apt-get -y install npm python-is-python3 nginx<br />
rm -rf /usr/local/lib/node_modules/yarn/<br />
npm install -g yarn<br />
git clone -b stable/2023.1 https://opendev.org/openstack/skyline-console.git<br />
cd ./skyline-console<br />
make package<br />
pip3 install --force-reinstall ./dist/skyline_console-*.whl<br />
cd ..<br />
skyline-nginx-generator -o /etc/nginx/nginx.conf<br />
sudo sed -i "s/server .* fail_timeout=0;/server 0.0.0.0:28000 fail_timeout=0;/g" /etc/nginx/nginx.conf<br />
sudo systemctl restart skyline-apiserver.service<br />
sudo systemctl enable nginx.service<br />
sudo systemctl restart nginx.service<br />
</exec><br />
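<!-- Suggested check (not part of the original recipe): the generated nginx configuration is expected to<br />
     listen on port 9999 (it is moved to port 80 later in step999), so a quick probe would be:<br />
       systemctl is-active skyline-apiserver nginx<br />
       curl -sI http://controller:9999 | head -1<br />
--><br />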
<br />
<!-- STEP 12: LOAD BALANCER OCTAVIA --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step121" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE octavia;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
#openstack user create --domain default --project service --password xxxx octavia<br />
openstack user create --domain default --password xxxx octavia<br />
openstack role add --project service --user octavia admin<br />
openstack service create --name octavia --description "OpenStack LBaaS" load-balancer<br />
export octavia_api=network<br />
openstack endpoint create --region RegionOne load-balancer public http://$octavia_api:9876<br />
openstack endpoint create --region RegionOne load-balancer internal http://$octavia_api:9876<br />
openstack endpoint create --region RegionOne load-balancer admin http://$octavia_api:9876<br />
<br />
source /root/bin/octavia-openrc.sh<br />
# Load Balancer (Octavia)<br />
#openstack flavor show m1.octavia >/dev/null 2>&amp;1 || openstack flavor create --id 100 --vcpus 1 --ram 1024 --disk 5 m1.octavia --private --project service<br />
openstack flavor show amphora >/dev/null 2>&amp;1 || openstack flavor create --id 200 --vcpus 1 --ram 1024 --disk 5 amphora --private<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/ubuntu-amphora-haproxy-amd64.qcow2<br />
#openstack image create "Amphora" --tag "Amphora" --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 --disk-format qcow2 --container-format bare --private --project service<br />
openstack image create --disk-format qcow2 --container-format bare --private --tag amphora --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 amphora-x64-haproxy<br />
rm /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2<br />
<br />
</exec><br />
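<!-- Suggested check (not part of the original recipe): confirm that the amphora flavor and image were<br />
     registered under the octavia credentials sourced above:<br />
       openstack flavor list<br />
       openstack image list<br />
--><br />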
<br />
<!-- STEP 13: TELEMETRY ALARM SERVICE --><br />
<!-- See: https://docs.openstack.org/aodh/latest/install/install-ubuntu.html --><br />
<exec seq="step130" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y aodh-api aodh-evaluator aodh-notifier aodh-listener aodh-expirer python3-aodhclient<br />
</exec><br />
<br />
<filetree seq="step131" root="/etc/aodh/">conf/controller/aodh/aodh.conf</filetree><br />
<exec seq="step131" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE aodh;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx aodh<br />
openstack role add --project service --user aodh admin<br />
openstack service create --name aodh --description "Telemetry" alarming<br />
openstack endpoint create --region RegionOne alarming public http://controller:8042<br />
openstack endpoint create --region RegionOne alarming internal http://controller:8042<br />
openstack endpoint create --region RegionOne alarming admin http://controller:8042<br />
<br />
aodh-dbsync<br />
<br />
# aodh-api does not work under wsgi; it has to be started manually<br />
rm /etc/apache2/sites-enabled/aodh-api.conf<br />
systemctl restart apache2<br />
#service aodh-api restart<br />
nohup aodh-api --port 8042 -- --config-file /etc/aodh/aodh.conf &amp;<br />
systemctl restart aodh-evaluator<br />
systemctl restart aodh-notifier<br />
systemctl restart aodh-listener<br />
<br />
</exec><br />
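<!-- Suggested check (not part of the original recipe): with aodh-api started manually, the alarm API should<br />
     answer (the python3-aodhclient package installed in step130 provides the CLI plugin):<br />
       source /root/bin/admin-openrc.sh<br />
       openstack alarm list<br />
       systemctl is-active aodh-evaluator aodh-notifier aodh-listener<br />
--><br />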
<br />
<exec seq="step999" type="verbatim"><br />
# Change horizon port to 8080<br />
sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf<br />
sed -i 's/VirtualHost \*:80/VirtualHost *:8080/' /etc/apache2/sites-enabled/000-default.conf<br />
systemctl restart apache2<br />
# Change Skyline to port 80<br />
sed -i 's/0.0.0.0:9999/0.0.0.0:80/' /etc/nginx/nginx.conf<br />
systemctl restart nginx<br />
systemctl restart skyline-apiserver<br />
</exec><br />
<br />
<!--<br />
LOAD IMAGES TO GLANCE<br />
--><br />
<exec seq="load-img" type="verbatim"><br />
dhclient eth9 # just in case the Internet connection is not active...<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
#openstack flavor show m1.octavia >/dev/null 2>&amp;1 || openstack flavor create --id 100 --vcpus 1 --ram 1024 --disk 5 m1.octavia --private --project service<br />
<br />
# CentOS image<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (focal,20.04)<br />
rm -f /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/focal-server-cloudimg-amd64-vnx.qcow2<br />
openstack image create "focal-server-cloudimg-amd64-vnx" --file /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2 --disk-format qcow2 --container-format bare --public --progress<br />
rm /tmp/images/focal-server-cloudimg-amd64-vnx.qcow2<br />
<br />
# Ubuntu image (jammy,22.04)<br />
rm -f /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
openstack image create "jammy-server-cloudimg-amd64-vnx" --file /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2 --disk-format qcow2 --container-format bare --public --progress<br />
rm /tmp/images/jammy-server-cloudimg-amd64-vnx.qcow2<br />
<br />
# CentOS-7<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
<br />
# Load Balancer (Octavia)<br />
#wget -P /tmp/images http://vnx.dit.upm.es/vnx/filesystems/ostack-images/ubuntu-amphora-haproxy-amd64.qcow2<br />
#openstack image create "Amphora" --tag "Amphora" --file /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2 --disk-format qcow2 --container-format bare --private --project service<br />
#rm /tmp/images/ubuntu-amphora-haproxy-amd64.qcow2<br />
<br />
</exec><br />
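<!-- Suggested check (not part of the original recipe): list the flavors and images just loaded into Glance:<br />
       source /root/bin/admin-openrc.sh<br />
       openstack flavor list<br />
       openstack image list<br />
--><br />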
<br />
<!--<br />
CREATE DEMO SCENARIO<br />
--><br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
admin_project_id=$(openstack project show admin -c id -f value)<br />
default_secgroup_id=$(openstack security group list -f value | grep default | grep $admin_project_id | cut -d " " -f1)<br />
openstack security group rule create --proto icmp --dst-port 0 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 80 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 22 $default_secgroup_id<br />
<br />
# Create internal network<br />
openstack network create net0<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
openstack router create r0<br />
openstack router set r0 --external-gateway ExtNet<br />
openstack router add subnet r0 subnet0<br />
<br />
# Assign floating IP address to vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
</exec><br />
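<!-- Suggested check (not part of the original recipe): verify that vm1 becomes ACTIVE, that it got a<br />
     floating IP and that its console is reachable:<br />
       openstack server list<br />
       openstack floating ip list<br />
       openstack console url show vm1<br />
--><br />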
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image focal-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm4" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm4 > /root/keys/vm4<br />
openstack server create --flavor m1.smaller --image jammy-server-cloudimg-amd64-vnx vm4 --nic net-id=net0 --key-name vm4 --property VAR1=2 --property VAR2=3<br />
# Assign floating IP address to vm4<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm4<br />
openstack server add floating ip vm4 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
admin_project_id=$(openstack project show admin -c id -f value)<br />
default_secgroup_id=$(openstack security group list -f value | grep $admin_project_id | cut -d " " -f1)<br />
openstack security group rule create --proto icmp --dst-port 0 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 80 $default_secgroup_id<br />
openstack security group rule create --proto tcp --dst-port 22 $default_secgroup_id<br />
<br />
# Create vlan based networks and subnetworks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
<br />
</exec><br />
<br />
<!--<br />
VERIFY<br />
--><br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:5000/v3 \<br />
--os-project-domain-name Default --os-user-domain-name Default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor list"<br />
openstack hypervisor list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ N E T W O R K N O D E<br />
~~<br />
--><br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<mem>1G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/bin/">conf/controller/keystone/octavia-openrc.sh</filetree><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet, TunnNet and VlanNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron_lbaas.conf</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
ovs-vsctl add-br br-provider<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
#service neutron-lbaasv2-agent restart<br />
#systemctl restart neutron-lbaasv2-agent<br />
#systemctl enable neutron-lbaasv2-agent<br />
#service openvswitch-switch restart<br />
<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable neutron-dhcp-agent<br />
systemctl enable neutron-metadata-agent<br />
systemctl enable neutron-l3-agent<br />
systemctl start neutron-openvswitch-agent<br />
systemctl start neutron-dhcp-agent<br />
systemctl start neutron-metadata-agent<br />
systemctl start neutron-l3-agent<br />
</exec><br />
<br />
<!-- STEP 12: LOAD BALANCER OCTAVIA --><br />
<!-- Official recipe in: https://github.com/openstack/octavia/blob/master/doc/source/install/install-ubuntu.rst --><br />
<!-- Adapted from https://tinyurl.com/245v6q73 --><br />
<exec seq="step122" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#source /root/bin/admin-openrc.sh<br />
source /root/bin/octavia-openrc.sh<br />
#apt -y install octavia-api octavia-health-manager octavia-housekeeping octavia-worker python3-ovn-octavia-provider<br />
#apt -y install octavia-api octavia-health-manager octavia-housekeeping octavia-worker python3-octavia python3-octaviaclient<br />
mkdir -p /etc/octavia/certs/private<br />
sudo chmod 755 /etc/octavia -R<br />
mkdir ~/work<br />
cd ~/work<br />
git clone https://opendev.org/openstack/octavia.git<br />
cd octavia/bin<br />
sed -i 's/not-secure-passphrase/$1/' create_dual_intermediate_CA.sh<br />
source create_dual_intermediate_CA.sh 01234567890123456789012345678901<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca-chain.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/server_ca.key.pem /etc/octavia/certs/private<br />
#cp -p ./dual_ca/etc/octavia/certs/client_ca.cert.pem /etc/octavia/certs<br />
#cp -p ./dual_ca/etc/octavia/certs/client.cert-and-key.pem /etc/octavia/certs/private<br />
#chown -R octavia /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca-chain.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/server_ca.key.pem /etc/octavia/certs/private<br />
cp -p etc/octavia/certs/client_ca.cert.pem /etc/octavia/certs<br />
cp -p etc/octavia/certs/client.cert-and-key.pem /etc/octavia/certs/private<br />
chown -R octavia.octavia /etc/octavia/certs<br />
</exec><br />
<br />
<filetree seq="step123" root="/etc/octavia/">conf/network/octavia/octavia.conf</filetree><br />
<filetree seq="step123" root="/etc/octavia/">conf/network/octavia/policy.yaml</filetree><br />
<exec seq="step123" type="verbatim"><br />
#chmod 640 /etc/octavia/{octavia.conf,policy.yaml}<br />
#chgrp octavia /etc/octavia/{octavia.conf,policy.yaml}<br />
#su -s /bin/bash octavia -c "octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head"<br />
#systemctl restart octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
#systemctl enable octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
<br />
#source /root/bin/admin-openrc.sh<br />
source /root/bin/octavia-openrc.sh<br />
#openstack security group create lb-mgmt-sec-group --project service<br />
#openstack security group rule create --protocol icmp --ingress lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 22:22 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 80:80 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 443:443 lb-mgmt-sec-group<br />
#openstack security group rule create --protocol tcp --dst-port 9443:9443 lb-mgmt-sec-group<br />
<br />
openstack security group create lb-mgmt-sec-grp<br />
openstack security group rule create --protocol icmp lb-mgmt-sec-grp<br />
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp<br />
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp<br />
openstack security group create lb-health-mgr-sec-grp<br />
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp<br />
<br />
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa<br />
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey<br />
<br />
mkdir -m755 -p /etc/dhcp/octavia<br />
cp ~/work/octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia<br />
</exec><br />
<br />
<br />
<br />
<exec seq="step124" type="verbatim"><br />
<br />
source /root/bin/octavia-openrc.sh<br />
<br />
OCTAVIA_MGMT_SUBNET=172.16.0.0/12<br />
OCTAVIA_MGMT_SUBNET_START=172.16.0.100<br />
OCTAVIA_MGMT_SUBNET_END=172.16.31.254<br />
OCTAVIA_MGMT_PORT_IP=172.16.0.2<br />
<br />
openstack network create lb-mgmt-net<br />
openstack subnet create --subnet-range $OCTAVIA_MGMT_SUBNET --allocation-pool \<br />
start=$OCTAVIA_MGMT_SUBNET_START,end=$OCTAVIA_MGMT_SUBNET_END \<br />
--network lb-mgmt-net lb-mgmt-subnet<br />
<br />
SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)<br />
PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"<br />
<br />
MGMT_PORT_ID=$(openstack port create --security-group \<br />
lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \<br />
--host=$(hostname) -c id -f value --network lb-mgmt-net \<br />
$PORT_FIXED_IP octavia-health-manager-listen-port)<br />
<br />
MGMT_PORT_MAC=$(openstack port show -c mac_address -f value \<br />
$MGMT_PORT_ID)<br />
<br />
#ip link add o-hm0 type veth peer name o-bhm0<br />
#ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \<br />
# set Interface o-hm0 type=internal -- \<br />
# set Interface o-hm0 external-ids:iface-status=active -- \<br />
# set Interface o-hm0 external-ids:attached-mac=fa:16:3e:51:e9:c3 -- \<br />
# set Interface o-hm0 external-ids:iface-id=6fb13c3f-469e-4a81-a504-a161c6848654 -- \<br />
# set Interface o-hm0 external-ids:skip_cleanup=true<br />
<br />
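# Plug the health-manager interface o-hm0 into br-int as an OVS internal port,<br />
# reusing the MAC address and ID of the Neutron port created above<br />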
ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \<br />
set Interface o-hm0 type=internal -- \<br />
set Interface o-hm0 external-ids:iface-status=active -- \<br />
set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- \<br />
set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- \<br />
set Interface o-hm0 external-ids:skip_cleanup=true<br />
<br />
#NETID=$(openstack network show lb-mgmt-net -c id -f value)<br />
#BRNAME=brq$(echo $NETID|cut -c 1-11)<br />
#brctl addif $BRNAME o-bhm0<br />
ip link set o-hm0 up<br />
<br />
ip link set dev o-hm0 address $MGMT_PORT_MAC<br />
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT<br />
dhclient -v o-hm0 -cf /etc/dhcp/octavia<br />
<br />
<br />
SECGRPID=$( openstack security group show lb-mgmt-sec-grp -c id -f value )<br />
LBMGMTNETID=$( openstack network show lb-mgmt-net -c id -f value )<br />
FLVRID=$( openstack flavor show amphora -c id -f value )<br />
#FLVRID=$( openstack flavor show m1.octavia -c id -f value )<br />
SERVICEPROJECTID=$( openstack project show service -c id -f value )<br />
<br />
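# Register in octavia.conf the amphora image tag, flavor, security group, management network and keypair created above<br />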
#crudini --set /etc/octavia/octavia.conf controller_worker amp_image_tag Amphora<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_image_owner_id $SERVICEPROJECTID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_image_tag amphora<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_ssh_key_name mykey<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_secgroup_list $SECGRPID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_boot_network_list $LBMGMTNETID<br />
crudini --set /etc/octavia/octavia.conf controller_worker amp_flavor_id $FLVRID<br />
crudini --set /etc/octavia/octavia.conf controller_worker network_driver allowed_address_pairs_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker compute_driver compute_nova_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker amphora_driver amphora_haproxy_rest_driver<br />
crudini --set /etc/octavia/octavia.conf controller_worker client_ca /etc/octavia/certs/client_ca.cert.pem<br />
<br />
octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head<br />
systemctl restart octavia-api octavia-health-manager octavia-housekeeping octavia-worker<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ C O M P U T E 1 N O D E<br />
~~<br />
--><br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<mem>2G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device<br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
systemctl enable nova-compute<br />
systemctl start nova-compute<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
systemctl enable openvswitch-switch<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable libvirtd.service libvirt-guests.service<br />
systemctl enable nova-compute<br />
systemctl start openvswitch-switch<br />
systemctl start neutron-openvswitch-agent<br />
systemctl restart libvirtd.service libvirt-guests.service<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
#export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute1/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf notifications notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart ceilometer-agent-compute<br />
systemctl enable ceilometer-agent-compute<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~~<br />
~~~ C O M P U T E 2 N O D E<br />
~~~<br />
--><br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<mem>2G</mem><br />
<shareddir root="/root/shared">shared</shareddir><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<filetree seq="on_boot" root="/tmp/">conf/controller/ssh/id_rsa.pub</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device<br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
mkdir /root/.ssh<br />
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys<br />
dhclient eth9 # just in case the Internet connection is not active...<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<exec seq="step00,step01" type="verbatim"><br />
dhclient eth9<br />
ping -c 3 www.dit.upm.es<br />
</exec><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
systemctl enable nova-compute<br />
systemctl start nova-compute<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
systemctl enable openvswitch-switch<br />
systemctl enable neutron-openvswitch-agent<br />
systemctl enable libvirtd.service libvirt-guests.service<br />
systemctl enable nova-compute<br />
systemctl start openvswitch-switch<br />
systemctl start neutron-openvswitch-agent<br />
systemctl restart libvirtd.service libvirt-guests.service<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
#export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf notifications notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
systemctl restart ceilometer-agent-compute<br />
systemctl restart nova-compute<br />
</exec><br />
<br />
</vm><br />
<br />
<!--<br />
~~<br />
~~ H O S T N O D E<br />
~~<br />
--><br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait until ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx><br />
</pre></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-antelope&diff=2663
Vnx-labo-openstack-4nodes-classic-ovs-antelope
2023-09-18T08:27:52Z
<p>David: </p>
<hr />
<div>Being edited...<br />
<br />
{{Title|VNX Openstack Antelope four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
The Openstack version used is Antelope (2023.1) over Ubuntu 22.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of the Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The configuration has been developed integrating into the VNX scenario all the installation and configuration commands described in [https://docs.openstack.org/2023.1/install/ Openstack Antelope installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller's horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
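<br />
For example, assuming vm1 was assigned the floating IP 10.0.10.102 and the Cirros image uses the default cirros user, you could check it from the host with:<br />
<br />
 ping -c 3 10.0.10.102<br />
 ssh cirros@10.0.10.102<br />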
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the OpenStack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to connect with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the created scenario in the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24 and 10.1.3.0/24 respectively. You can consult the addresses from Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
<br />
Note that pings from the external virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
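<br />
If you want to allow that traffic, a possible sketch (assuming vmA1 and vmB1 were created under the project's default security group) is to add ICMP and ssh rules to that group:<br />
<br />
 openstack security group rule create --protocol icmp default<br />
 openstack security group rule create --protocol tcp --dst-port 22 default<br />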
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes added under the "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not yet added to the list of hypervisors under the "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
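<br />
You can verify that the new nodes now appear in the hypervisor list, for example from the controller:<br />
<br />
 source bin/admin-openrc.sh<br />
 openstack hypervisor list<br />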
<br />
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
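# Initialize the Nova API database and register cell0 and cell1 before syncing the main database<br />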
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accesible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-antelope&diff=2662
Vnx-labo-openstack-4nodes-classic-ovs-antelope
2023-09-18T08:26:16Z
<p>David: </p>
<hr />
<div>Being edited...<br />
<br />
{{Title|VNX Openstack Antelope four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Antelope (2023.1) over Ubuntu 22.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The configuration has been developed integrating into the VNX scenario all the installation and configuration commands described in [https://docs.openstack.org/antelope/install/ Openstack Antelope installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory is needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machines images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller Horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the opposite (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
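<br />
For example, assuming vm1 got the floating IP 10.0.10.102 (check the actual value in the Dashboard), you can test it from the host with:<br />
 ping 10.0.10.102<br />
 ssh cirros@10.0.10.102<br />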
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1, you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller through ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose, you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (see the example after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24 and 10.1.3.0/24 respectively. You can consult the addresses from Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
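<br />
For example, from vmA2 you can check the connectivity with vmB2 through the vlan-router with:<br />
 ping 10.1.3.100<br />
 traceroute 10.1.3.100<br />
And from the consoles of the VMs inside Openstack (accessible through Horizon) you can ping the external machines, for example from vmA1:<br />
 ping 10.1.2.100<br />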
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes added under the "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not yet added to the list of Hypervisors under the "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
<br />
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (a new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
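# Create the keystone database and grant privileges to the keystone user<br />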
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the following error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
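<br />
# Optional check: list VNFDs and VNFs to inspect their status<br />
tacker vnfd-list<br />
tacker vnf-list<br />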
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
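<br />
# Optional check (assumes the admin credentials sourced above are still valid): verify gnocchi answers API requests<br />
gnocchi resource list<br />
gnocchi metric list<br />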
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
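<br />
# Optional check: list the images registered in Glance<br />
openstack image list<br />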
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
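<br />
# Optional check: list servers and floating IPs to verify the demo scenario<br />
openstack server list<br />
openstack floating ip list<br />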
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
ovs-vsctl add-br br-provider<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
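<br />
# Optional check: show the OVS configuration to verify that the bridges and ports were added<br />
ovs-vsctl show<br />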
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait until ssh is accessible in all VMs<br />
while ! nc -z controller 22; do sleep 1; done<br />
while ! nc -z network 22; do sleep 1; done<br />
while ! nc -z compute1 22; do sleep 1; done<br />
while ! nc -z compute2 22; do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-antelope&diff=2661
Vnx-labo-openstack-4nodes-classic-ovs-antelope
2023-09-18T08:21:40Z
<p>David: Created page with "OpenStack Antelope"</p>
<hr />
<div>OpenStack Antelope</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsopenbsd&diff=2660
Vnx-rootfsopenbsd
2023-02-26T22:45:13Z
<p>David: /* OpenBSD tips */</p>
<hr />
<div>{{Title|How to create a KVM OpenBSD root filesystem for VNX}}<br />
<br />
Follow this procedure to create a KVM OpenBSD based root filesystem for VNX. The procedure has been tested with OpenBSD 7.2. <br />
<br />
== Basic installation ==<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_openbsd64-7.2.qcow2 20G<br />
<li>Get OpenBSD installation CD. For example:</li><br />
wget http://ftp.eu.openbsd.org/pub/OpenBSD/7.2/amd64/install72.iso<br />
mv install72.iso /almacen/iso/openbsd-install72.iso<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_openbsd64-7.2.qcow2 --install-media /almacen/iso/openbsd-install72.iso --mem 2G --arch=x86_64<br />
<li>Follow OpenBSD installation menus to install a basic system:</li><br />
<ul><br />
<li>When asked about the network interface, answer "done" to not configure the network now.</li><br />
<li>Answer 'yes' to the question "Change the default console to com0" to enable serial console.</li><br />
<li>Add a user named "vnx".</li><br />
<li>Use the whole "wd0" or "sd0" disk and "Auto layout".</li><br />
<li>Choose cd0 for the "location of sets". Choose the default "sets".</li><br />
</ul><br />
<li>After ending installation, but before shutting down the virtual machine, you have to disable mpbios, as follows:</li><br />
chroot /mnt<br />
config -ef /bsd<br />
disable mpbios<br />
quit<br />
<li>Finally, halt the system:</li><br />
halt -p<br />
</ul><br />
<br />
The OS installer will offer to reboot, but do not do that. Instead, close the VM console window and then, from the host OS, destroy the virtual machine:<br />
# virsh list<br />
Id Nombre Estado<br />
----------------------------------------------------<br />
33 vnx_rootfs_kvm_openbsd64-7.2.qcow2-7440 ejecutando<br />
<br />
# virsh destroy 33<br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Start the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64-7.2.qcow2 --update-aced --mem 2G --arch x86_64 --vcpu 2<br />
Note: ignore the errors "timeout waiting for response on VM socket".<br />
<li>Access the system through the text console to easy the copy-paste of commands:</li><br />
# virsh list<br />
Id Name State<br />
----------------------------------------------------<br />
31 vnx_rootfs_kvm_openbsd64-7.2.qcow2-912 running<br />
<br />
# virsh console 31<br />
<li>In case you do not have access to the serial console, you can configure it manually by editing /etc/ttys file and changing the line:</li><br />
tty00 "/usr/libexec/getty std.9600" dialup off secure<br />
to:<br />
tty00 "/usr/libexec/getty std.9600" vt100 on secure<br />
Reboot the system after modifying the ttys file.<br />
<li>Loogin as root in the console and configure the network with DHCP:</li><br />
dhclient re0<br />
<li>Configure the environment variable with network repository:</li><br />
export PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/<br />
<li>Install bash and change package repository (change ftp.es.freebsd.org to your nearest mirror):</li><br />
pkg_add -r bash <br />
usermod -s /usr/local/bin/bash root<br />
usermod -s /usr/local/bin/bash vnx<br />
echo "export PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/" > ~/.bash_profile<br />
Note: in case pkg_add does not progress, try using another openbsd mirror in PKG_PATH variable.<br />
<li>Install XML::LibXML and NetAddr-IP perl libraries:</li><br />
pkg_add -r p5-XML-LibXML p5-NetAddr-IP <br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount_msdos /dev/wd1i /mnt # if virtio=no in vnx.conf<br />
mount_msdos /dev/sd1c /mnt # if virtio=yes in vnx.conf<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
sed -i 's#bin/sh#bin/ksh#' /etc/rc.d/vnxaced<br />
<li>Create a file /etc/vnx_rootfs_version to store version number and informacion about modification:</li><br />
VER=v0.25<br />
OS=OpenBSD 7.2<br />
DESC=Basic OpenBSD 7.2 root filesystem without GUI<br />
<li>Configure interface em0 so that it does not get configured with DHCP. To do that, if file /etc/hostname.em0 exists, edit it and delete or comment the line with "dhcp".</li><br />
<li>Stop the machine with that script:</li><br />
vnx_halt<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_openbsd64.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to: <br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64-7.2.qcow2 --arch x86_64<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient re0<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
== Known problems ==<br />
<br />
== OpenBSD tips ==<br />
<br />
To upgrade OpenBSD to the next release, the OpenBSD site provides useful hints. For instance, to upgrade from 7.1 to 7.2, you can follow the instructions provided in http://www.openbsd.org/faq/upgrade72.html</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsopenbsd&diff=2659
Vnx-rootfsopenbsd
2023-02-26T22:43:17Z
<p>David: </p>
<hr />
<div>{{Title|How to create a KVM OpenBSD root filesystem for VNX}}<br />
<br />
Follow this procedure to create a KVM OpenBSD based root filesystem for VNX. The procedure has been tested with OpenBSD 7.2. <br />
<br />
== Basic installation ==<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_openbsd64-7.2.qcow2 20G<br />
<li>Get OpenBSD installation CD. For example:</li><br />
wget http://ftp.eu.openbsd.org/pub/OpenBSD/7.2/amd64/install72.iso<br />
mv install72.iso /almacen/iso/openbsd-install72.iso<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_openbsd64-7.2.qcow2 --install-media /almacen/iso/openbsd-install72.iso --mem 2G --arch=x86_64<br />
<li>Follow OpenBSD installation menus to install a basic system:</li><br />
<ul><br />
<li>When asked about the network interface, answer "done" to not configure the network now.</li><br />
<li>Answer 'yes' to the question "Change the default console to com0" to enable serial console.</li><br />
<li>Add a user named "vnx".</li><br />
<li>Use the whole "wd0" or "sd0" disk and "Auto layout".</li><br />
<li>Choose cd0 for the "location of sets". Choose the default "sets".</li><br />
</ul><br />
<li>After ending installation, but before shutting down the virtual machine, you have to disable mpbios, as follows:</li><br />
chroot /mnt<br />
config -ef /bsd<br />
disable mpbios<br />
quit<br />
<li>Finally, halt the system:</li><br />
halt -p<br />
</ul><br />
<br />
The OS installer will offer to reboot, but do not do that. Instead, close the VM console window and then, from the host OS, destroy the virtual machine:<br />
# virsh list<br />
Id Nombre Estado<br />
----------------------------------------------------<br />
33 vnx_rootfs_kvm_openbsd64-7.2.qcow2-7440 ejecutando<br />
<br />
# virsh destroy 33<br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Start the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64-7.2.qcow2 --update-aced --mem 2G --arch x86_64 --vcpu 2<br />
Note: ignore the errors "timeout waiting for response on VM socket".<br />
<li>Access the system through the text console to easy the copy-paste of commands:</li><br />
# virsh list<br />
Id Name State<br />
----------------------------------------------------<br />
31 vnx_rootfs_kvm_openbsd64-7.2.qcow2-912 running<br />
<br />
# virsh console 31<br />
<li>In case you do not have access to the serial console, you can configure it manually by editing /etc/ttys file and changing the line:</li><br />
tty00 "/usr/libexec/getty std.9600" dialup off secure<br />
to:<br />
tty00 "/usr/libexec/getty std.9600" vt100 on secure<br />
Reboot the system after modifying the ttys file.<br />
<li>Loogin as root in the console and configure the network with DHCP:</li><br />
dhclient re0<br />
<li>Configure the environment variable with network repository:</li><br />
export PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/<br />
<li>Install bash and change package repository (change ftp.es.freebsd.org to your nearest mirror):</li><br />
pkg_add -r bash <br />
usermod -s /usr/local/bin/bash root<br />
usermod -s /usr/local/bin/bash vnx<br />
echo "export PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/" > ~/.bash_profile<br />
Note: in case pkg_add does not progress, try using another openbsd mirror in PKG_PATH variable.<br />
<li>Install XML::LibXML and NetAddr-IP perl libraries:</li><br />
pkg_add -r p5-XML-LibXML p5-NetAddr-IP <br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount_msdos /dev/wd1i /mnt # if virtio=no in vnx.conf<br />
mount_msdos /dev/sd1c /mnt # if virtio=yes in vnx.conf<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
sed -i 's#bin/sh#bin/ksh#' /etc/rc.d/vnxaced<br />
<li>Create a file /etc/vnx_rootfs_version to store version number and informacion about modification:</li><br />
VER=v0.25<br />
OS=OpenBSD 7.2<br />
DESC=Basic OpenBSD 7.2 root filesystem without GUI<br />
<li>Configure interface em0 so that it does not get configured with DHCP. To do that, if file /etc/hostname.em0 exists, edit it and delete or comment the line with "dhcp".</li><br />
<li>Stop the machine with that script:</li><br />
vnx_halt<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_openbsd64.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to: <br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64-7.2.qcow2 --arch x86_64<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient re0<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
== Known problems ==<br />
<br />
== OpenBSD tips ==<br />
<br />
To upgrade OpenBSD to the next release, the OpenBSD site provides useful hints. For instance, to upgrade from 5.8 to 5.9, you can follow the instructions provided in http://www.openbsd.org/faq/upgrade59.html</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsopenbsd&diff=2658
Vnx-rootfsopenbsd
2023-02-26T22:29:57Z
<p>David: </p>
<hr />
<div>{{Title|How to create a KVM OpenBSD root filesystem for VNX}}<br />
<br />
Follow this procedure to create a KVM OpenBSD based root filesystem for VNX. The procedure has been tested with OpenBSD 5.8. <br />
<br />
== Basic installation ==<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
# 32 bits<br />
qemu-img create -f qcow2 vnx_rootfs_kvm_openbsd.qcow2 12G<br />
# 64 bits<br />
qemu-img create -f qcow2 vnx_rootfs_kvm_openbsd64.qcow2 12G<br />
<li>Get OpenBSD installation CD. For example:</li><br />
# 32 bits<br />
wget http://mirror.meerval.net/pub/OpenBSD/5.9/i386/install59.iso<br />
mv install59.iso /almacen/iso/openbsd-install59-i386.iso<br />
# 64 bits<br />
wget http://ftp.eu.openbsd.org/pub/OpenBSD/5.9/amd64/install59.iso<br />
mv install59.iso /almacen/iso/openbsd-install59-amd64.iso<br />
<li>Create the virtual machine with:</li><br />
# 32 bits<br />
vnx --create-rootfs vnx_rootfs_kvm_openbsd.qcow2 --install-media /almacen/iso/openbsd-install59-i386.iso --mem 512M<br />
# 64 bits<br />
vnx --create-rootfs vnx_rootfs_kvm_openbsd64.qcow2 --install-media /almacen/iso/openbsd-install59-amd64.iso --mem 512M --arch=x86_64<br />
<li>Follow OpenBSD installation menus to install a basic system:</li><br />
<ul><br />
<li>When asked about the network interface, answer "done" to not configure the network now.</li><br />
<li>Answer 'yes' to the question "Change the default console to com0" to enable serial console.</li><br />
<li>Add a user named "vnx".</li><br />
<li>Use the whole "wd0" or "sd0" disk and "Auto layout".</li><br />
<li>Choose cd0 for the "location of sets". Choose the default "sets".</li><br />
</ul><br />
<li>After ending installation, but before shutting down the virtual machine, you have to disable mpbios, as follows:</li><br />
chroot /mnt<br />
config -ef /bsd<br />
disable mpbios<br />
quit<br />
<li>Finally, halt the system:</li><br />
halt -p<br />
</ul><br />
<br />
The OS installer will offer to reboot, but do not do that. Instead, close the VM console window and then, from the host OS, destroy the virtual machine:<br />
# virsh list<br />
Id Nombre Estado<br />
----------------------------------------------------<br />
33 vnx_rootfs_kvm_openbsd-5.9-v025.qcow2-7440 ejecutando<br />
<br />
# virsh destroy 33<br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Start the system with the following command:</li><br />
# 32 bits<br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd.qcow2 --update-aced --mem 512M<br />
# 64 bits<br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd64.qcow2 --update-aced --mem 512M --arch x86_64<br />
Note: ignore the errors "timeout waiting for response on VM socket".<br />
<li>Access the system through the text console to easy the copy-paste of commands:</li><br />
# virsh list<br />
Id Name State<br />
----------------------------------------------------<br />
31 vnx_rootfs_kvm_openbsd64-5.9-v025.qcow2-912 running<br />
<br />
# virsh console 31<br />
<li>In case you do not have access to the serial console, you can configure it manually by editting /etc/ttys file and changing the line:</li><br />
tty00 "/usr/libexec/getty std.9600" dialup off secure<br />
to:<br />
tty00 "/usr/libexec/getty std.9600" vt100 on secure<br />
Reboot the system after modifying the ttys file.<br />
<li>Loogin as root in the console and configure the network with DHCP:</li><br />
dhclient re0<br />
<li>Configure the environment variable with network repository:</li><br />
export PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/<br />
<li>Install bash and change package repository (change ftp.es.freebsd.org to your nearest mirror):</li><br />
pkg_add -r bash <br />
usermod -s /usr/local/bin/bash root<br />
usermod -s /usr/local/bin/bash vnx<br />
echo "export PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`machine -a`/" > ~/.bash_profile<br />
<li>Install XML::LibXML and NetAddr-IP perl libraries:</li><br />
pkg_add -r p5-XML-LibXML p5-NetAddr-IP <br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount_msdos /dev/wd1i /mnt # if virtio=no in vnx.conf<br />
mount_msdos /dev/sd1c /mnt # if virtio=yes in vnx.conf<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
sed -i 's#bin/sh#bin/ksh#' /etc/rc.d/vnxaced<br />
<li>Create a file /etc/vnx_rootfs_version to store version number and informacion about modification:</li><br />
VER=v0.25<br />
OS=OpenBSD 5.8<br />
DESC=Basic OpenBSD 5.8 root filesystem without GUI<br />
<li>Configure interface em0 so that it does not get configured with DHCP. To do that, if file /etc/hostname.em0 exists, edit it and delete or comment the line with "dhcp".</li><br />
<li>Stop the machine with that script:</li><br />
vnx_halt<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_openbsd.xml (32 bits) or simple_openbsd64.xml (64 bits) scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to: <br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
# 32 bits<br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd.qcow2<br />
# 64 bits<br />
vnx --modify-rootfs vnx_rootfs_kvm_openbsd.qcow2 --arch x86_64<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient re0<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
== Known problems ==<br />
<br />
== OpenBSD tips ==<br />
<br />
To upgrade OpenBSD to the next release, the OpenBSD site provides useful hints. For instance, to upgrade from 5.8 to 5.9, you can follow the instructions provided in http://www.openbsd.org/faq/upgrade59.html</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsubuntu&diff=2657
Vnx-rootfsubuntu
2023-01-03T23:55:12Z
<p>David: </p>
<hr />
<div>{{Title|How to create a KVM Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a KVM Ubuntu based root filesystem for VNX. The procedure has been tested with Ubuntu 9.10, 10.04, 10.10, 11.04, 12.04, 13.04, 13.10, 14.04, 14.10, 15.04, 15.10 and 16.04.<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_ubuntu.qcow2 20G<br />
<li>Get Ubuntu installation CD. For example:</li><br />
wget ftp://ftp.rediris.es/mirror/ubuntu-releases/16.04/ubuntu-16.04-server-i386.iso<br />
cp ubuntu-16.04-server-i386.iso /almacen/iso<br />
Note: use 'server' or 'desktop' CD versions depending on the system you want to create.<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --install-media /almacen/iso/ubuntu-16.04-server-i386.iso --mem 512M<br />
Note: add '''"--arch x86_64"''' option for 64 bits virtual machines<br />
<li>Follow Ubuntu installation menus to install a basic system with ssh server.</li><br />
<li>Configure a serial console on ttyS0 (skip this step for 15.04 or later releases):</li><br />
cd /etc/init<br />
cp tty1.conf ttyS0.conf<br />
sed -i -e 's/tty1/ttyS0/' ttyS0.conf<br />
<li>Activate startup traces on serial console by editting /etc/default/grub file and setting the GRUB_CMDLINE_LINUX_DEFAULT variable to "console=ttyS0". Also change the boot menu timeout to 0 (sometimes virtual machines get stopped on the boot menu when starting on high loaded systems):</li><br />
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
<!--li>Only for Ubuntu 15.10 or later releases (do not include it in 20.04, it conflicts with udev):</li><br />
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"<br />
--><br />
<li>Make grub process the previous changes:</li><br />
update-grub<br />
<li>Add a timeout to systemd-networkd-wait-online service to avoid long waits at startup. Edit /lib/systemd/system/systemd-networkd-wait-online.service and change the following line:</li><br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
<li>Finally, delete the net udev rules file and halt the system:</li><br />
rm /etc/udev/rules.d/70-persistent-net.rules<br />
halt -p<br />
</ul><br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Restart the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced --mem 512M<br />
Note: add '''"--arch x86_64"''' option for 64 bits virtual machines<br />
Note: ignore the errors "timeout waiting for response on VM socket". 768M are needed if you are installing a root filesystem with desktop interface<br />
<li>Access the system through the text console to easy the copy-paste of commands:</li><br />
virsh console vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Access the console and sudo root:</li><br />
sudo su<br />
<li>Update the system</li><br />
apt-get update<br />
apt-get dist-upgrade<br />
<li>Install XML::DOM perl package and ACPI daemon:</li><br />
apt-get install libxml-libxml-perl libnetaddr-ip-perl acpid<br />
<li>For 17.10 or newer install ifupdown</li><br />
apt-get install ifupdown<br />
<!--li>Only for Ubuntu 10.04:</li><br />
<ul><br />
<li>create /media/cdrom* directories:</li><br />
mkdir /media/cdrom0<br />
mkdir /media/cdrom1<br />
ln -s /media/cdrom0 /media/cdrom<br />
ln -s /cdrom /media/cdrom<br />
<li>add the following lines to /etc/fstab:</li><br />
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0<br />
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto,exec,utf8 0 0<br />
</ul--><br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
Change 'sdb' to 'vdb' if virtio drivers are being used.<br />
<li>Edit /etc/network/interfaces file and comment all lines related to eth0, eth1, etc interfaces. Leave only the loopback (lo) interface.</li><br />
<li>For Ubuntu 22.04, uninstall cloud-init:</li><br />
apt remove cloud-init cloud-initramfs-copymods cloud-initramfs-dyn-netconf cloud-guest-utils netplan.io<br />
<li>Optional: install graphical user interface.</li><br />
<ul><br />
<li>Minimal:</li><br />
# recommended option<br />
sudo apt-get install lubuntu-desktop<br />
<br />
# old recipe not tested in later versions<br />
sudo apt-get install xorg gnome-core gksu gdm gnome-system-tools gnome-nettool firefox-gnome-support<br />
<li>Complete:</li><br />
sudo apt-get install ubuntu-desktop<br />
Note: to avoid nautilus being launched every time you remotely execute a command on the virtual machine using VNX (which interferes with the normal execution of commands), you should disable the start of programs when media insertion takes place. Go to "System settings->System->Details->Removable Media" and select the checkbox "Never prompt or start programs on media insertion".<br />
<!--<br />
nautilus automount feature. Just execute gconf-editor and create a variable "/apps/nautilus/preferences/media_automount" and set it to 0. <br />
This does not seem to work:<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount" "false"<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount_open" "false"<br />
--><br />
</ul><br />
<li>Optional: install other services:</li><br />
<ul><br />
<li>Apache server:</li><br />
sudo apt-get install apache2<br />
update-rc.d -f apache2 remove # to avoid automatic start in old versions<br />
systemctl disable apache2.service # to avoid automatic start in new versions<br />
<br />
<li>Other tools</li><br />
sudo apt-get install traceroute<br />
sudo apt-get install xterm # needed to have the 'resize' tool to resize consoles <br />
</ul><br />
<br />
<li>Create a file /etc/vnx_rootfs_version to store version number and informacion about modification:</li><br />
<pre><br />
VER=v0.25<br />
OS=Ubuntu 16.04 32 bits<br />
DESC=Basic Ubuntu 16.04 root filesystem without GUI<br />
</pre><br />
<br />
<li>Zero the image empty space to allow reducing the size of the image:</li><br />
dd if=/dev/zero of=/mytempfile<br />
rm -f /mytempfile<br />
<br />
<li>Stop the machine with vnx_halt:</li><br />
sudo vnx_halt<br />
<br />
<li>Reduce the size of the image:</li><br />
mv vnx_rootfs_kvm_ubuntu.qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak<br />
qemu-img convert -O qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak vnx_rootfs_kvm_ubuntu.qcow2<br />
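You can optionally check the format and size of the resulting image with:<br />
qemu-img info vnx_rootfs_kvm_ubuntu.qcow2<br />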
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_ubuntu.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to:<br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient eth0<br />
Note: use "ip link show" to know which network interface to use.<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
==== Examples ====<br />
<br />
<ul><br />
<li>dhcp server and relay:</li><br />
<ul><br />
<li>Install dhcp3 packages:</li><br />
apt-get install dhcp3-server dhcp3-relay<br />
<li>Disable autostart (optional):</li><br />
update-rc.d -f isc-dhcp-server remove<br />
update-rc.d -f isc-dhcp-relay remove<br />
</ul><br />
<br />
<br />
</ul><br />
<br />
== Updating VNXACED ==<br />
<br />
You can automatically update the VNXACE daemon with the following command:<br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced -y<br />
If the VNXACE daemon is not updated automatically, you can do it manually by accessing the virtual machine console and typing:<br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
<br />
== Known problems ==<br />
<br />
<ul><br />
<li>Sometimes after restarting, the virtual machines stop at showing the grub menu and do not start until you manually choose one option. To avoid it, just follow the instructions here: http://www.linuxquestions.org/questions/linux-server-73/how-to-disable-grub-2-menu-even-after-server-crash-796562/. Beware that the changes you make to grub.cfg file are lost after executing "update-grub" command.<br />
</li><br />
<li>In Ubuntu 12.04 Desktop, graphical commands execution does not work. Command execution fails with "ERROR: no user logged on display :0.0" (see /var/log/vnxaced.log). If you just open a "terminal" window, commands work correctly (does not work if you open other applications; only when you start a terminal...).</li><br />
<li>Each time a cdrom is mounted (for example, whenever a command is executed on the virtual machine) the following error appears in the console:</li><br />
<pre><br />
Jul 27 22:33:31 vnx kernel: [ 4384.875886] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6<br />
Jul 27 22:33:31 vnx kernel: [ 4385.291374] ata1.01: BMDMA stat 0x5<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493411] sr 0:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 18 00 00 01 00<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493460] ata1.01: cmd a0/01:00:00:00:08/00:00:00:00:00/b0 tag 0 dma 2048 in<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493461] res 01/60:00:00:00:08/00:00:00:00:00/b0 Emask 0x3 (HSM violation)<br />
Jul 27 22:33:31 vnx kernel: [ 4386.263553] ata1.01: status: { ERR }<br />
</pre><br />
Despite the error trace, the commands are executed correctly. This error does not appear on Ubuntu 9.10 filesystems.<br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsubuntu&diff=2656
Vnx-rootfsubuntu
2023-01-03T17:22:28Z
<p>David: </p>
<hr />
<div>{{Title|How to create a KVM Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a KVM Ubuntu based root filesystem for VNX. The procedure has been tested with Ubuntu 9.10, 10.04, 10.10, 11.04, 12.04, 13.04, 13.10, 14.04, 14.10, 15.04, 15.10 and 16.04.<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_ubuntu.qcow2 20G<br />
<li>Get Ubuntu installation CD. For example:</li><br />
wget ftp://ftp.rediris.es/mirror/ubuntu-releases/16.04/ubuntu-16.04-server-i386.iso<br />
cp ubuntu-16.04-server-i386.iso /almacen/iso<br />
Note: use 'server' or 'desktop' CD versions depending on the system you want to create.<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --install-media /almacen/iso/ubuntu-16.04-server-i386.iso --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
<li>Follow Ubuntu installation menus to install a basic system with ssh server.</li><br />
<li>Configure a serial console on ttyS0 (skip this step for 15.04 or later releases):</li><br />
cd /etc/init<br />
cp tty1.conf ttyS0.conf<br />
sed -i -e 's/tty1/ttyS0/' ttyS0.conf<br />
<li>Activate startup traces on the serial console by editing the /etc/default/grub file and setting the GRUB_CMDLINE_LINUX_DEFAULT variable to "console=ttyS0". Also change the boot menu timeout to 0 (sometimes virtual machines get stuck on the boot menu when starting on heavily loaded systems):</li><br />
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
<!--li>Only for Ubuntu 15.10 or later releases (do not include it in 20.04, it conflicts with udev):</li><br />
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"<br />
--><br />
<li>Make grub process the previous changes:</li><br />
update-grub<br />
<li>Add a timeout to the systemd-networkd-wait-online service to avoid long waits at startup. Edit /lib/systemd/system/systemd-networkd-wait-online.service and change the following line (an alternative using a systemd drop-in file is sketched after this list):</li><br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
<li>Finally, delete the net udev rules file and halt the system:</li><br />
rm /etc/udev/rules.d/70-persistent-net.rules<br />
halt -p<br />
</ul><br />
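On releases using systemd, an equivalent way to apply the systemd-networkd-wait-online timeout is a drop-in file, which is not overwritten by package upgrades of the original unit (a sketch, not part of the original recipe):<br />
mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d<br />
<pre><br />
# /etc/systemd/system/systemd-networkd-wait-online.service.d/timeout.conf<br />
[Service]<br />
ExecStart=<br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
</pre><br />
systemctl daemon-reload<br />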
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Restart the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
Note: ignore the "timeout waiting for response on VM socket" errors. 768M of memory is needed if you are installing a root filesystem with a desktop interface<br />
<li>Access the system through the text console to ease copy-pasting commands:</li><br />
virsh console vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Access the console and become root with sudo:</li><br />
sudo su<br />
<li>Update the system</li><br />
apt-get update<br />
apt-get dist-upgrade<br />
<li>Install XML::DOM perl package and ACPI daemon:</li><br />
apt-get install libxml-libxml-perl libnetaddr-ip-perl acpid<br />
<li>For 17.10 or newer install ifupdown</li><br />
apt-get install ifupdown<br />
<!--li>Only for Ubuntu 10.04:</li><br />
<ul><br />
<li>create /media/cdrom* directories:</li><br />
mkdir /media/cdrom0<br />
mkdir /media/cdrom1<br />
ln -s /media/cdrom0 /media/cdrom<br />
ln -s /cdrom /media/cdrom<br />
<li>add the following lines to /etc/fstab:</li><br />
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0<br />
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto,exec,utf8 0 0<br />
</ul--><br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
Replace 'sdb' with 'vdb' if virtio drivers are being used.<br />
<li>Edit the /etc/network/interfaces file and comment out all lines related to the eth0, eth1, etc. interfaces. Leave only the loopback (lo) interface (see the example after this list).</li><br />
<li>For Ubuntu 22.04, uninstall cloud-init:</li><br />
apt remove cloud-init cloud-initramfs-copymods cloud-initramfs-dyn-netconf<br />
<li>Optional: install graphical user interface.</li><br />
<ul><br />
<li>Minimal:</li><br />
# recommended option<br />
sudo apt-get install lubuntu-desktop<br />
<br />
# old recipe not tested in later versions<br />
sudo apt-get install xorg gnome-core gksu gdm gnome-system-tools gnome-nettool firefox-gnome-support<br />
<li>Complete:</li><br />
sudo apt-get install ubuntu-desktop<br />
Note: to avoid nautilus being launched any time you remotely execute a command on the virtual machine using VNX (which interferes with the normal execution of commands), you should disable the start of programs when media insertion takes place. Go to "System settings->System->Details->Removable Media" and deselect the checkbox "Never prompt or start programs on media insertion".<br />
<!--<br />
nautilus automount feature. Just execute gconf-editor and create a variable "/apps/nautilus/preferences/media_automount" and set it to 0. <br />
This does not seem to work:<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount" "false"<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount_open" "false"<br />
--><br />
</ul><br />
<li>Optional: install other services:</li><br />
<ul><br />
<li>Apache server:</li><br />
sudo apt-get install apache2<br />
update-rc.d -f apache2 remove # to avoid automatic start in old versions<br />
systemctl disable apache2.service # to avoid automatic start in new versions<br />
<br />
<li>Other tools</li><br />
sudo apt-get install traceroute<br />
sudo apt-get install xterm # needed to have the 'resize' tool to resize consoles <br />
</ul><br />
<br />
<li>Create a file /etc/vnx_rootfs_version to store the version number and information about modifications:</li><br />
<pre><br />
VER=v0.25<br />
OS=Ubuntu 16.04 32 bits<br />
DESC=Basic Ubuntu 16.04 root filesystem without GUI<br />
</pre><br />
<br />
<li>Zero the image empty space to allow reducing the size of the image:</li><br />
dd if=/dev/zero of=/mytempfile<br />
rm -f /mytempfile<br />
<br />
<li>Stop the machine with vnx_halt:</li><br />
sudo vnx_halt<br />
<br />
<li>Reduce the size of the image:</li><br />
mv vnx_rootfs_kvm_ubuntu.qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak<br />
qemu-img convert -O qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak vnx_rootfs_kvm_ubuntu.qcow2<br />
<br />
</ul><br />
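For reference, after commenting out the eth* entries the /etc/network/interfaces file is typically reduced to something like this:<br />
<pre><br />
# /etc/network/interfaces: only the loopback interface remains configured<br />
auto lo<br />
iface lo inet loopback<br />
</pre><br />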
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_ubuntu.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to:<br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Check network connectivity. You may have to activate the network interface by hand:</li><br />
dhclient eth0<br />
Note: use "ip link show" to find out which network interface to use.<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
==== Examples ====<br />
<br />
<ul><br />
<li>dhcp server and relay:</li><br />
<ul><br />
<li>Install dhcp3 packages:</li><br />
apt-get install dhcp3-server dhcp3-relay<br />
<li>Disable autostart (optional):</li><br />
update-rc.d -f isc-dhcp-server remove<br />
update-rc.d -f isc-dhcp-relay remove<br />
</ul><br />
<br />
<br />
</ul><br />
<br />
== Updating VNXACED ==<br />
<br />
You can automatically update the VNXACE daemon with the following command:<br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced -y<br />
If the VNXACE daemon is not updated automatically, you can do it manually by accessing the virtual machine console and typing:<br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
<br />
== Known problems ==<br />
<br />
<ul><br />
<li>Sometimes after restarting, virtual machines stop at the grub menu and do not start until you manually choose an option. To avoid this, just follow the instructions here: http://www.linuxquestions.org/questions/linux-server-73/how-to-disable-grub-2-menu-even-after-server-crash-796562/. Beware that the changes you make to the grub.cfg file are lost after executing the "update-grub" command.<br />
</li><br />
<li>In Ubuntu 12.04 Desktop, graphical command execution does not work. Command execution fails with "ERROR: no user logged on display :0.0" (see /var/log/vnxaced.log). If you open a "terminal" window, commands work correctly (opening other applications does not help; only starting a terminal does).</li><br />
<li>Each time a cdrom is mounted (for example, whenever a command is executed on the virtual machine) the following error appears in the console:</li><br />
<pre><br />
Jul 27 22:33:31 vnx kernel: [ 4384.875886] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6<br />
Jul 27 22:33:31 vnx kernel: [ 4385.291374] ata1.01: BMDMA stat 0x5<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493411] sr 0:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 18 00 00 01 00<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493460] ata1.01: cmd a0/01:00:00:00:08/00:00:00:00:00/b0 tag 0 dma 2048 in<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493461] res 01/60:00:00:00:08/00:00:00:00:00/b0 Emask 0x3 (HSM violation)<br />
Jul 27 22:33:31 vnx kernel: [ 4386.263553] ata1.01: status: { ERR }<br />
</pre><br />
Despite the error trace, the commands are executed correctly. This error does not appear on Ubuntu 9.10 filesystems.<br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-install-ubuntu3&diff=2655
Vnx-install-ubuntu3
2022-12-25T00:41:00Z
<p>David: </p>
<hr />
<div>{{Title|VNX Installation over Ubuntu}}<br />
<br />
This section describes the procedure for manually installing VNX over Ubuntu 13.*, 14.*, 15.*, 16.*, 17.*, 18.* and 20.*.<br />
<br />
Open a root shell window and follow these steps:<br />
<ul><br />
<br />
<li>Install all the required packages (basic development, virtualization, Perl libraries and auxiliary packages). For Ubuntu 18.04 and older versions, replace 'libvirt-clients' with 'libvirt-bin':</li><br />
sudo apt-get update<br />
sudo apt-get install \<br />
bash-completion bridge-utils curl eog expect genisoimage gnome-terminal \<br />
graphviz libappconfig-perl libdbi-perl liberror-perl libexception-class-perl \<br />
libfile-homedir-perl libio-pty-perl libmath-round-perl libnetaddr-ip-perl \<br />
libnet-ip-perl libnet-ipv6addr-perl libnet-pcap-perl libnet-telnet-perl \<br />
libreadonly-perl libswitch-perl libsys-virt-perl libterm-readline-perl-perl \<br />
libxml-checker-perl libxml-dom-perl libxml-libxml-perl \<br />
libxml-parser-perl libxml-tidy-perl lxc lxc-templates net-tools \<br />
openvswitch-switch picocom pv qemu-kvm screen tree uml-utilities virt-manager \<br />
virt-viewer vlan w3m wmctrl xdotool xfce4-terminal xterm lsof libvirt-clients <br />
<br />
<li>Tune the libvirt configuration to work with VNX. In particular, edit the /etc/libvirt/qemu.conf file and set the following parameters (see this simple [[Vnx-install-modify-qemuconf|script]] to do it; a rough command-line sketch is also included after this list):</li><br />
security_driver = "none"<br />
user = "root"<br />
group = "root"<br />
cgroup_device_acl = [<br />
"/dev/null", "/dev/full", "/dev/zero",<br />
"/dev/random", "/dev/urandom",<br />
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",<br />
"/dev/rtc", "/dev/hpet", "/dev/vfio/vfio", "/dev/net/tun"<br />
]<br />
(you have to add "/dev/net/tun") and restart libvirtd for the changes to take effect:<br />
sudo restart libvirt-bin # for ubuntu 14.10 or older<br />
sudo systemctl restart libvirt-bin # for ubuntu 15.04 or 16.04<br />
sudo systemctl restart libvirtd # for ubuntu 18.04 or later<br />
<li>Check that libvirt is running correctly, for example, executing:</li><br />
sudo virsh list<br />
sudo virsh capabilities<br />
Note: Have a look at [[Vnx-install-trobleshooting|this document]] in case you get an error similar to this one: <br />
virsh: /usr/lib/libvirt.so.0: version LIBVIRT_PRIVATE-XXX not found (required by virsh)<br />
<br />
<li>Install VNX:</li><br />
mkdir /tmp/vnx-update<br />
cd /tmp/vnx-update<br />
rm -rf /tmp/vnx-update/vnx-*<br />
wget http://vnx.dit.upm.es/vnx/vnx-latest.tgz<br />
tar xfvz vnx-latest.tgz<br />
cd vnx-*-*<br />
sudo ./install_vnx<br />
<br />
<li>Restart apparmor:</li><br />
service apparmor restart # for Ubuntu 14.10 or older<br />
systemctl restart apparmor # for Ubuntu 15.04 or later <br />
<br />
<li>Create the VNX config file (/etc/vnx.conf). You can just move the sample config file:</li><br />
sudo mv /usr/share/vnx/etc/vnx.conf.sample /etc/vnx.conf<br />
<br />
<li>For Ubuntu 15.04 or newer: change parameter 'overlayfs_workdir_option' in vnx.conf to 'yes'</li><br />
[lxc]<br />
...<br />
overlayfs_workdir_option = 'yes'<br />
...<br />
<br />
<li>For Ubuntu 16.04 or later: change the LXC union_type to 'overlayfs'<br />
[lxc]<br />
...<br />
union_type='overlayfs'<br />
...<br />
<br />
<li>Download root file systems from http://vnx.dit.upm.es/vnx/filesystems and install them following these [[Vnx-install-root_fs|instructions]]</li><br />
<br />
<li>Optionally, enable bash-completion in your system to allow using VNX bash completion capabilities. For example, to enable it for all users in your system, just edit '/etc/bash.bashrc' and uncomment the following lines:</li><br />
<pre><br />
# enable bash completion in interactive shells<br />
if ! shopt -oq posix; then<br />
if [ -f /usr/share/bash-completion/bash_completion ]; then<br />
. /usr/share/bash-completion/bash_completion<br />
elif [ -f /etc/bash_completion ]; then<br />
. /etc/bash_completion<br />
fi<br />
fi<br />
</pre><br />
</ul><br />
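If you prefer to apply the qemu.conf changes from the command line, a rough sed-based sketch is shown below (it is not the script linked above; it only sets the three simple parameters listed before, and the result should be reviewed before restarting libvirtd):<br />
<pre><br />
cp /etc/libvirt/qemu.conf /etc/libvirt/qemu.conf.bak<br />
sed -i -e 's/^#\?security_driver = .*/security_driver = "none"/' \<br />
       -e 's/^#\?user = .*/user = "root"/' \<br />
       -e 's/^#\?group = .*/group = "root"/' /etc/libvirt/qemu.conf<br />
# the cgroup_device_acl list (remember to add "/dev/net/tun") is easier to edit by hand<br />
</pre><br />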
<br />
=== Additional install steps for Dynamips support ===<br />
<br />
* Install Dynamips and Dynagen:<br />
apt-get install dynamips dynagen<br />
<br />
* Create a file /etc/init.d/dynamips (taken from http://7200emu.hacki.at/viewtopic.php?t=2198):<br />
<pre><br />
#!/bin/sh<br />
# Start/stop the dynamips program as a daemon.<br />
#<br />
### BEGIN INIT INFO<br />
# Provides: dynamips<br />
# Required-Start:<br />
# Required-Stop:<br />
# Default-Start: 2 3 4 5<br />
# Default-Stop: 0 1 6<br />
# Short-Description: Cisco hardware emulator daemon<br />
### END INIT INFO<br />
<br />
DAEMON=/usr/bin/dynamips<br />
NAME=dynamips<br />
PORT=7200<br />
PIDFILE=/var/run/$NAME.pid <br />
LOGFILE=/var/log/$NAME.log<br />
DESC="Cisco Emulator"<br />
SCRIPTNAME=/etc/init.d/$NAME<br />
<br />
test -f $DAEMON || exit 0<br />
<br />
. /lib/lsb/init-functions<br />
<br />
<br />
case "$1" in<br />
start) log_daemon_msg "Starting $DESC " "$NAME"<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
stop) log_daemon_msg "Stopping $DESC " "$NAME"<br />
start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME<br />
log_end_msg $?<br />
;;<br />
restart) log_daemon_msg "Restarting $DESC " "$NAME"<br />
start-stop-daemon --stop --retry 5 --quiet --pidfile $PIDFILE --name $NAME<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
status)<br />
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $? <br />
#status $NAME<br />
#RETVAL=$?<br />
;; <br />
*) log_action_msg "Usage: $SCRIPTNAME {start|stop|restart|status}"<br />
exit 2<br />
;;<br />
esac<br />
exit 0<br />
<br />
</pre><br />
<br />
* Set execution permissions for the script and add it to system start-up:<br />
chmod +x /etc/init.d/dynamips<br />
update-rc.d dynamips defaults<br />
/etc/init.d/dynamips start<br />
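You can verify that the daemon started correctly using the status target defined in the script above:<br />
/etc/init.d/dynamips status<br />
tail /var/log/dynamips.log<br />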
<br />
* Download and install cisco IOS image:<br />
cd /usr/share/vnx/filesystems<br />
# Cisco image<br />
wget ... c3640-js-mz.124-19.image<br />
ln -s c3640-js-mz.124-19.image c3640<br />
<br />
* Calculate the idle-pc value for your computer following the procedure in http://dynagen.org/tutorial.htm:<br />
dynagen /usr/share/vnx/examples/R3640.net<br />
console R3640 # type 'no' to exit the config wizard and wait <br />
# for the router to completely start <br />
idlepc get R3640<br />
Once you know the idlepc value for your system, include it in /etc/vnx.conf file.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsubuntu&diff=2654
Vnx-rootfsubuntu
2021-11-01T13:21:24Z
<p>David: </p>
<hr />
<div>{{Title|How to create a KVM Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a KVM Ubuntu based root filesystem for VNX. The procedure has been tested with Ubuntu 9.10, 10.04, 10.10, 11.04, 12.04, 13.04, 13.10, 14.04, 14.10, 15.04, 15.10 and 16.04.<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_ubuntu.qcow2 20G<br />
<li>Get Ubuntu installation CD. For example:</li><br />
wget ftp://ftp.rediris.es/mirror/ubuntu-releases/16.04/ubuntu-16.04-server-i386.iso<br />
cp ubuntu-16.04-server-i386.iso /almacen/iso<br />
Note: use 'server' or 'desktop' CD versions depending on the system you want to create.<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --install-media /almacen/iso/ubuntu-16.04-server-i386.iso --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
<li>Follow Ubuntu installation menus to install a basic system with ssh server.</li><br />
<li>Configure a serial console on ttyS0 (skip this step for 15.04 or later releases):</li><br />
cd /etc/init<br />
cp tty1.conf ttyS0.conf<br />
sed -i -e 's/tty1/ttyS0/' ttyS0.conf<br />
<li>Activate startup traces on the serial console by editing the /etc/default/grub file and setting the GRUB_CMDLINE_LINUX_DEFAULT variable to "console=ttyS0". Also change the boot menu timeout to 0 (sometimes virtual machines get stuck on the boot menu when starting on heavily loaded systems):</li><br />
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
<!--li>Only for Ubuntu 15.10 or later releases (do not include it in 20.04, it conflicts with udev):</li><br />
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"<br />
--><br />
<li>Make grub process the previous changes:</li><br />
update-grub<br />
<li>Add a timeout to systemd-networkd-wait-online service to avoid long waits at startup. Edit /lib/systemd/system/systemd-networkd-wait-online.service and change the following line:</li><br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
<li>Finally, delete the net udev rules file and halt the system:</li><br />
rm /etc/udev/rules.d/70-persistent-net.rules<br />
halt -p<br />
</ul><br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Restart the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
Note: ignore the "timeout waiting for response on VM socket" errors. 768M of memory is needed if you are installing a root filesystem with a desktop interface<br />
<li>Access the system through the text console to ease copy-pasting commands:</li><br />
virsh console vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Access the console and become root with sudo:</li><br />
sudo su<br />
<li>Update the system</li><br />
apt-get update<br />
apt-get dist-upgrade<br />
<li>Install XML::DOM perl package and ACPI daemon:</li><br />
apt-get install libxml-libxml-perl libnetaddr-ip-perl acpid<br />
<li>For 17.10 or newer install ifupdown</li><br />
apt-get install ifupdown<br />
<!--li>Only for Ubuntu 10.04:</li><br />
<ul><br />
<li>create /media/cdrom* directories:</li><br />
mkdir /media/cdrom0<br />
mkdir /media/cdrom1<br />
ln -s /media/cdrom0 /media/cdrom<br />
ln -s /cdrom /media/cdrom<br />
<li>add the following lines to /etc/fstab:</li><br />
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0<br />
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto,exec,utf8 0 0<br />
</ul--><br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
Replace 'sdb' with 'vdb' if virtio drivers are being used.<br />
<li>Edit the /etc/network/interfaces file and comment out all lines related to the eth0, eth1, etc. interfaces. Leave only the loopback (lo) interface.</li><br />
<li>Optional: install graphical user interface.</li><br />
<ul><br />
<li>Minimal:</li><br />
# recommended option<br />
sudo apt-get install lubuntu-desktop<br />
<br />
# old recipe not tested in later versions<br />
sudo apt-get install xorg gnome-core gksu gdm gnome-system-tools gnome-nettool firefox-gnome-support<br />
<li>Complete:</li><br />
sudo apt-get install ubuntu-desktop<br />
Note: to avoid nautilus being launched any time you remotely execute a command on the virtual machine using VNX (which interferes with the normal execution of commands), you should disable the start of programs when media insertion takes place. Go to "System settings->System->Details->Removable Media" and deselect the checkbox "Never prompt or start programs on media insertion".<br />
<!--<br />
nautilus automount feature. Just execute gconf-editor and create a variable "/apps/nautilus/preferences/media_automount" and set it to 0. <br />
This does not seem to work:<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount" "false"<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount_open" "false"<br />
--><br />
</ul><br />
<li>Optional: install other services:</li><br />
<ul><br />
<li>Apache server:</li><br />
sudo apt-get install apache2<br />
update-rc.d -f apache2 remove # to avoid automatic start in old versions<br />
systemctl disable apache2.service # to avoid automatic start in new versions<br />
<br />
<li>Other tools</li><br />
sudo apt-get install traceroute<br />
sudo apt-get install xterm # needed to have the 'resize' tool to resize consoles <br />
</ul><br />
<br />
<li>Create a file /etc/vnx_rootfs_version to store the version number and information about modifications:</li><br />
<pre><br />
VER=v0.25<br />
OS=Ubuntu 16.04 32 bits<br />
DESC=Basic Ubuntu 16.04 root filesystem without GUI<br />
</pre><br />
<br />
<li>Zero the image empty space to allow reducing the size of the image:</li><br />
dd if=/dev/zero of=/mytempfile<br />
rm -f /mytempfile<br />
<br />
<li>Stop the machine with vnx_halt:</li><br />
sudo vnx_halt<br />
<br />
<li>Reduce the size of the image:</li><br />
mv vnx_rootfs_kvm_ubuntu.qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak<br />
qemu-img convert -O qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak vnx_rootfs_kvm_ubuntu.qcow2<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_ubuntu.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to:<br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient eth0<br />
Note: use "ip link show" to know which network interface to use.<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
==== Examples ====<br />
<br />
<ul><br />
<li>dhcp server and relay:</li><br />
<ul><br />
<li>Install dhcp3 packages:</li><br />
apt-get install dhcp3-server dhcp3-relay<br />
<li>Disable autostart (optional):</li><br />
update-rc.d -f isc-dhcp-server remove<br />
update-rc.d -f isc-dhcp-relay remove<br />
</ul><br />
<br />
<br />
</ul><br />
<br />
== Updating VNXACED ==<br />
<br />
You can automatically update the VNXACE daemon with the following command:<br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced -y<br />
If the VNXACE daemon is not updated automatically, you can do it manually by accessing the virtual machine console and typing:<br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
<br />
== Known problems ==<br />
<br />
<ul><br />
<li>Sometimes after restarting, virtual machines stop at the grub menu and do not start until you manually choose an option. To avoid this, just follow the instructions here: http://www.linuxquestions.org/questions/linux-server-73/how-to-disable-grub-2-menu-even-after-server-crash-796562/. Beware that the changes you make to the grub.cfg file are lost after executing the "update-grub" command.<br />
</li><br />
<li>In Ubuntu 12.04 Desktop, graphical command execution does not work. Command execution fails with "ERROR: no user logged on display :0.0" (see /var/log/vnxaced.log). If you open a "terminal" window, commands work correctly (opening other applications does not help; only starting a terminal does).</li><br />
<li>Each time a cdrom is mounted (for example, whenever a command is executed on the virtual machine) the following error appears in the console:</li><br />
<pre><br />
Jul 27 22:33:31 vnx kernel: [ 4384.875886] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6<br />
Jul 27 22:33:31 vnx kernel: [ 4385.291374] ata1.01: BMDMA stat 0x5<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493411] sr 0:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 18 00 00 01 00<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493460] ata1.01: cmd a0/01:00:00:00:08/00:00:00:00:00/b0 tag 0 dma 2048 in<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493461] res 01/60:00:00:00:08/00:00:00:00:00/b0 Emask 0x3 (HSM violation)<br />
Jul 27 22:33:31 vnx kernel: [ 4386.263553] ata1.01: status: { ERR }<br />
</pre><br />
Despite the error trace, the commands are executed correctly. This error does not appear on Ubuntu 9.10 filesystems.<br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsubuntu&diff=2653
Vnx-rootfsubuntu
2021-11-01T13:20:05Z
<p>David: </p>
<hr />
<div>{{Title|How to create a KVM Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a KVM Ubuntu based root filesystem for VNX. The procedure has been tested with Ubuntu 9.10, 10.04, 10.10, 11.04, 12.04, 13.04, 13.10, 14.04, 14.10, 15.04, 15.10 and 16.04.<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_ubuntu.qcow2 20G<br />
<li>Get Ubuntu installation CD. For example:</li><br />
wget ftp://ftp.rediris.es/mirror/ubuntu-releases/16.04/ubuntu-16.04-server-i386.iso<br />
cp ubuntu-16.04-server-i386.iso /almacen/iso<br />
Note: use 'server' or 'desktop' CD versions depending on the system you want to create.<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --install-media /almacen/iso/ubuntu-16.04-server-i386.iso --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
<li>Follow Ubuntu installation menus to install a basic system with ssh server.</li><br />
<li>Configure a serial console on ttyS0 (skip this step for 15.04 or later releases):</li><br />
cd /etc/init<br />
cp tty1.conf ttyS0.conf<br />
sed -i -e 's/tty1/ttyS0/' ttyS0.conf<br />
<li>Activate startup traces on the serial console by editing the /etc/default/grub file and setting the GRUB_CMDLINE_LINUX_DEFAULT variable to "console=ttyS0". Also change the boot menu timeout to 0 (sometimes virtual machines get stuck on the boot menu when starting on heavily loaded systems):</li><br />
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
<li>Only for Ubuntu 15.10 or later releases (does not work in 20.04):</li><br />
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"<br />
<li>Make grub process the previous changes:</li><br />
update-grub<br />
<li>Add a timeout to systemd-networkd-wait-online service to avoid long waits at startup. Edit /lib/systemd/system/systemd-networkd-wait-online.service and change the following line:</li><br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
<li>Finally, delete the net udev rules file and halt the system:</li><br />
rm /etc/udev/rules.d/70-persistent-net.rules<br />
halt -p<br />
</ul><br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Restart the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced --mem 512M<br />
Note: add the '''"--arch x86_64"''' option for 64-bit virtual machines<br />
Note: ignore the "timeout waiting for response on VM socket" errors. 768M of memory is needed if you are installing a root filesystem with a desktop interface<br />
<li>Access the system through the text console to ease copy-pasting commands:</li><br />
virsh console vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Access the console and become root with sudo:</li><br />
sudo su<br />
<li>Update the system</li><br />
apt-get update<br />
apt-get dist-upgrade<br />
<li>Install XML::DOM perl package and ACPI daemon:</li><br />
apt-get install libxml-libxml-perl libnetaddr-ip-perl acpid<br />
<li>For 17.10 or newer install ifupdown</li><br />
apt-get install ifupdown<br />
<!--li>Only for Ubuntu 10.04:</li><br />
<ul><br />
<li>create /media/cdrom* directories:</li><br />
mkdir /media/cdrom0<br />
mkdir /media/cdrom1<br />
ln -s /media/cdrom0 /media/cdrom<br />
ln -s /cdrom /media/cdrom<br />
<li>add the following lines to /etc/fstab:</li><br />
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0<br />
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto,exec,utf8 0 0<br />
</ul--><br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
Replace 'sdb' with 'vdb' if virtio drivers are being used.<br />
<li>Edit the /etc/network/interfaces file and comment out all lines related to the eth0, eth1, etc. interfaces. Leave only the loopback (lo) interface.</li><br />
<li>Optional: install graphical user interface.</li><br />
<ul><br />
<li>Minimal:</li><br />
# recommended option<br />
sudo apt-get install lubuntu-desktop<br />
<br />
# old recipe not tested in later versions<br />
sudo apt-get install xorg gnome-core gksu gdm gnome-system-tools gnome-nettool firefox-gnome-support<br />
<li>Complete:</li><br />
sudo apt-get install ubuntu-desktop<br />
Note: to avoid nautilus being launched any time you remotely execute a command on the virtual machine using VNX (which interferes with the normal execution of commands), you should disable the start of programs when media insertion takes place. Go to "System settings->System->Details->Removable Media" and deselect the checkbox "Never prompt or start programs on media insertion".<br />
<!--<br />
nautilus automount feature. Just execute gconf-editor and create a variable "/apps/nautilus/preferences/media_automount" and set it to 0. <br />
This does not seem to work:<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount" "false"<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount_open" "false"<br />
--><br />
</ul><br />
<li>Optional: install other services:</li><br />
<ul><br />
<li>Apache server:</li><br />
sudo apt-get install apache2<br />
update-rc.d -f apache2 remove # to avoid automatic start in old versions<br />
systemctl disable apache2.service # to avoid automatic start in new versions<br />
<br />
<li>Other tools</li><br />
sudo apt-get install traceroute<br />
sudo apt-get install xterm # needed to have the 'resize' tool to resize consoles <br />
</ul><br />
<br />
<li>Create a file /etc/vnx_rootfs_version to store the version number and information about modifications:</li><br />
<pre><br />
VER=v0.25<br />
OS=Ubuntu 16.04 32 bits<br />
DESC=Basic Ubuntu 16.04 root filesystem without GUI<br />
</pre><br />
<br />
<li>Zero the image empty space to allow reducing the size of the image:</li><br />
dd if=/dev/zero of=/mytempfile<br />
rm -f /mytempfile<br />
<br />
<li>Stop the machine with vnx_halt:</li><br />
sudo vnx_halt<br />
<br />
<li>Reduce the size of the image:</li><br />
mv vnx_rootfs_kvm_ubuntu.qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak<br />
qemu-img convert -O qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak vnx_rootfs_kvm_ubuntu.qcow2<br />
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_ubuntu.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to:<br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Check network connectivity. Maybe you have to activate the network interface by hand:</li><br />
dhclient eth0<br />
Note: use "ip link show" to know which network interface to use.<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
==== Examples ====<br />
<br />
<ul><br />
<li>dhcp server and relay:</li><br />
<ul><br />
<li>Install dhcp3 packages:</li><br />
apt-get install dhcp3-server dhcp3-relay<br />
<li>Disable autostart (optional):</li><br />
update-rc.d -f isc-dhcp-server remove<br />
update-rc.d -f isc-dhcp-relay remove<br />
</ul><br />
<br />
<br />
</ul><br />
<br />
== Updating VNXACED ==<br />
<br />
You can automatically update the VNXACE daemon with the following command:<br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced -y<br />
If the VNXACE daemon is not updated automatically, you can do it manually by accessing the virtual machine console and typing:<br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
<br />
== Known problems ==<br />
<br />
<ul><br />
<li>Sometimes after restarting, virtual machines stop at the grub menu and do not start until you manually choose an option. To avoid this, just follow the instructions here: http://www.linuxquestions.org/questions/linux-server-73/how-to-disable-grub-2-menu-even-after-server-crash-796562/. Beware that the changes you make to the grub.cfg file are lost after executing the "update-grub" command.<br />
</li><br />
<li>In Ubuntu 12.04 Desktop, graphical command execution does not work. Command execution fails with "ERROR: no user logged on display :0.0" (see /var/log/vnxaced.log). If you open a "terminal" window, commands work correctly (opening other applications does not help; only starting a terminal does).</li><br />
<li>Each time a cdrom is mounted (for example, whenever a command is executed on the virtual machine) the following error appears in the console:</li><br />
<pre><br />
Jul 27 22:33:31 vnx kernel: [ 4384.875886] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6<br />
Jul 27 22:33:31 vnx kernel: [ 4385.291374] ata1.01: BMDMA stat 0x5<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493411] sr 0:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 18 00 00 01 00<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493460] ata1.01: cmd a0/01:00:00:00:08/00:00:00:00:00/b0 tag 0 dma 2048 in<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493461] res 01/60:00:00:00:08/00:00:00:00:00/b0 Emask 0x3 (HSM violation)<br />
Jul 27 22:33:31 vnx kernel: [ 4386.263553] ata1.01: status: { ERR }<br />
</pre><br />
Despite the error trace, the commands are executed correctly. This error does not appear on Ubuntu 9.10 filesystems.<br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-extnet&diff=2652
Vnx-extnet
2021-10-19T15:03:55Z
<p>David: </p>
<hr />
<div>{{Title|Connecting a Virtual Machine to an external network}}<br />
<br />
There are several ways to connect a virtual machine included in a VNX scenario to an external network. Here we present two options, both based on setting up a bridge and connecting the external interface to that bridge:<br />
* Option 1 consists of changing the host networking configuration to manually connect the external interface to a permanent bridge. Virtual machines that need external connectivity will simply be connected by VNX to that bridge.<br />
* Option 2 consists of dynamically creating a bridge and connecting the external interface to it during scenario creation.<br />
<br />
Option 1 is the recommended one because:<br />
* It facilitates the coexistence of multiple VNX scenarios sharing the external connectivity, as well as of other virtual machines not started with VNX.<br />
* It does not need to change the IP configuration of the external interface during scenario creation. When using bridge-based configurations, the IP address of the external interface has to be assigned to the bridge, not to the interface as is done normally. When using option 2, VNX has to reconfigure the external interface during scenario creation/release, which could in some cases lead to losing IP connectivity to the host.<br />
<br />
== Option 1: Static bridge associated with the external interface (recommended) ==<br />
<br />
<ul><br />
<li>Create a virtual bridge associated with the external interface (eth0 in this example). For example, on an Ubuntu host you have to edit the '''/etc/network/interfaces''' file and change it in the following way (adapt the addresses and masks to your case):</li><br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto br0<br />
iface br0 inet static<br />
address 10.1.1.7<br />
netmask 255.255.255.0<br />
gateway 10.1.1.1<br />
bridge_ports eth0<br />
bridge_stp off<br />
bridge_maxwait 0<br />
bridge_fd 0<br />
<br />
This will create a new bridge, '''br0''', connected to the host interface '''eth0''' and assign the IP address '''10.1.1.7''' previously assigned to '''eth0''' to the new bridge:<br />
<pre><br />
$ ip addr show<br />
...<br />
3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP qlen 1000<br />
link/ether 00:1e:4f:93:48:93 brd ff:ff:ff:ff:ff:ff<br />
inet 10.1.1.100/24 brd 10.1.1.255 scope global eth0<br />
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP <br />
link/ether 00:1e:4f:93:48:93 brd ff:ff:ff:ff:ff:ff<br />
inet 10.1.1.7/24 scope global br0<br />
...<br />
</pre><br />
<br />
<br />
'''IMPORTANT''': be aware that you are modifying the network configuration of your host interfaces and, in case of problems, you will lose connectivity to the host. These modifications should be done from the host console to avoid problems.<br />
<br />
<li>Restart the host and check that networking is working normally before continuing (a quick bridge check is sketched after this list).</li><br />
<br />
<li>In the VNX scenario define a <net> with the name of the bridge:</li><br />
<net name="br0" mode="virtual_bridge" managed="no"/><br />
<br />
<li>Configure the interface of the VM you want to connect externally on that net:</li><br />
<pre><br />
<vm name="vm1" type="libvirt" subtype="kvm" os="linux"><br />
... <br />
<if id="1" net="br0"><br />
<ipv4>10.1.1.40/24</ipv4><br />
</if><br />
...<br />
</vm><br />
</pre><br />
<li>Start the scenario and check that the VM has connectivity with the external networks.</li><br />
</ul><br />
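Before starting the scenario, it may be worth double-checking on the host that the bridge exists and that eth0 is attached to it (bridge-utils is already installed as a VNX dependency):<br />
brctl show br0<br />
ip addr show br0<br />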
<br />
== Option 2: Use of the external attribute of the <net> tag ==<br />
<br />
<ul><br />
<li>Define a VNX <net> with an external attribute:</li><br />
<net name="Net0" mode="virtual_bridge" external="eth0"/><br />
<br />
<li>Configure VM interface on that net:</li><br />
<pre><br />
<vm name="vm1" type="libvirt" subtype="kvm" os="linux"><br />
... <br />
<if id="1" net="Net0"><br />
<ipv4>10.1.1.40/24</ipv4><br />
</if><br />
...<br />
</vm><br />
</pre><br />
<br />
<li>Define host configuration in <host> section:</li><br />
<pre><br />
<host><br />
<hostif net="Net0"><br />
<ipv4 mask="255.255.255.0">10.1.1.7</ipv4><br />
</hostif><br />
<route type="ipv4" gw="10.1.1.7">default</route><br />
<physicalif name="eth0" type="ipv4" ip="10.1.1.7" mask="255.255.255.0" gw="10.1.17"/><br />
</host><br />
</pre><br />
<li>Start the scenario and check that the VM has connectivity with the external networks.</li><br />
<br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-install-ubuntu3&diff=2651
Vnx-install-ubuntu3
2021-10-12T10:33:12Z
<p>David: </p>
<hr />
<div>{{Title|VNX Installation over Ubuntu}}<br />
<br />
This section describes the procedure for manually installing VNX over Ubuntu 13.*, 14.*, 15.*, 16.*, 17.*, 18.* and 20.*.<br />
<br />
Open a root shell window and follow these steps:<br />
<ul><br />
<br />
<li>Install all the required packages (basic development, virtualization, Perl libraries and auxiliary packages). For Ubuntu 18.10 and 20.*, replace 'libvirt-bin' with 'libvirt-clients':</li><br />
sudo apt-get update<br />
sudo apt-get install \<br />
bash-completion bridge-utils curl eog expect genisoimage gnome-terminal \<br />
graphviz libappconfig-perl libdbi-perl liberror-perl libexception-class-perl \<br />
libfile-homedir-perl libio-pty-perl libmath-round-perl libnetaddr-ip-perl \<br />
libnet-ip-perl libnet-ipv6addr-perl libnet-pcap-perl libnet-telnet-perl \<br />
libreadonly-perl libswitch-perl libsys-virt-perl libterm-readline-perl-perl \<br />
libvirt-bin libxml-checker-perl libxml-dom-perl libxml-libxml-perl \<br />
libxml-parser-perl libxml-tidy-perl lxc lxc-templates net-tools \<br />
openvswitch-switch picocom pv qemu-kvm screen tree uml-utilities virt-manager \<br />
virt-viewer vlan w3m wmctrl xdotool xfce4-terminal xterm lsof<br />
<br />
<li>Tune libvirt configuration to work with VNX. In particular, edit /etc/libvirt/qemu.conf file and set the following parameters (see this simple [[Vnx-install-modify-qemuconf|script]] to do it):</li><br />
security_driver = "none"<br />
user = "root"<br />
group = "root"<br />
cgroup_device_acl = [<br />
"/dev/null", "/dev/full", "/dev/zero",<br />
"/dev/random", "/dev/urandom",<br />
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",<br />
"/dev/rtc", "/dev/hpet", "/dev/vfio/vfio", "/dev/net/tun"<br />
]<br />
(you have to add "/dev/net/tun") and restart libvirtd for the changes to take effect:<br />
sudo restart libvirt-bin # for ubuntu 14.10 or older<br />
sudo systemctl restart libvirt-bin # for ubuntu 15.04 or 16.04<br />
sudo systemctl restart libvirtd # for ubuntu 18.04 or later<br />
<li>Check that libvirt is running correctly, for example, executing:</li><br />
sudo virsh list<br />
sudo virsh capabilities<br />
Note: Have a look at [[Vnx-install-trobleshooting|this document]] in case you get an error similar to this one: <br />
virsh: /usr/lib/libvirt.so.0: version LIBVIRT_PRIVATE-XXX not found (required by virsh)<br />
<br />
<li>Install VNX:</li><br />
mkdir /tmp/vnx-update<br />
cd /tmp/vnx-update<br />
rm -rf /tmp/vnx-update/vnx-*<br />
wget http://vnx.dit.upm.es/vnx/vnx-latest.tgz<br />
tar xfvz vnx-latest.tgz<br />
cd vnx-*-*<br />
sudo ./install_vnx<br />
<br />
<li>Restart apparmor:</li><br />
service apparmor restart # for Ubuntu 14.10 or older<br />
systemctl restart apparmor # for Ubuntu 15.04 or later <br />
<br />
<li>Create the VNX config file (/etc/vnx.conf). You can just move the sample config file:</li><br />
sudo mv /usr/share/vnx/etc/vnx.conf.sample /etc/vnx.conf<br />
<br />
<li>For Ubuntu 15.04 or newer: change parameter 'overlayfs_workdir_option' in vnx.conf to 'yes'</li><br />
[lxc]<br />
...<br />
overlayfs_workdir_option = 'yes'<br />
...<br />
<br />
<li>For Ubuntu 16.04 or later: change the LXC union_type to 'overlayfs'<br />
[lxc]<br />
...<br />
union_type='overlayfs'<br />
...<br />
<br />
<li>Download root file systems from http://vnx.dit.upm.es/vnx/filesystems and install them following these [[Vnx-install-root_fs|instructions]]</li><br />
<br />
<li>Optionally, enable bash-completion in your system to allow using VNX bash completion capabilities. For example, to enable it for all users in your system, just edit '/etc/bash.bashrc' and uncomment the following lines:</li><br />
<pre><br />
# enable bash completion in interactive shells<br />
if ! shopt -oq posix; then<br />
if [ -f /usr/share/bash-completion/bash_completion ]; then<br />
. /usr/share/bash-completion/bash_completion<br />
elif [ -f /etc/bash_completion ]; then<br />
. /etc/bash_completion<br />
fi<br />
fi<br />
</pre><br />
</ul><br />
<br />
=== Additional install steps for Dynamips support ===<br />
<br />
* Install Dynamips and Dynagen:<br />
apt-get install dynamips dynagen<br />
<br />
* Create a file /etc/init.d/dynamips (taken from http://7200emu.hacki.at/viewtopic.php?t=2198):<br />
<pre><br />
#!/bin/sh<br />
# Start/stop the dynamips program as a daemon.<br />
#<br />
### BEGIN INIT INFO<br />
# Provides: dynamips<br />
# Required-Start:<br />
# Required-Stop:<br />
# Default-Start: 2 3 4 5<br />
# Default-Stop: 0 1 6<br />
# Short-Description: Cisco hardware emulator daemon<br />
### END INIT INFO<br />
<br />
DAEMON=/usr/bin/dynamips<br />
NAME=dynamips<br />
PORT=7200<br />
PIDFILE=/var/run/$NAME.pid <br />
LOGFILE=/var/log/$NAME.log<br />
DESC="Cisco Emulator"<br />
SCRIPTNAME=/etc/init.d/$NAME<br />
<br />
test -f $DAEMON || exit 0<br />
<br />
. /lib/lsb/init-functions<br />
<br />
<br />
case "$1" in<br />
start) log_daemon_msg "Starting $DESC " "$NAME"<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
stop) log_daemon_msg "Stopping $DESC " "$NAME"<br />
start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME<br />
log_end_msg $?<br />
;;<br />
restart) log_daemon_msg "Restarting $DESC " "$NAME"<br />
start-stop-daemon --stop --retry 5 --quiet --pidfile $PIDFILE --name $NAME<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
status)<br />
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $? <br />
#status $NAME<br />
#RETVAL=$?<br />
;; <br />
*) log_action_msg "Usage: $SCRIPTNAME {start|stop|restart|status}"<br />
exit 2<br />
;;<br />
esac<br />
exit 0<br />
<br />
</pre><br />
<br />
* Set execution permissions for the script and add it to system start-up:<br />
chmod +x /etc/init.d/dynamips<br />
update-rc.d dynamips defaults<br />
/etc/init.d/dynamips start<br />
<br />
* Download and install cisco IOS image:<br />
cd /usr/share/vnx/filesystems<br />
# Cisco image<br />
wget ... c3640-js-mz.124-19.image<br />
ln -s c3640-js-mz.124-19.image c3640<br />
<br />
* Calculate the idle-pc value for your computer following the procedure in http://dynagen.org/tutorial.htm:<br />
dynagen /usr/share/vnx/examples/R3640.net<br />
console R3640 # type 'no' to exit the config wizard and wait <br />
# for the router to completely start <br />
idlepc get R3640<br />
Once you know the idlepc value for your system, include it in /etc/vnx.conf file.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-install-ubuntu3&diff=2650
Vnx-install-ubuntu3
2021-10-12T10:29:44Z
<p>David: </p>
<hr />
<div>{{Title|VNX Installation over Ubuntu}}<br />
<br />
This section describes the procedure for manually installing VNX over Ubuntu 13.*, 14.*, 15.*, 16.*, 17.*, 18.* and 20.*.<br />
<br />
Open a root shell window and follow these steps:<br />
<ul><br />
<br />
<li>Install all the required packages (basic development, virtualization, Perl libraries and auxiliary packages). For Ubuntu 18.10 and 20.*, replace 'libvirt-bin' with 'libvirt-clients':</li><br />
sudo apt-get update<br />
sudo apt-get install \<br />
bash-completion bridge-utils curl eog expect genisoimage gnome-terminal \<br />
graphviz libappconfig-perl libdbi-perl liberror-perl libexception-class-perl \<br />
libfile-homedir-perl libio-pty-perl libmath-round-perl libnetaddr-ip-perl \<br />
libnet-ip-perl libnet-ipv6addr-perl libnet-pcap-perl libnet-telnet-perl \<br />
libreadonly-perl libswitch-perl libsys-virt-perl libterm-readline-perl-perl \<br />
libvirt-bin libxml-checker-perl libxml-dom-perl libxml-libxml-perl \<br />
libxml-parser-perl libxml-tidy-perl lxc lxc-templates net-tools \<br />
openvswitch-switch picocom pv qemu-kvm screen tree uml-utilities virt-manager \<br />
virt-viewer vlan w3m wmctrl xdotool xfce4-terminal xterm lsof<br />
<br />
<li>Tune libvirt configuration to work with VNX. In particular, edit /etc/libvirt/qemu.conf file and set the following parameters (see this simple [[Vnx-install-modify-qemuconf|script]] to do it):</li><br />
security_driver = "none"<br />
user = "root"<br />
group = "root"<br />
cgroup_device_acl = [<br />
"/dev/null", "/dev/full", "/dev/zero",<br />
"/dev/random", "/dev/urandom",<br />
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",<br />
"/dev/rtc", "/dev/hpet", "/dev/vfio/vfio", "/dev/net/tun"<br />
]<br />
(you have to add "/dev/net/tun") and restart libvirtd for the changes to take effect:<br />
sudo restart libvirt-bin # for ubuntu 14.10 or older<br />
sudo systemctl restart libvirt-bin # for ubuntu 15.04 or later<br />
<li>Check that libvirt is running correctly, for example, executing:</li><br />
sudo virsh list<br />
sudo virsh capabilities<br />
Note: Have a look at [[Vnx-install-trobleshooting|this document]] in case you get an error similar to this one: <br />
virsh: /usr/lib/libvirt.so.0: version LIBVIRT_PRIVATE-XXX not found (required by virsh)<br />
<br />
<li>Install VNX:</li><br />
mkdir /tmp/vnx-update<br />
cd /tmp/vnx-update<br />
rm -rf /tmp/vnx-update/vnx-*<br />
wget http://vnx.dit.upm.es/vnx/vnx-latest.tgz<br />
tar xfvz vnx-latest.tgz<br />
cd vnx-*-*<br />
sudo ./install_vnx<br />
<br />
<li>Restart apparmor:</li><br />
service apparmor restart # for Ubuntu 14.10 or older<br />
systemctl restart apparmor # for Ubuntu 15.04 or later <br />
<br />
<li>Create the VNX config file (/etc/vnx.conf). You can just move the sample config file:</li><br />
sudo mv /usr/share/vnx/etc/vnx.conf.sample /etc/vnx.conf<br />
<br />
<li>For Ubuntu 15.04 or newer: change parameter 'overlayfs_workdir_option' in vnx.conf to 'yes'</li><br />
[lxc]<br />
...<br />
overlayfs_workdir_option = 'yes'<br />
...<br />
<br />
<li>For Ubuntu 16.04 or later: change the LXC union_type to 'overlayfs'<br />
[lxc]<br />
...<br />
union_type='overlayfs'<br />
...<br />
<br />
<li>Download root file systems from http://vnx.dit.upm.es/vnx/filesystems and install them following these [[Vnx-install-root_fs|instructions]]</li><br />
<br />
<li>Optionally, enable bash-completion in your system to allow using VNX bash completion capabilities. For example, to enable it for all users in your system, just edit '/etc/bash.bashrc' and uncomment the following lines:</li><br />
<pre><br />
# enable bash completion in interactive shells<br />
if ! shopt -oq posix; then<br />
if [ -f /usr/share/bash-completion/bash_completion ]; then<br />
. /usr/share/bash-completion/bash_completion<br />
elif [ -f /etc/bash_completion ]; then<br />
. /etc/bash_completion<br />
fi<br />
fi<br />
</pre><br />
</ul><br />
<br />
=== Additional install steps for Dynamips support ===<br />
<br />
* Install Dynamips and Dynagen:<br />
apt-get install dynamips dynagen<br />
<br />
* Create a file /etc/init.d/dynamips (taken from http://7200emu.hacki.at/viewtopic.php?t=2198):<br />
<pre><br />
#!/bin/sh<br />
# Start/stop the dynamips program as a daemon.<br />
#<br />
### BEGIN INIT INFO<br />
# Provides: dynamips<br />
# Required-Start:<br />
# Required-Stop:<br />
# Default-Start: 2 3 4 5<br />
# Default-Stop: 0 1 6<br />
# Short-Description: Cisco hardware emulator daemon<br />
### END INIT INFO<br />
<br />
DAEMON=/usr/bin/dynamips<br />
NAME=dynamips<br />
PORT=7200<br />
PIDFILE=/var/run/$NAME.pid <br />
LOGFILE=/var/log/$NAME.log<br />
DESC="Cisco Emulator"<br />
SCRIPTNAME=/etc/init.d/$NAME<br />
<br />
test -f $DAEMON || exit 0<br />
<br />
. /lib/lsb/init-functions<br />
<br />
<br />
case "$1" in<br />
start) log_daemon_msg "Starting $DESC " "$NAME"<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
stop) log_daemon_msg "Stopping $DESC " "$NAME"<br />
start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME<br />
log_end_msg $?<br />
;;<br />
restart) log_daemon_msg "Restarting $DESC " "$NAME"<br />
start-stop-daemon --stop --retry 5 --quiet --pidfile $PIDFILE --name $NAME<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
status)<br />
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $? <br />
#status $NAME<br />
#RETVAL=$?<br />
;; <br />
*) log_action_msg "Usage: $SCRIPTNAME {start|stop|restart|status}"<br />
exit 2<br />
;;<br />
esac<br />
exit 0<br />
<br />
</pre><br />
<br />
* Set execution permissions for the script and add it to system start-up:<br />
chmod +x /etc/init.d/dynamips<br />
update-rc.d dynamips defaults<br />
/etc/init.d/dynamips start<br />
<br />
* Download and install cisco IOS image:<br />
cd /usr/share/vnx/filesystems<br />
# Cisco image<br />
wget ... c3640-js-mz.124-19.image<br />
ln -s c3640-js-mz.124-19.image c3640<br />
<br />
* Calculate the idle-pc value for your computer following the procedure in http://dynagen.org/tutorial.htm:<br />
dynagen /usr/share/vnx/examples/R3640.net<br />
console R3640 # type 'no' to exit the config wizard and wait <br />
# for the router to completely start <br />
idlepc get R3640<br />
Once you know the idlepc value for your system, include it in /etc/vnx.conf file.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Forums&diff=2649
Forums
2021-07-04T11:03:40Z
<p>David: </p>
<hr />
<div>{{Title|Contact & Forums}}<br />
__NOTOC__<br />
=== Twitter account ===<br />
<br />
Follow VNX news in twitter: https://twitter.com/vnx_upm<br />
<br />
<br />
<!--<br />
=== Users list ===<br />
For any comment, suggestion, doubt or problem report related to VNX/VNUML, you can send a message to the '''vnuml-users at dit.upm.es''' list (remember to change the "at" in the address by an "@"). <br />
<br />
You can <br />
[https://lists.dit.upm.es/mailman/listinfo/vnuml-users subscribe] to the list or just have a look at the [https://lists.dit.upm.es/pipermail/vnuml-users/ archived] messages.<br />
<br />
=== Developers list ===<br />
There is also a '''vnuml-devel at dit.upm.es''' list (remember to change the "at" in the address by an "@") to discuss development issues. <br />
<br />
You can <br />
[https://lists.dit.upm.es/mailman/listinfo/vnuml-devel subscribe] to the list or just have a look at the [https://lists.dit.upm.es/pipermail/vnuml-devel/ archived] messages.<br />
--><br />
=== VNX Team ===<br />
If you want to contact the VNX development team directly, you can use the following address: '''vnx at dit.upm.es'''.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Main_Page&diff=2648
Main Page
2020-09-06T23:47:22Z
<p>David: </p>
<hr />
<div>{{Title|Welcome to Virtual Networks over linuX (VNX) web site}}<br />
<br />
__TOC__<br />
<br />
<br />
==VNX Latest News==<br />
<br />
<ul><br />
<li>Follow VNX news in twitter: https://twitter.com/vnx_upm</li><br />
<li>See also the [[vnx-latest-features|latest features implemented]]</li><br />
</ul><br />
'''Sep 7th, 2020''' -- VyOS root filesystems for VNX (LXC and KVM) updated to version 1.3 of VyOS.<br />
<br />
'''Aug 26th, 2020''' -- LXC and KVM Ubuntu 20.04 root filesystems for VNX released.<br />
<br />
'''Aug 27th, 2019''' -- New '''Openstack Stein Laboratory''' virtual scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-stein|here]]. <br />
<br />
'''Aug 31st, 2017''' -- New KVM and LXC root filesystems based on Ubuntu 17.04 available (64 bits only). Download them from [http://vnx.dit.upm.es/vnx/filesystems here] or using the "vnx_download_rootfs" command.<br />
<br />
'''Aug 30th, 2017''' -- Added support for the '''VyOS network operating system'''. VNX now supports the creation of virtual scenarios including VyOS-based virtual machines (either KVM or LXC). See more details [[Vnx-latest-features|here]].<br />
<br />
'''Dec 29th, 2016''' -- New KVM root filesystems based on Kali 2016.2 distribution (https://www.kali.org/). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p kali" command. Note: use the <video>vmvga</video> tag in the virtual machine. See the simple_kali64_inet.xml example scenario in latest VNX version.<br />
<br />
'''Dec 11th, 2016''' -- New paper published about the experience of using VNX in networking laboratories:<br />
<ul><br />
<li>D. Fernández, F. J. Ruiz, L. Bellido, E. Pastor, O. Walid and V. Mateos, [http://www.ijee.ie/contents/c320616.html Enhancing Learning Experience in Computer Networking through a Virtualization-Based Laboratory Model], International Journal of Engineering Education Vol. 32, No. 6, pp. 2569–2584, 2016.</li><br />
</ul><br />
<br />
'''Nov 28th, 2016''' -- New KVM root filesystems based on Metasploitable2 distribution (http://r-7.co/Metasploitable2). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p metasploitable" command.<br />
<br />
'''Nov 2nd, 2016''' -- New KVM root filesystems based on Fedora 24 server available (only 64 bits version). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''Oct 23rd, 2016''' -- Openstack Mitaka test scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|here]].<br />
<br />
'''May 25th, 2016''' -- VPLS test scenario based on OpenBSD published. See more details [[Vnx-labo-vpls|here]].<br />
<br />
'''May 16th, 2016''' -- Support for virtio drivers in libvirt virtual machines implemented to improve performance. See more details [[vnx-latest-features|here]].<br />
<br />
'''May 7th, 2016''' -- New KVM and LXC root filesystems based on Ubuntu 16.04 available (have a look at the new Xubuntu version distributed). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''May 2nd, 2016''' -- OpenBSD support: VNX now supports OpenBSD virtual machines thanks to Francisco Javier Ruiz contribution.<br />
<br />
'''March 21st, 2016''' -- See some very interesting SDN virtual scenarios prepared by Carlos Martín-Cleto for his Master's Thesis: https://github.com/cletomcj/vnx-sdn.<br />
<br />
'''February 21st, 2016''' -- New Vagrant and VirtualBox (OVA) [http://goo.gl/8RxXvA VNX demo virtual machines] available to easily test LXC-based virtual scenarios (see instructions for [http://goo.gl/f9jnvA Vagrant] and [http://goo.gl/JdB9ik VirtualBox]).<br />
<br />
'''February 15th, 2016''' -- New KVM root filesystems based on Ubuntu 15.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''February 14th, 2016''' -- New recipe to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-install-fedora23 install VNX on Fedora]. Tested on a fresh copy of Fedora 23 workstation. <br />
<br />
'''July 24th, 2015''' -- New Openstack-Opendaylight laboratory scenarios available: https://goo.gl/JpxCnB. Designed to explore an OpenStack environment running OpenDaylight as the network management provider. Prepared by Raúl Álvarez Pinilla as a result of his Master's Thesis. <br />
<br />
'''June 16th, 2015''' -- New LXC root filesystems based on Ubuntu 15.04 available (32 and 64 bits). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''June 10th, 2015''' -- New root filesystems based on REMnux: A Linux Toolkit for Reverse-Engineering and Analyzing Malware. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use simple_remnux.xml example scenario to test it.<br />
<br />
'''June 6th, 2015''' -- New interesting VNX scenario available: a [[Vnx-labo-fw|security lab]] designed to allow 16 student groups to work together configuring firewalls and using security related tools and distributions.<br />
<br />
'''April 25th, 2015''' -- New root filesystems based on Ubuntu 15.04 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''March 16th, 2015''' -- Latest VNX versions include bash completion capabilities. Just use the tab key to see the available command line options and get help completing option values. <br />
<br />
'''March 6th, 2015''' -- New root filesystems based on Kali 1.1.0 available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Update VNX version with 'vnx_update' command before using it.<br />
<br />
'''October 24th, 2014''' -- New root filesystems based on Ubuntu 14.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''August 27th, 2014''' -- New functionality implemented to specify the position, size and desktop number where the VM console windows are shown by using .cvnx files. See more information [http://web.dit.upm.es/vnxwiki/index.php/Vnx-console-mgmt here].<br />
<br />
'''August 25th, 2014''' -- VNX now supports LXC virtual machines. See the VNX tutorial for LXC [http://web.dit.upm.es/vnxwiki/index.php/Vnx-tutorial-lxc here]. Additionally, see how to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-rootfslxc create] or [http://web.dit.upm.es/vnxwiki/index.php/Vnx-modify-rootfs modify] a LXC root filesystem. <br />
<br />
'''June 27th, 2014''' -- Jorge Somavilla wins the [http://www.coit.es/descargar.php?idfichero=9461 ''Asociación de Telemática'' prize from COIT-AEIT] for his [[References#Final_Degree_Projects|Final Degree Project about VNX]]. See on [https://twitter.com/jsomav/status/484039220626210816 twitter].<br />
<br />
'''June 21st, 2014''' -- New root filesystem based on Kali Linux (formerly Backtrack) available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use the simple_kali.xml and simple_kali64.xml examples to test them (the examples now include a direct Internet connection). Update VNX to the latest version to use them.<br />
<br />
'''June 14th, 2014''' -- Follow VNX news in twitter: https://twitter.com/vnx_upm<br />
<br />
'''June 14th, 2014''' -- A Vagrant virtual machine to easily test VNX has been created. See [[Vnx-tutorial-vagrant|how to use it]] and [[Vnx-create-vagrant-vm|how it has been created]]<br />
<br />
'''October 17th, 2012''' -- New root filesystem added based on [http://www.caine-live.net/ CAINE (Computer Aided INvestigative Environment)] Ubuntu distribution. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. A new simple_caine.xml example has been added to the latest VNX distribution to ease testing it.<br />
<br />
'''June 19th, 2012''' -- A new updated Debian root filesystem has been created, as well as a new UML kernel (ver 3.3.8) to work with it. See [[Vnx-install-root_fs#UML_root_filesystems|how to download and install them]] and the recipes followed for their creation: [[Vnx-rootfsdebian|rootfs]] and [[Vnx-rootfs-uml-kernel|kernel]]. To create the kernel, the traditional UML exec extension kernel patch has been updated to work with kernel 3.3.8. You can find the new kernel patch [http://vnx.dit.upm.es/vnx/kernels/mconsole-exec-3.3.8.patch here]<br />
<br />
'''May 31st, 2012''' -- New beta version of VNX (2.0b.2243) released including distributed deployment capabilities (EDIV). See the [[Docintro|documentation]] for more information.<br />
<br />
'''May 24th, 2012''' -- Jorge Somavilla wins the TNC2012 student poster competition. Read the full story [http://www.terena.org/news/fullstory.php?news_id=3168 here], [http://www.rediris.es/anuncios/2012/20120525_0.html.es here] or [http://www.upm.es/institucional/UPM/CanalUPM/Noticias/2532c60f455e7310VgnVCM10000009c7648aRCRD here]<br />
<br />
==About VNX==<br />
<br />
'''VNX''' is a general purpose open-source virtualization tool designed to help build virtual network testbeds automatically. It allows the definition and automatic deployment of network scenarios made of virtual machines of different types (Linux, Windows, FreeBSD, Olive or Dynamips routers, etc) interconnected following a user-defined topology, possibly connected to external networks.<br />
<br />
'''VNX''' has been developed by the <br />
<!-- [http://www.dit.upm.es/rsti Telecommunication and Internet Networks and Services (RSTI)] research group of the --><br />
[http://www.dit.upm.es Telematics Engineering Department (DIT)] of the [http://www.upm.es/internacional Technical University of Madrid (UPM)].<br />
<br />
'''VNX''' is a useful tool for testing network applications/services over complex testbeds made of virtual nodes and networks, as well as for creating complex network laboratories to allow students to interact with realistic network scenarios. Like other similar tools aimed at creating virtual network scenarios (such as GNS3, NetKit, MLN or Marionnet), VNX provides a way to manage testbeds while avoiding the investment and management complexity needed to create them using real equipment.<br />
<br />
'''VNX''' is made of two main parts: <br />
* an XML language that allows describing the virtual network scenario (the VNX specification language); a minimal sketch is shown below<br />
* the VNX program, which parses the scenario description and builds and manages the virtual scenario over a Linux machine<br />
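To give an idea of the language, the following is a minimal illustrative sketch of a scenario with one LXC virtual machine connected to one network. It is loosely based on the simple_* example scenarios distributed with VNX; names, paths and addresses are just placeholders, so check those examples for the exact syntax supported by your VNX version:<br />
<pre><br />
<vnx><br />
  <global><br />
    <version>2.0</version><br />
    <scenario_name>example</scenario_name><br />
    <vm_mgmt type="none" /><br />
  </global><br />
  <net name="Net0" mode="virtual_bridge" /><br />
  <vm name="h1" type="lxc"><br />
    <filesystem type="cow">/usr/share/vnx/filesystems/rootfs_lxc</filesystem><br />
    <if id="1" net="Net0"><br />
      <ipv4>10.0.0.1/24</ipv4><br />
    </if><br />
  </vm><br />
</vnx><br />
</pre><br />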
<br />
'''VNX''' comes with a distributed version (EDIV) that allows the deployment of virtual scenarios over clusters of Linux servers, improving scalability to scenarios made of tens or even hundreds of virtual machines.<br />
<br />
'''VNX''' is built over the long experience of a previous tool named [http://www.dit.upm.es/vnuml VNUML (Virtual Networks over User Mode Linux)] and brings important new functionalities that overcome the main limitations the VNUML tool had:<br />
* Integration of new virtualization platforms to allow virtual machines running other operating systems (Windows, FreeBSD, etc) apart from Linux. In this sense:<br />
** VNX uses [http://libvirt.org libvirt] to interact with the virtualization capabilities of the host, allowing the use of most of the virtualization platforms available for Linux (KVM, Xen, etc)<br />
** Integrates [http://www.ipflow.utc.fr/blog/ Dynamips] and Olive router virtualization platforms to allow limited emulation of CISCO and Juniper routers<br />
** Integrates also Linux Containers (LXC) support<br />
* Individual management of virtual machines <br />
* Autoconfiguration and command execution capabilities for several operating systems: Linux, FreeBSD and Windows (XP and 7)<br />
* Integration of [http://openvswitch.org/ Openvswitch] with support for VLAN configuration, inter-switch connections and SDN parameter configuration (controller IP address, mode, OpenFlow version, etc.).<br />
<br />
'''VNX''' has been developed with the help and support of several people and companies. See the [[VNXteam|VNX team page]] for details.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Main_Page&diff=2647
Main Page
2020-09-06T23:24:42Z
<p>David: </p>
<hr />
<div>{{Title|Welcome to Virtual Networks over linuX (VNX) web site}}<br />
<br />
__TOC__<br />
<br />
<br />
==VNX Latest News==<br />
<br />
<ul><br />
<li>Follow VNX news in twitter: https://twitter.com/vnx_upm</li><br />
<li>See also the [[vnx-latest-features|latest features implemented]]</li><br />
</ul><br />
'''Sep 7th, 2020''' -- VyOS root filesystems for VNX (LXC and KVM) updated to version 1.3 of VyOS.<br />
<br />
'''Aug 27th, 2019''' -- New '''Openstack Stein Laboratory''' virtual scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-stein|here]]. <br />
<br />
'''Aug 31st, 2017''' -- New KVM and LXC root filesystems based on Ubuntu 17.04 available (64 bits only). Download them from [http://vnx.dit.upm.es/vnx/filesystems here] or using the "vnx_download_rootfs" command.<br />
<br />
'''Aug 30th, 2017''' -- Added support for the '''VyOS network operating system'''. VNX now supports the creation of virtual scenarios including VyOS based virtual machines (either KVM or LXC). See more details [[Vnx-latest-features|here]].<br />
<br />
'''Dec 29th, 2016''' -- New KVM root filesystems based on Kali 2016.2 distribution (https://www.kali.org/). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p kali" command. Note: use the <video>vmvga</video> tag in the virtual machine. See the simple_kali64_inet.xml example scenario in latest VNX version.<br />
<br />
'''Dec 11th, 2016''' -- New paper published about the experience of using VNX in networking laboratories:<br />
<ul><br />
<li>D. Fernández, F. J. Ruiz, L. Bellido, E. Pastor, O. Walid and V. Mateos, [http://www.ijee.ie/contents/c320616.html Enhancing Learning Experience in Computer Networking through a Virtualization-Based Laboratory Model], International Journal of Engineering Education Vol. 32, No. 6, pp. 2569–2584, 2016.</li><br />
</ul><br />
<br />
'''Nov 28th, 2016''' -- New KVM root filesystems based on Metasploitable2 distribution (http://r-7.co/Metasploitable2). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p metasploitable" command.<br />
<br />
'''Nov 2nd, 2016''' -- New KVM root filesystems based on Fedora 24 server available (only 64 bits version). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''Oct 23rd, 2016''' -- Openstack Mitaka test scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|here]].<br />
<br />
'''May 25th, 2016''' -- VPLS test scenario based on OpenBSD published. See more details [[Vnx-labo-vpls|here]].<br />
<br />
'''May 16th, 2016''' -- Support for virtio drivers in libvirt virtual machines implemented to improve performance. See more details [[vnx-latest-features|here]].<br />
<br />
'''May 7th, 2016''' -- New KVM and LXC root filesystems based on Ubuntu 16.04 available (have a look at the new Xubuntu version distributed). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''May 2nd, 2016''' -- OpenBSD support: VNX now supports OpenBSD virtual machines thanks to Francisco Javier Ruiz contribution.<br />
<br />
'''March 21st, 2016''' -- See some very interesting SDN virtual scenarios prepared by Carlos Martín-Cleto for his Master's Thesis: https://github.com/cletomcj/vnx-sdn.<br />
<br />
'''February 21st, 2016''' -- New Vagrant and VirtualBox (OVA) [http://goo.gl/8RxXvA VNX demo virtual machines] available to easily test LXC-based virtual scenarios (see instructions for [http://goo.gl/f9jnvA Vagrant] and [http://goo.gl/JdB9ik VirtualBox]).<br />
<br />
'''February 15th, 2016''' -- New KVM root filesystems based on Ubuntu 15.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''February 14th, 2016''' -- New recipe to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-install-fedora23 install VNX on Fedora]. Tested on a fresh copy of Fedora 23 workstation. <br />
<br />
'''July 24th, 2015''' -- New Openstack-Opendaylight laboratory scenarios available: https://goo.gl/JpxCnB. Designed to explore an OpenStack environment running OpenDaylight as the network management provider. Prepared by Raúl Álvarez Pinilla as a result of his Master's Thesis. <br />
<br />
'''June 16th, 2015''' -- New LXC root filesystems based on Ubuntu 15.04 available (32 and 64 bits). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''June 10th, 2015''' -- New root filesystems based on REMnux: A Linux Toolkit for Reverse-Engineering and Analyzing Malware. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use simple_remnux.xml example scenario to test it.<br />
<br />
'''June 6th, 2015''' -- New interesting VNX scenario available: a [[Vnx-labo-fw|security lab]] designed to allow 16 student groups to work together configuring firewalls and using security related tools and distributions.<br />
<br />
'''April 25th, 2015''' -- New root filesystems based on Ubuntu 15.04 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''March 16th, 2015''' -- Latest VNX versions include bash completion capabilities. Just use the tab key to see the available command line options and get help completing option values. <br />
<br />
'''March 6th, 2015''' -- New root filesystems based on Kali 1.1.0 available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Update VNX version with 'vnx_update' command before using it.<br />
<br />
'''October 24th, 2014''' -- New root filesystems based on Ubuntu 14.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''August 27th, 2014''' -- New functionality implemented to specify the position, size and desktop number where the VM console windows are shown by using .cvnx files. See more information [http://web.dit.upm.es/vnxwiki/index.php/Vnx-console-mgmt here].<br />
<br />
'''August 25th, 2014''' -- VNX now supports LXC virtual machines. See the VNX tutorial for LXC [http://web.dit.upm.es/vnxwiki/index.php/Vnx-tutorial-lxc here]. Additionally, see how to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-rootfslxc create] or [http://web.dit.upm.es/vnxwiki/index.php/Vnx-modify-rootfs modify] a LXC root filesystem. <br />
<br />
'''June 27th, 2014''' -- Jorge Somavilla wins the [http://www.coit.es/descargar.php?idfichero=9461 ''Asociación de Telemática'' prize from COIT-AEIT] for his [[References#Final_Degree_Projects|Final Degree Project about VNX]]. See on [https://twitter.com/jsomav/status/484039220626210816 twitter].<br />
<br />
'''June 21st, 2014''' -- New root filesystem based on Kali Linux (formerly Backtrack) available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use the simple_kali.xml and simple_kali64.xml examples to test them (the examples now include a direct Internet connection). Update VNX to the latest version to use them.<br />
<br />
'''June 14th, 2014''' -- Follow VNX news in twitter: https://twitter.com/vnx_upm<br />
<br />
'''June 14th, 2014''' -- A Vagrant virtual machine to easily test VNX has been created. See [[Vnx-tutorial-vagrant|how to use it]] and [[Vnx-create-vagrant-vm|how it has been created]]<br />
<br />
'''October 17th, 2012''' -- New root filesystem added based on [http://www.caine-live.net/ CAINE (Computer Aided INvestigative Environment)] Ubuntu distribution. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. A new simple_caine.xml example has been added to the latest VNX distribution to ease testing it.<br />
<br />
'''June 19th, 2012''' -- A new updated Debian root filesystem has been created, as well as a new UML kernel (ver 3.3.8) to work with it. See [[Vnx-install-root_fs#UML_root_filesystems|how to download and install them]] and the recipes followed for their creation: [[Vnx-rootfsdebian|rootfs]] and [[Vnx-rootfs-uml-kernel|kernel]]. To create the kernel, the traditional UML exec extension kernel patch has been updated to work with kernel 3.3.8. You can find the new kernel patch [http://vnx.dit.upm.es/vnx/kernels/mconsole-exec-3.3.8.patch here]<br />
<br />
'''May 31st, 2012''' -- New beta version of VNX (2.0b.2243) released including distributed deployment capabilities (EDIV). See the [[Docintro|documentation]] for more information.<br />
<br />
'''May 24th, 2012''' -- Jorge Somavilla wins the TNC2012 student poster competition. Read the full story [http://www.terena.org/news/fullstory.php?news_id=3168 here], [http://www.rediris.es/anuncios/2012/20120525_0.html.es here] or [http://www.upm.es/institucional/UPM/CanalUPM/Noticias/2532c60f455e7310VgnVCM10000009c7648aRCRD here]<br />
<br />
==About VNX==<br />
<br />
'''VNX''' is a general purpose open-source virtualization tool designed to help building virtual network testbeds automatically. It allows the definition and automatic deployment of network scenarios made of virtual machines of different types (Linux, Windows, FreeBSD, Olive or Dynamips routers, etc) interconnected following a user-defined topology, possibly connected to external networks.<br />
<br />
'''VNX''' has been developed by the <br />
<!-- [http://www.dit.upm.es/rsti Telecommunication and Internet Networks and Services (RSTI)] research group of the --><br />
[http://www.dit.upm.es Telematics Engineering Department (DIT)] of the [http://www.upm.es/internacional Technical University of Madrid (UPM)].<br />
<br />
'''VNX''' is a useful tool for testing network applications/services over complex testbeds made of virtual nodes and networks, as well as for creating complex network laboratories to allow students to interact with realistic network scenarios. Like other similar tools aimed at creating virtual network scenarios (such as GNS3, NetKit, MLN or Marionnet), VNX provides a way to manage testbeds while avoiding the investment and management complexity needed to create them using real equipment.<br />
<br />
'''VNX''' is made of two main parts: <br />
* an XML language that allows describing the virtual network scenario (VNX specification language)<br />
* the VNX program, that parses the scenario description and builds and manages the virtual scenario over a Linux machine<br />
<br />
'''VNX''' comes with a distributed version (EDIV) that allows the deployment of virtual scenarios over clusters of Linux servers, improving scalability to scenarios made of tens or even hundreds of virtual machines.<br />
<br />
'''VNX''' is built over the long experience of a previous tool named [http://www.dit.upm.es/vnuml VNUML (Virtual Networks over User Mode Linux)] and brings important new functionalities that overcome the most important limitations VNUML tool had:<br />
* Integration of new virtualization platforms to allow virtual machines running other operating systems (Windows, FreeBSD, etc) apart from Linux. In this sense:<br />
** VNX uses [http://libvirt.org libvirt] to interact with the virtualization capabilities of the host, allowing the use of most of the virtualization platforms available for Linux (KVM, Xen, etc)<br />
** Integrates [http://www.ipflow.utc.fr/blog/ Dynamips] and Olive router virtualization platforms to allow limited emulation of CISCO and Juniper routers<br />
** Integrates also Linux Containers (LXC) support<br />
* Individual management of virtual machines <br />
* Autoconfiguration and command execution capabilities for several operating systems: Linux, FreeBSD and Windows (XP and 7)<br />
* Integration of [http://openvswitch.org/ Openvswitch] with support for VLAN configuration, inter-switches connections and SDN parameter configuration (controller ip address, mode, Openflow version, etc.).<br />
<br />
'''VNX''' has been developed with the help and support of several people and companies. See the [[VNXteam|VNX team page]] for details.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-install-ubuntu3&diff=2646
Vnx-install-ubuntu3
2020-06-19T23:32:43Z
<p>David: </p>
<hr />
<div>{{Title|VNX Installation over Ubuntu}}<br />
<br />
This section describes the procedure for manually installing VNX over Ubuntu 13.*, 14.*, 15.*, 16.*, 17.*, and 18.*.<br />
<br />
Open a root shell window and follow these steps:<br />
<ul><br />
<br />
<li>Install all required packages (basic development, virtualization, Perl libraries and auxiliary packages). For Ubuntu 18.10, replace 'libvirt-bin' with 'libvirt-clients':</li><br />
sudo apt-get update<br />
sudo apt-get install \<br />
bash-completion bridge-utils curl eog expect genisoimage gnome-terminal \<br />
graphviz libappconfig-perl libdbi-perl liberror-perl libexception-class-perl \<br />
libfile-homedir-perl libio-pty-perl libmath-round-perl libnetaddr-ip-perl \<br />
libnet-ip-perl libnet-ipv6addr-perl libnet-pcap-perl libnet-telnet-perl \<br />
libreadonly-perl libswitch-perl libsys-virt-perl libterm-readline-perl-perl \<br />
libvirt-bin libxml-checker-perl libxml-dom-perl libxml-libxml-perl \<br />
libxml-parser-perl libxml-tidy-perl lxc lxc-templates net-tools \<br />
openvswitch-switch picocom pv qemu-kvm screen tree uml-utilities virt-manager \<br />
virt-viewer vlan w3m wmctrl xdotool xfce4-terminal xterm lsof<br />
<br />
<li>Tune libvirt configuration to work with VNX. In particular, edit /etc/libvirt/qemu.conf file and set the following parameters (see this simple [[Vnx-install-modify-qemuconf|script]] to do it):</li><br />
security_driver = "none"<br />
user = "root"<br />
group = "root"<br />
cgroup_device_acl = [<br />
"/dev/null", "/dev/full", "/dev/zero",<br />
"/dev/random", "/dev/urandom",<br />
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",<br />
"/dev/rtc", "/dev/hpet", "/dev/vfio/vfio", "/dev/net/tun"<br />
]<br />
(you have to add "/dev/net/tun") and restart libvirtd for the changes to take effect:<br />
sudo restart libvirt-bin # for ubuntu 14.10 or older<br />
sudo systemctl restart libvirt-bin # for ubuntu 15.04 or later<br />
<li>Check that libvirt is running correctly, for example, executing:</li><br />
sudo virsh list<br />
sudo virsh capabilities<br />
Note: Have a look at [[Vnx-install-trobleshooting|this document]] in case you get an error similar to this one: <br />
virsh: /usr/lib/libvirt.so.0: version LIBVIRT_PRIVATE-XXX not found (required by virsh)<br />
<br />
<li>Install VNX:</li><br />
mkdir /tmp/vnx-update<br />
cd /tmp/vnx-update<br />
rm -rf /tmp/vnx-update/vnx-*<br />
wget http://vnx.dit.upm.es/vnx/vnx-latest.tgz<br />
tar xfvz vnx-latest.tgz<br />
cd vnx-*-*<br />
sudo ./install_vnx<br />
<br />
<li>Restart apparmor:</li><br />
service apparmor restart # for Ubuntu 14.10 or older<br />
systemctl restart apparmor # for Ubuntu 15.04 or later <br />
<br />
<li>Create the VNX config file (/etc/vnx.conf). You can just move the sample config file:</li><br />
sudo mv /usr/share/vnx/etc/vnx.conf.sample /etc/vnx.conf<br />
<br />
<li>For Ubuntu 15.04 or newer: change parameter 'overlayfs_workdir_option' in vnx.conf to 'yes'</li><br />
[lxc]<br />
...<br />
overlayfs_workdir_option = 'yes'<br />
...<br />
<br />
<li>For Ubuntu 16.04 or later: change the LXC union_type to 'overlayfs':</li><br />
[lxc]<br />
...<br />
union_type='overlayfs'<br />
...<br />
<br />
<li>Download root file systems from http://vnx.dit.upm.es/vnx/filesystems and install them following these [[Vnx-install-root_fs|instructions]]</li><br />
<br />
<li>Optionally, enable bash-completion in your system to allow using VNX bash completion capabilities. For example, to enable it for all users in your system, just edit '/etc/bash.bashrc' and uncomment the following lines:</li><br />
<pre><br />
# enable bash completion in interactive shells<br />
if ! shopt -oq posix; then<br />
if [ -f /usr/share/bash-completion/bash_completion ]; then<br />
. /usr/share/bash-completion/bash_completion<br />
elif [ -f /etc/bash_completion ]; then<br />
. /etc/bash_completion<br />
fi<br />
fi<br />
</pre><br />
</ul><br />
<br />
=== Additional install steps for Dynamips support ===<br />
<br />
* Install Dynamips and Dynagen:<br />
apt-get install dynamips dynagen<br />
<br />
* Create a file /etc/init.d/dynamips (taken from http://7200emu.hacki.at/viewtopic.php?t=2198):<br />
<pre><br />
#!/bin/sh<br />
# Start/stop the dynamips program as a daemon.<br />
#<br />
### BEGIN INIT INFO<br />
# Provides: dynamips<br />
# Required-Start:<br />
# Required-Stop:<br />
# Default-Start: 2 3 4 5<br />
# Default-Stop: 0 1 6<br />
# Short-Description: Cisco hardware emulator daemon<br />
### END INIT INFO<br />
<br />
DAEMON=/usr/bin/dynamips<br />
NAME=dynamips<br />
PORT=7200<br />
PIDFILE=/var/run/$NAME.pid <br />
LOGFILE=/var/log/$NAME.log<br />
DESC="Cisco Emulator"<br />
SCRIPTNAME=/etc/init.d/$NAME<br />
<br />
test -f $DAEMON || exit 0<br />
<br />
. /lib/lsb/init-functions<br />
<br />
<br />
case "$1" in<br />
start) log_daemon_msg "Starting $DESC " "$NAME"<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
stop) log_daemon_msg "Stopping $DESC " "$NAME"<br />
start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME<br />
log_end_msg $?<br />
;;<br />
restart) log_daemon_msg "Restarting $DESC " "$NAME"<br />
start-stop-daemon --stop --retry 5 --quiet --pidfile $PIDFILE --name $NAME<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
status)<br />
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $? <br />
#status $NAME<br />
#RETVAL=$?<br />
;; <br />
*) log_action_msg "Usage: $SCRIPTNAME {start|stop|restart|status}"<br />
exit 2<br />
;;<br />
esac<br />
exit 0<br />
<br />
</pre><br />
<br />
* Set execution permissions for the script and add it to system start-up:<br />
chmod +x /etc/init.d/dynamips<br />
update-rc.d dynamips defaults<br />
/etc/init.d/dynamips start<br />
<br />
* Download and install the Cisco IOS image:<br />
cd /usr/share/vnx/filesystems<br />
# Cisco image<br />
wget ... c3640-js-mz.124-19.image<br />
ln -s c3640-js-mz.124-19.image c3640<br />
<br />
* Calculate the idle-pc value for your computer following the procedure in http://dynagen.org/tutorial.htm:<br />
dynagen /usr/share/vnx/examples/R3640.net<br />
console R3640 # type 'no' to exit the config wizard and wait <br />
# for the router to completely start <br />
idlepc get R3640<br />
Once you know the idlepc value for your system, include it in /etc/vnx.conf file.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-install-ubuntu3&diff=2645
Vnx-install-ubuntu3
2019-09-21T18:37:20Z
<p>David: </p>
<hr />
<div>{{Title|VNX Installation over Ubuntu}}<br />
<br />
This section describes the procedure for manually installing VNX over Ubuntu 13.*, 14.*, 15.*, 16.*, 17.*, and 18.*.<br />
<br />
Open a root shell window and follow these steps:<br />
<ul><br />
<br />
<li>Install all required packages (basic development, virtualization, Perl libraries and auxiliary packages). For Ubuntu 18.10, replace 'libvirt-bin' with 'libvirt-clients':</li><br />
sudo apt-get update<br />
sudo apt-get install \<br />
bash-completion bridge-utils curl eog expect genisoimage gnome-terminal \<br />
graphviz libappconfig-perl libdbi-perl liberror-perl libexception-class-perl \<br />
libfile-homedir-perl libio-pty-perl libmath-round-perl libnetaddr-ip-perl \<br />
libnet-ip-perl libnet-ipv6addr-perl libnet-pcap-perl libnet-telnet-perl \<br />
libreadonly-perl libswitch-perl libsys-virt-perl libterm-readline-perl-perl \<br />
libvirt-bin libxml-checker-perl libxml-dom-perl libxml-libxml-perl \<br />
libxml-parser-perl libxml-tidy-perl lxc lxc-templates net-tools \<br />
openvswitch-switch picocom pv qemu-kvm screen tree uml-utilities virt-manager \<br />
virt-viewer vlan w3m wmctrl xdotool xfce4-terminal xterm lsof<br />
<br />
<li>Tune libvirt configuration to work with VNX. In particular, edit /etc/libvirt/qemu.conf file and set the following parameters (see this simple [[Vnx-install-modify-qemuconf|script]] to do it):</li><br />
security_driver = "none"<br />
user = "root"<br />
group = "root"<br />
cgroup_device_acl = [<br />
"/dev/null", "/dev/full", "/dev/zero",<br />
"/dev/random", "/dev/urandom",<br />
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",<br />
"/dev/rtc", "/dev/hpet", "/dev/vfio/vfio", "/dev/net/tun"<br />
]<br />
(you have to add "/dev/net/tun") and restart libvirtd for the changes to take effect:<br />
sudo restart libvirt-bin # for ubuntu 14.10 or older<br />
sudo systemctl restart libvirt-bin # for ubuntu 15.04 or later<br />
<li>Check that libvirt is running correctly, for example, executing:</li><br />
sudo virsh list<br />
sudo virsh capabilities<br />
Note: Have a look at [[Vnx-install-trobleshooting|this document]] in case you get an error similar to this one: <br />
virsh: /usr/lib/libvirt.so.0: version LIBVIRT_PRIVATE-XXX not found (required by virsh)<br />
<br />
<li>Install VNX:</li><br />
mkdir /tmp/vnx-update<br />
cd /tmp/vnx-update<br />
rm -rf /tmp/vnx-update/vnx-*<br />
wget http://vnx.dit.upm.es/vnx/vnx-latest.tgz<br />
tar xfvz vnx-latest.tgz<br />
cd vnx-*-*<br />
sudo ./install_vnx<br />
<br />
<li>Restart apparmor:</li><br />
service apparmor restart # for Ubuntu 14.10 or older<br />
systemctl restart apparmor # for Ubuntu 15.04 or later <br />
<br />
<li>Create the VNX config file (/etc/vnx.conf). You can just move the sample config file:</li><br />
sudo mv /usr/share/vnx/etc/vnx.conf.sample /etc/vnx.conf<br />
<br />
<li>For Ubuntu 15.04 or newer: change parameter 'overlayfs_workdir_option' in vnx.conf to 'yes'</li><br />
[lxc]<br />
...<br />
overlayfs_workdir_option = 'yes'<br />
...<br />
<br />
<li>For Ubuntu 16.04 or later: change the LXC union_type to 'overlayfs':</li><br />
[lxc]<br />
...<br />
union_type='overlayfs'<br />
...<br />
<br />
<li>Download root file systems from http://idefix.dit.upm.es/download/vnx/filesystems and install them following these [[Vnx-install-root_fs|instructions]]</li><br />
<br />
<li>Optionally, enable bash-completion in your system to allow using VNX bash completion capabilities. For example, to enable it for all users in your system, just edit '/etc/bash.bashrc' and uncomment the following lines:</li><br />
<pre><br />
# enable bash completion in interactive shells<br />
if ! shopt -oq posix; then<br />
if [ -f /usr/share/bash-completion/bash_completion ]; then<br />
. /usr/share/bash-completion/bash_completion<br />
elif [ -f /etc/bash_completion ]; then<br />
. /etc/bash_completion<br />
fi<br />
fi<br />
</pre><br />
</ul><br />
<br />
=== Additional install steps for Dynamips support ===<br />
<br />
* Install Dynamips and Dynagen:<br />
apt-get install dynamips dynagen<br />
<br />
* Create a file /etc/init.d/dynamips (taken from http://7200emu.hacki.at/viewtopic.php?t=2198):<br />
<pre><br />
#!/bin/sh<br />
# Start/stop the dynamips program as a daemon.<br />
#<br />
### BEGIN INIT INFO<br />
# Provides: dynamips<br />
# Required-Start:<br />
# Required-Stop:<br />
# Default-Start: 2 3 4 5<br />
# Default-Stop: 0 1 6<br />
# Short-Description: Cisco hardware emulator daemon<br />
### END INIT INFO<br />
<br />
DAEMON=/usr/bin/dynamips<br />
NAME=dynamips<br />
PORT=7200<br />
PIDFILE=/var/run/$NAME.pid <br />
LOGFILE=/var/log/$NAME.log<br />
DESC="Cisco Emulator"<br />
SCRIPTNAME=/etc/init.d/$NAME<br />
<br />
test -f $DAEMON || exit 0<br />
<br />
. /lib/lsb/init-functions<br />
<br />
<br />
case "$1" in<br />
start) log_daemon_msg "Starting $DESC " "$NAME"<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
stop) log_daemon_msg "Stopping $DESC " "$NAME"<br />
start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME<br />
log_end_msg $?<br />
;;<br />
restart) log_daemon_msg "Restarting $DESC " "$NAME"<br />
start-stop-daemon --stop --retry 5 --quiet --pidfile $PIDFILE --name $NAME<br />
start-stop-daemon --start --chdir /tmp --background --make-pidfile --pidfile $PIDFILE --name $NAME --startas $DAEMON -- -H $PORT -l $LOGFILE<br />
log_end_msg $?<br />
;;<br />
status)<br />
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $? <br />
#status $NAME<br />
#RETVAL=$?<br />
;; <br />
*) log_action_msg "Usage: $SCRIPTNAME {start|stop|restart|status}"<br />
exit 2<br />
;;<br />
esac<br />
exit 0<br />
<br />
</pre><br />
<br />
* Set execution permissions for the script and add it to system start-up:<br />
chmod +x /etc/init.d/dynamips<br />
update-rc.d dynamips defaults<br />
/etc/init.d/dynamips start<br />
<br />
* Download and install the Cisco IOS image:<br />
cd /usr/share/vnx/filesystems<br />
# Cisco image<br />
wget ... c3640-js-mz.124-19.image<br />
ln -s c3640-js-mz.124-19.image c3640<br />
<br />
* Calculate the idle-pc value for your computer following the procedure in http://dynagen.org/tutorial.htm:<br />
dynagen /usr/share/vnx/examples/R3640.net<br />
console R3640 # type 'no' to exit the config wizard and wait <br />
# for the router to completely start <br />
idlepc get R3640<br />
Once you know the idlepc value for your system, include it in /etc/vnx.conf file.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfsubuntu&diff=2644
Vnx-rootfsubuntu
2019-08-28T17:32:42Z
<p>David: /* Configuration */</p>
<hr />
<div>{{Title|How to create a KVM Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a KVM Ubuntu based root filesystem for VNX. The procedure has been tested with Ubuntu 9.10, 10.04, 10.10, 11.04, 12.04, 13.04, 13.10, 14.04, 14.10, 15.04, 15.10 and 16.04.<br />
<ul><br />
<li>Create the filesystem disk image:</li><br />
qemu-img create -f qcow2 vnx_rootfs_kvm_ubuntu.qcow2 20G<br />
<li>Get Ubuntu installation CD. For example:</li><br />
wget ftp://ftp.rediris.es/mirror/ubuntu-releases/16.04/ubuntu-16.04-server-i386.iso<br />
cp ubuntu-16.04-server-i386.iso /almacen/iso<br />
Note: use 'server' or 'desktop' CD versions depending on the system you want to create.<br />
<li>Create the virtual machine with:</li><br />
vnx --create-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --install-media /almacen/iso/ubuntu-16.04-server-i386.iso --mem 512M<br />
Note: add '''"--arch x86_64"''' option for 64 bits virtual machines<br />
<li>Follow Ubuntu installation menus to install a basic system with ssh server.</li><br />
<li>Configure a serial console on ttyS0 (skip this step for 15.04 or later releases):</li><br />
cd /etc/init<br />
cp tty1.conf ttyS0.conf<br />
sed -i -e 's/tty1/ttyS0/' ttyS0.conf<br />
<li>Activate startup traces on the serial console by editing the /etc/default/grub file and setting the GRUB_CMDLINE_LINUX_DEFAULT variable to "console=ttyS0". Also change the boot menu timeout to 0 (sometimes virtual machines get stuck on the boot menu when starting on highly loaded systems):</li><br />
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"<br />
GRUB_TIMEOUT=0<br />
GRUB_RECORDFAIL_TIMEOUT=1<br />
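 For example, assuming the stock Ubuntu layout of /etc/default/grub, the changes above could be applied non-interactively with something like:<br />
 sed -i -e 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"/' \<br />
        -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=0/' /etc/default/grub<br />
 grep -q '^GRUB_RECORDFAIL_TIMEOUT=' /etc/default/grub || echo 'GRUB_RECORDFAIL_TIMEOUT=1' >> /etc/default/grub<br />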
<li>Only for Ubuntu 15.10 or later releases:</li><br />
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"<br />
<li>Make grub process the previous changes:</li><br />
update-grub<br />
<li>Add a timeout to systemd-networkd-wait-online service to avoid long waits at startup. Edit /lib/systemd/system/systemd-networkd-wait-online.service and change the following line:</li><br />
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20<br />
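 A quick way to apply this change (assuming the unit file contains a single ExecStart line, as in stock Ubuntu) is:<br />
 sed -i 's|^ExecStart=.*|ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout 20|' \<br />
     /lib/systemd/system/systemd-networkd-wait-online.service<br />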
<li>Finally, delete the net udev rules file and halt the system:</li><br />
rm /etc/udev/rules.d/70-persistent-net.rules<br />
halt -p<br />
</ul><br />
<br />
== Configuration ==<br />
<br />
<ul><br />
<li>Restart the system with the following command:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced --mem 512M<br />
Note: add '''"--arch x86_64"''' option for 64 bits virtual machines<br />
Note: ignore the errors "timeout waiting for response on VM socket". 768M are needed if you are installing a root filesystem with desktop interface<br />
<li>Access the system through the text console to ease copy-pasting commands:</li><br />
virsh console vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Access the console and switch to root with sudo:</li><br />
sudo su<br />
<li>Update the system</li><br />
apt-get update<br />
apt-get dist-upgrade<br />
<li>Install XML::DOM perl package and ACPI daemon:</li><br />
apt-get install libxml-libxml-perl libnetaddr-ip-perl acpid<br />
<li>For 17.10 or newer install ifupdown</li><br />
apt-get install ifupdown<br />
<!--li>Only for Ubuntu 10.04:</li><br />
<ul><br />
<li>create /media/cdrom* directories:</li><br />
mkdir /media/cdrom0<br />
mkdir /media/cdrom1<br />
ln -s /media/cdrom0 /media/cdrom<br />
ln -s /cdrom /media/cdrom<br />
<li>add the following lines to /etc/fstab:</li><br />
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0<br />
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto,exec,utf8 0 0<br />
</ul--><br />
<li>Install VNX autoconfiguration daemon:</li><br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
Change 'sdb' to 'vdb' if virtio drivers are being used.<br />
<li>Edit the /etc/network/interfaces file and comment out all lines related to the eth0, eth1, etc. interfaces. Leave only the loopback (lo) interface, as in the example below.</li><br />
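 After the edit, the file should contain just the loopback definition, for example:<br />
 # /etc/network/interfaces (only the loopback interface left)<br />
 auto lo<br />
 iface lo inet loopback<br />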
<li>Optional: install graphical user interface.</li><br />
<ul><br />
<li>Minimal:</li><br />
# recommended option<br />
sudo apt-get install lubuntu-desktop<br />
<br />
# old recipe not tested in later versions<br />
sudo apt-get install xorg gnome-core gksu gdm gnome-system-tools gnome-nettool firefox-gnome-support<br />
<li>Complete:</li><br />
sudo apt-get install ubuntu-desktop<br />
Note: to avoid nautilus being launched every time you remotely execute a command on the virtual machine using VNX (which interferes with the normal execution of commands), you should disable the automatic start of programs on media insertion. Go to "System settings->System->Details->Removable Media" and select the checkbox "Never prompt or start programs on media insertion".<br />
<!--<br />
nautilus automount feature. Just execute gconf-editor and create a variable "/apps/nautilus/preferences/media_automount" and set it to 0. <br />
This does not seem to work:<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount" "false"<br />
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set "/apps/nautilus/preferences/media_automount_open" "false"<br />
--><br />
</ul><br />
<li>Optional: install other services:</li><br />
<ul><br />
<li>Apache server:</li><br />
sudo apt-get install apache2<br />
update-rc.d -f apache2 remove # to avoid automatic start in old versions<br />
systemctl disable apache2.service # to avoid automatic start in new versions<br />
<br />
<li>Other tools</li><br />
sudo apt-get install traceroute<br />
sudo apt-get install xterm # needed to have the 'resize' tool to resize consoles <br />
</ul><br />
<br />
<li>Create a file /etc/vnx_rootfs_version to store the version number and information about modifications:</li><br />
<pre><br />
VER=v0.25<br />
OS=Ubuntu 16.04 32 bits<br />
DESC=Basic Ubuntu 16.04 root filesystem without GUI<br />
</pre><br />
<br />
<li>Zero the empty space of the image to allow reducing its size later (the dd command will stop with a "No space left on device" error, which is expected):</li><br />
dd if=/dev/zero of=/mytempfile<br />
rm -f /mytempfile<br />
<br />
<li>Stop the machine with vnx_halt:</li><br />
sudo vnx_halt<br />
<br />
<li>Reduce the size of the image:</li><br />
mv vnx_rootfs_kvm_ubuntu.qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak<br />
qemu-img convert -O qcow2 vnx_rootfs_kvm_ubuntu.qcow2.bak vnx_rootfs_kvm_ubuntu.qcow2<br />
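 Optionally, check the size of the new image and remove the backup once you have verified that the new image boots correctly (adding the '-c' option to 'qemu-img convert' produces a compressed, smaller image at the cost of some extra CPU at runtime):<br />
 qemu-img info vnx_rootfs_kvm_ubuntu.qcow2<br />
 rm vnx_rootfs_kvm_ubuntu.qcow2.bak<br />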
<br />
</ul><br />
<br />
If everything went well, your root filesystem will be ready to be used with VNX. You can make a simple test using the simple_ubuntu.xml scenario distributed with VNX.<br />
<br />
== Installing additional software ==<br />
<br />
To install additional software or to modify your root file system, you just have to:<br />
<ul><br />
<li>Start a virtual machine from it:</li><br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2<br />
<li>Check network connectivity. You may have to activate the network interface by hand:</li><br />
dhclient eth0<br />
Note: use "ip link show" to know which network interface to use.<br />
<li>Do the modifications you want.</li><br />
<li>Finally, halt the system using:</li><br />
vnx_halt<br />
</ul><br />
<br />
==== Examples ====<br />
<br />
<ul><br />
<li>dhcp server and relay:</li><br />
<ul><br />
<li>Install dhcp3 packages:</li><br />
apt-get install dhcp3-server dhcp3-relay<br />
<li>Disable autostart (optional):</li><br />
update-rc.d -f isc-dhcp-server remove<br />
update-rc.d -f isc-dhcp-relay remove<br />
</ul><br />
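As a minimal illustration, a dhcpd.conf entry for one of the scenario subnets could look like the sketch below (addresses are placeholders; depending on the package version the file is /etc/dhcp/dhcpd.conf or /etc/dhcp3/dhcpd.conf):<br />
 subnet 10.0.0.0 netmask 255.255.255.0 {<br />
   range 10.0.0.100 10.0.0.200;<br />
   option routers 10.0.0.1;<br />
 }<br />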
<br />
<br />
</ul><br />
<br />
== Updating VNXACED ==<br />
<br />
You can automatically update the VNXACE daemon with the following command:<br />
vnx --modify-rootfs vnx_rootfs_kvm_ubuntu.qcow2 --update-aced -y<br />
If VNXACE daemon is not updated automatically, you can do it manually by accessing the virtual machine console and type:<br />
mount /dev/sdb /mnt/<br />
perl /mnt/vnxaced-lf/install_vnxaced<br />
<br />
== Known problems ==<br />
<br />
<ul><br />
<li>Sometimes after restarting, the virtual machines stop at the grub menu and do not start until you manually choose an option. To avoid this, just follow the instructions here: http://www.linuxquestions.org/questions/linux-server-73/how-to-disable-grub-2-menu-even-after-server-crash-796562/. Beware that the changes you make to the grub.cfg file are lost after executing the "update-grub" command.<br />
</li><br />
<li>In Ubuntu 12.04 Desktop, graphical command execution does not work: it fails with "ERROR: no user logged on display :0.0" (see /var/log/vnxaced.log). If you just open a "terminal" window, commands work correctly (opening other applications does not help; only starting a terminal does).</li><br />
<li>Each time a cdrom is mounted (for example, whenever a command is executed on the virtual machine) the following error appears in the console:</li><br />
<pre><br />
Jul 27 22:33:31 vnx kernel: [ 4384.875886] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6<br />
Jul 27 22:33:31 vnx kernel: [ 4385.291374] ata1.01: BMDMA stat 0x5<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493411] sr 0:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 18 00 00 01 00<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493460] ata1.01: cmd a0/01:00:00:00:08/00:00:00:00:00/b0 tag 0 dma 2048 in<br />
Jul 27 22:33:31 vnx kernel: [ 4385.493461] res 01/60:00:00:00:08/00:00:00:00:00/b0 Emask 0x3 (HSM violation)<br />
Jul 27 22:33:31 vnx kernel: [ 4386.263553] ata1.01: status: { ERR }<br />
</pre><br />
Despite the error trace, the commands are executed correctly. This error does not appear on Ubuntu 9.10 filesystems.<br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Main_Page&diff=2643
Main Page
2019-08-28T15:36:36Z
<p>David: /* VNX Latest News */</p>
<hr />
<div>{{Title|Welcome to Virtual Networks over linuX (VNX) web site}}<br />
<br />
__TOC__<br />
<br />
<br />
==VNX Latest News==<br />
<br />
<ul><br />
<li>Follow VNX news in twitter: https://twitter.com/vnx_upm</li><br />
<li>See also the [[vnx-latest-features|latest features implemented]]</li><br />
</ul><br />
'''Aug 27th, 2019''' -- New '''Openstack Stein Laboratory''' virtual scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-stein|here]]. <br />
<br />
'''Aug 31st, 2017''' -- New KVM and LXC root filesystems based on Ubuntu 17.04 available (64 bits only). Download them from [http://vnx.dit.upm.es/vnx/filesystems here] or using the "vnx_download_rootfs" command.<br />
<br />
'''Aug 30th, 2017''' -- Added support for the '''VyOS network operating system'''. VNX now supports the creation of virtual scenarios including VyOS based virtual machines (either KVM or LXC). See more details [[Vnx-latest-features|here]].<br />
<br />
'''Dec 29th, 2016''' -- New KVM root filesystems based on Kali 2016.2 distribution (https://www.kali.org/). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p kali" command. Note: use the <video>vmvga</video> tag in the virtual machine. See the simple_kali64_inet.xml example scenario in latest VNX version.<br />
<br />
'''Dec 11th, 2016''' -- New paper published about the experience of using VNX in networking laboratories:<br />
<ul><br />
<li>D. Fernández, F. J. Ruiz, L. Bellido, E. Pastor, O. Walid and V. Mateos, [http://www.ijee.ie/contents/c320616.html Enhancing Learning Experience in Computer Networking through a Virtualization-Based Laboratory Model], International Journal of Engineering Education Vol. 32, No. 6, pp. 2569–2584, 2016.</li><br />
</ul><br />
<br />
'''Nov 28th, 2016''' -- New KVM root filesystems based on Metasploitable2 distribution (http://r-7.co/Metasploitable2). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p metasploitable" command.<br />
<br />
'''Nov 2nd, 2016''' -- New KVM root filesystems based on Fedora 24 server available (only 64 bits version). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''Oct 23rd, 2016''' -- Openstack Mitaka test scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|here]].<br />
<br />
'''May 25th, 2016''' -- VPLS test scenario based on OpenBSD published. See more details [[Vnx-labo-vpls|here]].<br />
<br />
'''May 16th, 2016''' -- Support for virtio drivers in libvirt virtual machines implemented to improve performance. See more details [[vnx-latest-features|here]].<br />
<br />
'''May 7th, 2016''' -- New KVM and LXC root filesystems based on Ubuntu 16.04 available (have a look at the new Xubuntu version distributed). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''May 2nd, 2016''' -- OpenBSD support: VNX now supports OpenBSD virtual machines thanks to Francisco Javier Ruiz contribution.<br />
<br />
'''March 21st, 2016''' -- See some very interesting SDN virtual scenarios prepared by Carlos Martín-Cleto for his Master's Thesis: https://github.com/cletomcj/vnx-sdn.<br />
<br />
'''February 21st, 2016''' -- New Vagrant and VirtualBox (OVA) [http://goo.gl/8RxXvA VNX demo virtual machines] available to easily test LXC-based virtual scenarios (see instructions for [http://goo.gl/f9jnvA Vagrant] and [http://goo.gl/JdB9ik VirtualBox]).<br />
<br />
'''February 15th, 2016''' -- New KVM root filesystems based on Ubuntu 15.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''February 14th, 2016''' -- New recipe to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-install-fedora23 install VNX on Fedora]. Tested on a fresh copy of Fedora 23 workstation. <br />
<br />
'''July 24th, 2015''' -- New Openstack-Opendaylight laboratory scenarios available: https://goo.gl/JpxCnB. Designed to explore an OpenStack environment running OpenDaylight as the network management provider. Prepared by Raúl Álvarez Pinilla as a result of his Master's Thesis. <br />
<br />
'''June 16th, 2015''' -- New LXC root filesystems based on Ubuntu 15.04 available (32 and 64 bits). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''June 10th, 2015''' -- New root filesystems based on REMnux: A Linux Toolkit for Reverse-Engineering and Analyzing Malware. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use simple_remnux.xml example scenario to test it.<br />
<br />
'''June 6th, 2015''' -- New interesting VNX scenario available: a [[Vnx-labo-fw|security lab]] designed to allow 16 student groups to work together configuring firewalls and using security related tools and distributions.<br />
<br />
'''April 25th, 2015''' -- New root filesystems based on Ubuntu 15.04 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''March 16th, 2015''' -- Latest VNX versions include bash completion capabilities. Just use the tab key to see the available command line options and get help completing option values. <br />
<br />
'''March 6th, 2015''' -- New root filesystems based on Kali 1.1.0 available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Update VNX version with 'vnx_update' command before using it.<br />
<br />
'''October 24th, 2014''' -- New root filesystems based on Ubuntu 14.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''August 27th, 2014''' -- New functionality implemented to specify the position, size and desktop number where the VM console windows are shown by using .cvnx files. See more information [http://web.dit.upm.es/vnxwiki/index.php/Vnx-console-mgmt here].<br />
<br />
'''August 25th, 2014''' -- VNX now supports LXC virtual machines. See the VNX tutorial for LXC [http://web.dit.upm.es/vnxwiki/index.php/Vnx-tutorial-lxc here]. Additionally, see how to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-rootfslxc create] or [http://web.dit.upm.es/vnxwiki/index.php/Vnx-modify-rootfs modify] a LXC root filesystem. <br />
<br />
'''June 27th, 2014''' -- Jorge Somavilla wins the [http://www.coit.es/descargar.php?idfichero=9461 ''Asociación de Telemática'' prize from COIT-AEIT] for his [[References#Final_Degree_Projects|Final Degree Project about VNX]]. See on [https://twitter.com/jsomav/status/484039220626210816 twitter].<br />
<br />
'''June 21st, 2014''' -- New root filesystem based on Kali Linux (formerly Backtrack) available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use the simple_kali.xml and simple_kali64.xml examples to test them (the examples now include a direct Internet connection). Update VNX to the latest version to use them.<br />
<br />
'''June 14th, 2014''' -- Follow VNX news in twitter: https://twitter.com/vnx_upm<br />
<br />
'''June 14th, 2014''' -- A Vagrant virtual machine to easily test VNX has been created. See [[Vnx-tutorial-vagrant|how to use it]] and [[Vnx-create-vagrant-vm|how it has been created]]<br />
<br />
'''October 17th, 2012''' -- New root filesystem added based on [http://www.caine-live.net/ CAINE (Computer Aided INvestigative Environment)] Ubuntu distribution. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. A new simple_caine.xml example has been added to the latest VNX distribution to ease testing it.<br />
<br />
'''June 19th, 2012''' -- A new updated Debian root filesystem has been created, as well as a new UML kernel (ver 3.3.8) to work with it. See [[Vnx-install-root_fs#UML_root_filesystems|how to download and install them]] and the recipes followed for their creation: [[Vnx-rootfsdebian|rootfs]] and [[Vnx-rootfs-uml-kernel|kernel]]. To create the kernel, the traditional UML exec extension kernel patch has been updated to work with kernel 3.3.8. You can find the new kernel patch [http://vnx.dit.upm.es/vnx/kernels/mconsole-exec-3.3.8.patch here]<br />
<br />
'''May 31st, 2012''' -- New beta version of VNX (2.0b.2243) released including distributed deployment capabilities (EDIV). See the [[Docintro|documentation]] for more information.<br />
<br />
'''May 24th, 2012''' -- Jorge Somavilla wins the TNC2012 student poster competition. Read the full story [http://www.terena.org/news/fullstory.php?news_id=3168 here], [http://www.rediris.es/anuncios/2012/20120525_0.html.es here] or [http://www.upm.es/institucional/UPM/CanalUPM/Noticias/2532c60f455e7310VgnVCM10000009c7648aRCRD here]<br />
<br />
==About VNX==<br />
<br />
'''VNX''' is a general purpose open-source virtualization tool designed to help building virtual network testbeds automatically. It allows the definition and automatic deployment of network scenarios made of virtual machines of different types (Linux, Windows, FreeBSD, Olive or Dynamips routers, etc) interconnected following a user-defined topology, possibly connected to external networks.<br />
<br />
'''VNX''' has been developed by the <br />
<!-- [http://www.dit.upm.es/rsti Telecommunication and Internet Networks and Services (RSTI)] research group of the --><br />
[http://www.dit.upm.es Telematics Engineering Department (DIT)] of the [http://www.upm.es/internacional Technical University of Madrid (UPM)].<br />
<br />
'''VNX''' is a useful tool for testing network applications/services over complex testbeds made of virtual nodes and networks, as well as for creating complex network laboratories that allow students to interact with realistic network scenarios. Like other similar tools aimed at creating virtual network scenarios (such as GNS3, NetKit, MLN or Marionnet), VNX provides a way to manage testbeds while avoiding the investment and management complexity needed to create them using real equipment.<br />
<br />
'''VNX''' is made of two main parts: <br />
* an XML language that allows describing the virtual network scenario (VNX specification language)<br />
* the VNX program, which parses the scenario description and builds and manages the virtual scenario over a Linux machine<br />
<br />
'''VNX''' comes with a distributed version (EDIV) that allows the deployment of virtual scenarios over clusters of Linux servers, improving the scalability to scenarios made of tens or even hundreds of virtual machines.<br />
<br />
'''VNX''' is built over the long experience of a previous tool named [http://www.dit.upm.es/vnuml VNUML (Virtual Networks over User Mode Linux)] and brings important new functionalities that overcome the most important limitations the VNUML tool had:<br />
* Integration of new virtualization platforms to allow virtual machines running other operating systems (Windows, FreeBSD, etc) apart from Linux. In this sense:<br />
** VNX uses [http://libvirt.org libvirt] to interact with the virtualization capabilities of the host, allowing the use of most of the virtualization platforms available for Linux (KVM, Xen, etc)<br />
** Integrates the [http://www.ipflow.utc.fr/blog/ Dynamips] and Olive router virtualization platforms to allow limited emulation of Cisco and Juniper routers<br />
** Also integrates Linux Containers (LXC) support<br />
* Individual management of virtual machines <br />
* Autoconfiguration and command execution capabilities for several operating systems: Linux, FreeBSD and Windows (XP and 7)<br />
* Integration of [http://openvswitch.org/ Openvswitch] with support for VLAN configuration, inter-switch connections and SDN parameter configuration (controller IP address, mode, OpenFlow version, etc.).<br />
<br />
'''VNX''' has been developed with the help and support of several people and companies. See the [[VNXteam|VNX team page]] for details.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack&diff=2642
Vnx-labo-openstack
2019-08-28T15:12:38Z
<p>David: </p>
<hr />
<div>{{Title|VNX Openstack laboratories}}<br />
<br />
This is a set of Openstack tutorial scenarios designed to experiment with the [http://openstack.org Openstack] free and open-source software platform for cloud computing.<br />
<br />
Several tutorial scenarios are available covering the Stein, Ocata, Mitaka, Liberty and Kilo Openstack versions and several deployment configurations:<br />
<br />
<ul><br />
<li>'''Openstack Stein:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-stein|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Stein (April 2019) made of four virtual machines: a controller, a network node and two compute nodes all based on LXC. </li><br />
</ul><br />
<br />
<li>'''Openstack Ocata:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-ocata|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Ocata made of four virtual machines: a controller, a network node and two compute nodes, all based on LXC. The deployment scenario used is [http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html Classic with Open vSwitch]</li><br />
</ul><br />
<br />
<li>'''Openstack Mitaka:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|Four-nodes-classic-openvswitch]]. A basic scenario using Openstack Mitaka made of four virtual machines: a controller based on LXC and a network node and two compute nodes based on KVM. The deployment scenario used is [http://docs.openstack.org/mitaka/networking-guide/scenario-classic-ovs.html Classic with Open vSwitch]</li><br />
</ul><br />
<br />
<li>'''Openstack Liberty:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-3nodes-basic-liberty|Liberty 3-nodes-basic]]. A basic scenario using Openstack Liberty made of three KVM virtual machines: a controller with networking capabilities and two compute nodes.</li><br />
<li>[[Vnx-labo-openstack-4nodes-basic-liberty|Liberty 4-nodes-legacy-openvswitch]]: a basic scenario using Openstack Liberty made of four virtual machines: a controller based on LXC and a network node and two compute nodes based on KVM. The deployment scenario used is [http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html Legacy with Open vSwitch]</li><br />
</ul><br />
<br />
<li>'''Openstack Kilo:'''</li><br />
<ul><br />
<li>[[Vnx-labo-openstack-4nodes-basic-kilo|Kilo 4-nodes-basic]]. A basic scenario using Openstack Kilo made of four virtual machines: a controller based on LXC and a network node and two compute nodes based on KVM.</li><br />
</ul><br />
<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2641
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-28T12:23:07Z
<p>David: /* Self Service networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with the Openstack free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example cirros and ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (domain: default, user: admin, password: xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
<br />
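For example, assuming vm1 got the floating IP 10.0.10.102 mentioned above, a quick connectivity test from the host could look like this (adjust the address to the one shown in the Dashboard):<br />
<br />
ping -c 3 10.0.10.102<br />
ssh cirros@10.0.10.102     # 'cirros' is the default user of the cirros image<br />
<br />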
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to communicate with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to communicate with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure. <br />
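<br />
You can check that configuration by logging into one of the compute nodes and listing the interfaces. A minimal sketch, assuming the compute nodes are reachable by name from the host and use the same root/xxxx credentials as the controller:<br />
<br />
slogin root@compute1     # root/xxxx (assumed, same as the controller)<br />
ifconfig eth2            # Tunnels network interface<br />
ifconfig eth3            # VLANs network interface<br />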
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected to VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from the 10.1.2.0/24 and 10.1.3.0/24 ranges, respectively. You can consult the addresses from Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
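<br />
For example, from the console of vmA1 (accessible through Horizon), you could run checks like the following:<br />
<br />
ping -c 3 10.1.2.100     # vmA2, same VLAN (1000)<br />
ping -c 3 10.1.3.100     # vmB2 on VLAN 1001, reached through vlan-router<br />
traceroute 10.1.3.100<br />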
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
<br />
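If you want to check which traffic is currently permitted, you can list the security group rules from the controller (or from the host, if the openstack client is installed there):<br />
<br />
source bin/admin-openrc.sh     # Load admin credentials<br />
openstack security group rule list<br />
<br />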
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command in the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes added by going to the "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not yet added to the list of Hypervisors in the "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
<br />
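Once the command finishes, you can verify from the controller that the new nodes show up as hypervisors:<br />
<br />
slogin root@controller     # root/xxxx<br />
source bin/admin-openrc.sh<br />
openstack hypervisor list<br />
<br />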
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (changing eth0 to the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
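# Change the gnocchi-api default port (8000) to 8041, the port registered above in the metric endpoints<br />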
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! nc -z controller 22; do sleep 1; done<br />
while ! nc -z network 22; do sleep 1; done<br />
while ! nc -z compute1 22; do sleep 1; done<br />
while ! nc -z compute2 22; do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-rootfslxc&diff=2640
Vnx-rootfslxc
2019-08-28T00:36:18Z
<p>David: /* Basic installation */</p>
<hr />
<div>{{Title|How to create a LXC Ubuntu root filesystem for VNX}}<br />
<br />
== Basic installation ==<br />
<br />
Follow this procedure to create a Ubuntu based LXC root filesystem for VNX. The procedure has been tested with Ubuntu versions from 13.10 to 18.04. <br />
<br />
<ul><br />
<li>Create the rootfs with:</li><br />
lxc-create -t ubuntu -n vnx_rootfs_lxc_ubuntu-18.04<br />
Note1: The default username/password is ubuntu/ubuntu.<br><br />
Note2: This method creates an image with the same architecture (32 or 64 bits) as the host. To create a 32 bits image in a 64 bits host, the only method known to work is to follow the procedure described in this page inside a KVM 32 bits virtual machine.<br />
<li>Move the rootfs to VNX filesystems directory:</li><br />
mv /var/lib/lxc/vnx_rootfs_lxc_ubuntu-18.04/ /usr/share/vnx/filesystems/<br />
<li>If using Ubuntu 17.10 or newer (lxc version 2.1 or newer), convert the configuration file to old config format (VNX converts back to the newer format if needed when starting containers):</li><br />
lxc.rootfs.path -> lxc.rootfs<br />
lxc.uts.name -> lxc.utsname<br />
lxc.net.0.type -> lxc.network.type<br />
lxc.net.0.link -> lxc.network.link<br />
lxc.net.0.flags -> lxc.network.flags<br />
lxc.net.0.hwaddr -> lxc.network.hwaddr<br />
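For example, the conversion can be done with a sed one-liner like the following (a minimal sketch, assuming the configuration file has already been moved to the VNX filesystems directory as described above and only the keys listed need renaming):<br />
sed -i -e 's/^lxc\.rootfs\.path/lxc.rootfs/' -e 's/^lxc\.uts\.name/lxc.utsname/' -e 's/^lxc\.net\.0\./lxc.network./' /usr/share/vnx/filesystems/vnx_rootfs_lxc_ubuntu-18.04/config<br />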
<li>Edit the rootfs configuration file (/usr/share/vnx/filesystems/vnx_rootfs_lxc_ubuntu-18.04/config) to reflect the directory change:</li><br />
lxc.rootfs = /usr/share/vnx/filesystems/vnx_rootfs_lxc_ubuntu-18.04/rootfs<br />
lxc.mount = /usr/share/vnx/filesystems/vnx_rootfs_lxc_ubuntu-18.04/fstab<br />
<li>Create the fstab file (if not created):</li><br />
touch vnx_rootfs_lxc_ubuntu-18.04/fstab<br />
<li>Start the new rootfs to configure it and install new software:</li><br />
lxc-start -n vnx -F -f /usr/share/vnx/filesystems/vnx_rootfs_lxc_ubuntu-18.04/config<br />
<li>Once the VM has started, make login (ubuntu/ubuntu) and:</li><br />
<ul><br />
<li>Add VNX user and change the passwords:</li><br />
sudo adduser vnx<br />
sudo adduser vnx sudo<br />
sudo passwd root<br />
<li>Update and install software:</li><br />
sudo apt-get update<br />
sudo apt-get dist-upgrade<br />
sudo apt-get install aptsh openssh-server traceroute telnet nmap apache2 wget tcpdump net-tools ifupdown<br />
update-rc.d -f apache2 remove # to avoid automatic start<br />
systemctl disable apache2 # in newer systems<br />
<li>Change VM name in hosts and hostname files:</li><br />
sudo vi /etc/hosts # change name to vnx<br />
sudo vi /etc/hostname # "<br />
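A non-interactive alternative (just a sketch; it assumes the hostname is still the one set by lxc-create and that the new name is "vnx"):<br />
echo vnx | sudo tee /etc/hostname<br />
sudo sed -i "s/$(hostname)/vnx/g" /etc/hosts<br />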
<li>Enable root access through textual consoles with:</li><br />
echo "pts/0" >> /etc/securetty<br />
echo "pts/1" >> /etc/securetty<br />
echo "pts/2" >> /etc/securetty<br />
echo "pts/3" >> /etc/securetty<br />
<li>'''Important:''' edit /etc/network/interfaces and comment the "inet dhcp" lines to avoid delays at startup. Besides, edit /etc/init/failsafe.conf and change all "sleep XX" commands to "sleep 1".</li><br />
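A rough non-interactive sketch of those two edits (assuming a release that still uses /etc/network/interfaces and upstart's failsafe.conf):<br />
sed -i 's/^\(iface .* inet dhcp\)/#\1/' /etc/network/interfaces<br />
sed -i 's/sleep [0-9]*/sleep 1/g' /etc/init/failsafe.conf<br />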
</ul><br />
<li>Disable auto-upgrades if enabled:</li><br />
sed -i -e 's/"1"/"0"/g' /etc/apt/apt.conf.d/20auto-upgrades<br />
<li>Delete the "mesg n" command in root's .profile to avoid the nasty message<br />
"mesg: ttyname failed: No such device" when executing commands with lxc-attach:</li><br />
sed -i '/^mesg n/d' /root/.profile<br />
<li>Exit and login again with user vnx/xxxx to delete the ubuntu user:</li><br />
sudo deluser ubuntu<br />
<li>Stop the VM with:</li><br />
halt<br />
<br />
<li>If you want to pack the root filesystem into a tar file, use the following command:</li><br />
tar --numeric-owner -czpf vnx_rootfs_lxc_ubuntu-18.04-v025.tgz vnx_rootfs_lxc_ubuntu-18.04-v025<br />
<li>Optionally, create a short link to the rootfs:</li><br />
cd /usr/share/vnx/filesystems<br />
ln -s vnx_rootfs_lxc_ubuntu-18.04 rootfs_lxc<br />
</ul></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Main_Page&diff=2639
Main Page
2019-08-27T12:31:18Z
<p>David: </p>
<hr />
<div>{{Title|Welcome to Virtual Networks over linuX (VNX) web site}}<br />
<br />
__TOC__<br />
<br />
<br />
==VNX Latest News==<br />
<br />
<ul><br />
<li>Follow VNX news in twitter: https://twitter.com/vnx_upm</li><br />
<li>See also the [[vnx-latest-features|latest features implemented]]</li><br />
</ul><br />
'''Aug 27th, 2019''' -- '''Openstack Stein laboratory''' virtual scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-stein|here]].<br />
<br />
'''Aug 31st, 2017''' -- New KVM and LXC root filesystems based on Ubuntu 17.04 available (only 64 bits). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''Aug 30th, 2017''' -- Added support for '''VyOS network operating system'''. Now VNX supports the creation of virtual scenarios including VyOS based virtual machines (either KVM or LXC). See more details [[Vnx-latest-features|here]].<br />
<br />
'''Dec 29th, 2016''' -- New KVM root filesystems based on Kali 2016.2 distribution (https://www.kali.org/). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p kali" command. Note: use the <video>vmvga</video> tag in the virtual machine. See the simple_kali64_inet.xml example scenario in latest VNX version.<br />
<br />
'''Dec 11th, 2016''' -- New paper published about the experience of using VNX in networking laboratories:<br />
<ul><br />
<li>D. Fernández, F. J. Ruiz, L. Bellido, E. Pastor, O. Walid and V. Mateos, [http://www.ijee.ie/contents/c320616.html Enhancing Learning Experience in Computer Networking through a Virtualization-Based Laboratory Model], International Journal of Engineering Education Vol. 32, No. 6, pp. 2569–2584, 2016.</li><br />
</ul><br />
<br />
'''Nov 28th, 2016''' -- New KVM root filesystems based on Metasploitable2 distribution (http://r-7.co/Metasploitable2). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs -p metasploitable" command.<br />
<br />
'''Nov 2nd, 2016''' -- New KVM root filesystems based on Fedora 24 server available (only 64 bits version). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''Oct 23rd, 2016''' -- Openstack Mitaka test scenario released. See more details [[Vnx-labo-openstack-4nodes-classic-ovs-mitaka|here]].<br />
<br />
'''May 25th, 2016''' -- VPLS test scenario based on OpenBSD published. See more details [[Vnx-labo-vpls|here]].<br />
<br />
'''May 16th, 2016''' -- Support for virtio drivers in libvirt virtual machines implemented to improve performance. See more details [[vnx-latest-features|here]].<br />
<br />
'''May 7th, 2016''' -- New KVM and LXC root filesystems based on Ubuntu 16.04 available (have a look at the new Xubuntu version distributed). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''May 2nd, 2016''' -- OpenBSD support: VNX now supports OpenBSD virtual machines thanks to Francisco Javier Ruiz contribution.<br />
<br />
'''March 21st, 2016''' -- See some very interesting SDN virtual scenarios prepared by Carlos Martín-Cleto for his Master's Thesis: https://github.com/cletomcj/vnx-sdn.<br />
<br />
'''February 21st, 2016''' -- New vagrant and virtualbox (OVA) [http://goo.gl/8RxXvA VNX demo virtual machines] available to easily test LXC based virtual scenarios (see instructions for [http://goo.gl/f9jnvA Vagrant] and [http://goo.gl/JdB9ik VirtualBox]).<br />
<br />
'''February 15th, 2016''' -- New KVM root filesystems based on Ubuntu 15.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''February 14th, 2016''' -- New recipe to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-install-fedora23 install VNX on Fedora]. Tested on a fresh copy of Fedora 23 workstation. <br />
<br />
'''July 24th, 2015''' -- New Openstack-Opendaylight laboratory scenarios available: https://goo.gl/JpxCnB. Designed to explore an OpenStack environment running OpenDaylight as the network management provider. Prepared by Raúl Álvarez Pinilla as a result of his Master's Thesis. <br />
<br />
'''June 16th, 2015''' -- New LXC root filesystems based on Ubuntu 15.04 available (32 and 64 bits). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''June 10th, 2015''' -- New root filesystems based on REMnux: A Linux Toolkit for Reverse-Engineering and Analyzing Malware. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use simple_remnux.xml example scenario to test it.<br />
<br />
'''June 6th, 2015''' -- New interesting VNX scenario available: a [[Vnx-labo-fw|security lab]] designed to allow 16 student groups to work together configuring firewalls and using security related tools and distributions.<br />
<br />
'''April 25th, 2015''' -- New root filesystems based on Ubuntu 15.04 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''March 16th, 2015''' -- Latest VNX versions include bash completion capabilities. Just use the tab key to see the command line options available and get help completing option values. <br />
<br />
'''March 6th, 2015''' -- New root filesystems based on Kali 1.1.0 available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Update VNX version with 'vnx_update' command before using it.<br />
<br />
'''October 24th, 2014''' -- New root filesystems based on Ubuntu 14.10 available (server and lubuntu in 32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command.<br />
<br />
'''August 27th, 2014''' -- New functionality implemented to specify the position, size and desktop number where the VM console windows are shown by using .cvnx files. See more information [http://web.dit.upm.es/vnxwiki/index.php/Vnx-console-mgmt here].<br />
<br />
'''August 25th, 2014''' -- VNX now supports LXC virtual machines. See the VNX tutorial for LXC [http://web.dit.upm.es/vnxwiki/index.php/Vnx-tutorial-lxc here]. Additionally, see how to [http://web.dit.upm.es/vnxwiki/index.php/Vnx-rootfslxc create] or [http://web.dit.upm.es/vnxwiki/index.php/Vnx-modify-rootfs modify] a LXC root filesystem. <br />
<br />
'''June 27th, 2014''' -- Jorge Somavilla wins the [http://www.coit.es/descargar.php?idfichero=9461 ''Asociación de Telemática'' prize from COIT-AEIT] to his [[References#Final_Degree_Projects|Final Degree Project about VNX]]. See on [https://twitter.com/jsomav/status/484039220626210816 twitter].<br />
<br />
'''June 21st, 2014''' -- New root filesystem based on Kali Linux (old Backtrack) available (32 and 64 bits versions). Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. Use simple_kali.xml and simple_kali64.xml examples to test them (now the examples include direct Internet connection). Update VNX to latest version to use them.<br />
<br />
'''June 14th, 2014''' -- Follow VNX news in twitter: https://twitter.com/vnx_upm<br />
<br />
'''June 14th, 2014''' -- A Vagrant virtual machine to easily test VNX has been created. See [[Vnx-tutorial-vagrant|how to use it]] and [[Vnx-create-vagrant-vm|how it has been created]]<br />
<br />
'''October 17th, 2012''' -- New root filesystem added based on [http://www.caine-live.net/ CAINE (Computer Aided INvestigative Environment)] Ubuntu distribution. Download it from [http://vnx.dit.upm.es/vnx/filesystems here] or using "vnx_download_rootfs" command. A new simple_caine.xml example has been added to the latest VNX distribution to ease testing it.<br />
<br />
'''June 19th, 2012''' -- A new updated Debian root filesystem has been created, as well as a new UML kernel (ver 3.3.8) to work with it. See [[Vnx-install-root_fs#UML_root_filesystems|how to download and install them]] and the recipes followed for their creation: [[Vnx-rootfsdebian|rootfs]] and [[Vnx-rootfs-uml-kernel|kernel]]. To create the kernel, the traditional UML exec extension kernel patch has been updated to work with kernel 3.3.8. You can find the new kernel patch [http://vnx.dit.upm.es/vnx/kernels/mconsole-exec-3.3.8.patch here]<br />
<br />
'''May 31st, 2012''' -- New beta version of VNX (2.0b.2243) released including distributed deployment capabilities (EDIV). See the [[Docintro|documentation]] for more information.<br />
<br />
'''May 24th, 2012''' -- Jorge Somavilla wins the TNC2012 student poster competition. Read the full story [http://www.terena.org/news/fullstory.php?news_id=3168 here], [http://www.rediris.es/anuncios/2012/20120525_0.html.es here] or [http://www.upm.es/institucional/UPM/CanalUPM/Noticias/2532c60f455e7310VgnVCM10000009c7648aRCRD here]<br />
<br />
==About VNX==<br />
<br />
'''VNX''' is a general purpose open-source virtualization tool designed to help build virtual network testbeds automatically. It allows the definition and automatic deployment of network scenarios made of virtual machines of different types (Linux, Windows, FreeBSD, Olive or Dynamips routers, etc) interconnected following a user-defined topology, possibly connected to external networks.<br />
<br />
'''VNX''' has been developed by the <br />
<!-- [http://www.dit.upm.es/rsti Telecommunication and Internet Networks and Services (RSTI)] research group of the --><br />
[http://www.dit.upm.es Telematics Engineering Department (DIT)] of the [http://www.upm.es/internacional Technical University of Madrid (UPM)].<br />
<br />
'''VNX''' is a useful tool for testing network applications/services over complex testbeds made of virtual nodes and networks, as well as for creating complex network laboratories that allow students to interact with realistic network scenarios. Like other similar tools aimed at creating virtual network scenarios (such as GNS3, NetKit, MLN or Marionnet), VNX provides a way to manage testbeds avoiding the investment and management complexity needed to create them with real equipment.<br />
<br />
'''VNX''' is made of two main parts: <br />
* an XML language that allows describing the virtual network scenario (VNX specification language)<br />
* the VNX program, that parses the scenario description and builds and manages the virtual scenario over a Linux machine<br />
<br />
'''VNX''' comes with a distributed version (EDIV) that allows the deployment of virtual scenarios over clusters of Linux servers, improving the scalability to scenarios made of tens or even hundreds of virtual machines.<br />
<br />
'''VNX''' is built over the long experience of a previous tool named [http://www.dit.upm.es/vnuml VNUML (Virtual Networks over User Mode Linux)] and brings important new functionalities that overcome the most important limitations the VNUML tool had:<br />
* Integration of new virtualization platforms to allow virtual machines running other operating systems (Windows, FreeBSD, etc) apart from Linux. In this sense:<br />
** VNX uses [http://libvirt.org libvirt] to interact with the virtualization capabilities of the host, allowing the use of most of the virtualization platforms available for Linux (KVM, Xen, etc)<br />
** Integrates [http://www.ipflow.utc.fr/blog/ Dynamips] and Olive router virtualization platforms to allow limited emulation of CISCO and Juniper routers<br />
** Integrates also Linux Containers (LXC) support<br />
* Individual management of virtual machines <br />
* Autoconfiguration and command execution capabilities for several operating systems: Linux, FreeBSD and Windows (XP and 7)<br />
* Integration of [http://openvswitch.org/ Openvswitch] with support for VLAN configuration, inter-switches connections and SDN parameter configuration (controller ip address, mode, Openflow version, etc.).<br />
<br />
'''VNX''' has been developed with the help and support of several people and companies. See the [[VNXteam|VNX team page]] for details.</div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2636
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-27T11:54:40Z
<p>David: /* Stopping or releasing the scenario */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
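<br />
You can check that the images have been registered correctly, for example, by logging into the controller (the same command is used by the verify sequence of the scenario XML shown below):<br />
slogin root@controller     # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack image list<br />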
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
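<br />
For example, assuming vm1 was assigned the floating IP 10.0.10.102, a minimal check from the host could be (cirros is the default user of the CirrOS 0.3.x image loaded with load-img):<br />
ping -c 3 10.0.10.102<br />
ssh cirros@10.0.10.102     # well-known default CirrOS 0.3.x password: cubswin:)<br />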
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to connect with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure (a quick way to check this wiring is shown below). <br />
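<br />
For example, a quick way to check that wiring (assuming the management host_mapping created by the scenario also resolves the compute node names, as it does for the controller):<br />
slogin root@compute1           # root/xxxx, as for the controller<br />
ovs-vsctl list-ports br-vlan   # eth3 should appear among the ports of the provider bridge<br />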
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (VLAN 1000), vmB2 (VLAN 1001) and vlan-router (connected to both VLANs). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24 and 10.1.3.0/24 respectively. You can consult the addresses in Horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
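<br />
If you need to inspect or relax those filters, you can do it from the controller; for example, the following commands (the same ones used in the scenario XML below) list the current rules and allow ICMP in the default security group:<br />
source /root/bin/admin-openrc.sh<br />
openstack security group rule list<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />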
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Adding additional compute nodes ==<br />
<br />
Three additional VNX scenarios are provided to add new compute nodes to the scenario. <br />
<br />
For example, to start compute nodes 3 and 4, just:<br />
vnx -f openstack_lab-cmp34.xml -v -t<br />
# Wait for consoles to start<br />
vnx -f openstack_lab-cmp34.xml -v -x start-all<br />
<br />
After that, you can see the new compute nodes by going to the "Admin->Compute->Hypervisors->Compute host" option. However, the new compute nodes are not yet added to the list of hypervisors in the "Admin->Compute->Hypervisors->Hypervisor" option.<br />
<br />
To add them, just execute:<br />
vnx -f openstack_lab.xml -v -x discover-hosts<br />
<br />
The same procedure can be used to start nodes 5 and 6 (openstack_lab-cmp56.xml) and nodes 7 and 8 (openstack_lab-cmp78.xml).<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all ocurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2635
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-27T11:46:13Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8 GB of memory is needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machines images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (domain: default, user: admin, password: xxxx) by starting a browser and pointing it to the controller Horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the other way around (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
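<br />
For example, assuming vm1 has been assigned the floating IP 10.0.10.102 (check it in the Dashboard), you can test it from the host with:<br />
ping -c 3 10.0.10.102<br />
ssh cirros@10.0.10.102    # log in with the default Cirros credentials<br />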
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all the virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
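<br />
Afterwards, you can verify external connectivity from the vm1 console, for example:<br />
ping -c 3 8.8.8.8<br />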
<br />
Besides, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose, you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections (a quick way to inspect them is shown after the list): <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
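<br />
As a minimal check, assuming the scenario is already started and that the compute nodes accept the same root credentials as the controller, you can inspect this setup on a compute node:<br />
slogin root@compute1<br />
ovs-vsctl show      # br-vlan should include eth3; br-tun is created by the OVS agent for the tunnels<br />
ip link show eth2   # MTU lowered to 1450 for the tunnel network<br />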
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three more systems: vmA2 (VLAN 1000), vmB2 (VLAN 1001) and vlan-router (connected to both VLANs). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (an example is shown after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24 and 10.1.3.0/24 respectively. You can consult the addresses in Horizon or by using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
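<br />
For example, from the vmA1 console you can check both the local VLAN and the routed path with something like:<br />
ping -c 3 10.1.2.100      # vmA2, on the same VLAN (1000)<br />
traceroute 10.1.3.100     # vmB2, reached through vlan-router<br />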
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replacing eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the following error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accesible in all VMs<br />
while ! nc -z controller 22; do sleep 1; done<br />
while ! nc -z network 22; do sleep 1; done<br />
while ! nc -z compute1 22; do sleep 1; done<br />
while ! nc -z compute2 22; do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2634
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-27T11:45:26Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with the Openstack free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
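The main commands executed by that sequence are the following (slightly abridged; the full list can be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Network, subnetwork and router<br />
openstack network create net0<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
openstack router create r0<br />
openstack router set r0 --external-gateway ExtNet<br />
openstack router add subnet r0 subnet0<br />
<br />
# Virtual machine, keypair and floating IP<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</pre><br />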
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the other way around (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102). For example:<br />
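ping 10.0.10.102<br />
ssh cirros@10.0.10.102<br />
Note: replace 10.0.10.102 with the floating IP actually assigned to vm1 (shown in the Dashboard or by 'openstack server list'); cirros is the default user of the Cirros image. The vm1 keypair created by the demo sequence is stored in /root/keys/vm1 on the controller, so run 'ssh -i /root/keys/vm1 cirros@...' from there if you prefer key-based access.<br />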
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
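If you prefer to configure the NAT manually, it essentially amounts to enabling IP forwarding and masquerading the ExtNet addressing range on the public interface. A minimal sketch, assuming eth0 is the public interface (vnx_config_nat remains the recommended way and may set up additional rules):<br />
sysctl -w net.ipv4.ip_forward=1<br />
iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />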
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
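Other management commands can be executed in the same way, for example:<br />
openstack network list<br />
openstack floating ip list<br />
openstack server show vm1<br />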
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the OpenStack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure. <br />
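For example, you can inspect these interfaces and the Open vSwitch bridges they are attached to from inside a compute node (a quick check; interface names as defined in the scenario XML):<br />
slogin root@compute1<br />
ip link show eth2<br />
ip link show eth3<br />
ovs-vsctl show<br />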
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the created scenario through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1 and vmB1: dynamic addresses assigned from 10.1.2.0/24 and 10.1.3.0/24 respectively. You can consult the addresses from horizon or using the command:</li><br />
openstack server list<br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
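For example, from the vmA1 console (reachable through the Dashboard), assuming the addresses shown above:<br />
ping 10.1.2.100          # vmA2, on the same VLAN 1000<br />
traceroute 10.1.3.100    # vmB2, reached through the vlan-router<br />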
<br />
Take into account that pings from the exterior virtual machines to the internal ones are not allowed by the default security group filters applied by Openstack.<br />
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replacing eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ "$( openstack hypervisor list --matching compute1 -f value -c State )" != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ "$( openstack hypervisor list --matching compute2 -f value -c State )" != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
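# Change the gnocchi-api port from the default 8000 to 8041<br />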
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! nc -z controller 22; do sleep 1; done<br />
while ! nc -z network 22; do sleep 1; done<br />
while ! nc -z compute1 22; do sleep 1; done<br />
while ! nc -z compute2 22; do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2633
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:30:33Z
<p>David: /* Other Openstack Dashboard screen captures */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed for experimenting with Openstack, the free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
The Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one named "Classic with Open vSwitch" described in previous versions of the Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self-Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
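<br />
For reference, each installation step is implemented as a named command sequence (an <exec seq="..."> block) in the scenario XML shown at the end of this page, and is triggered with "vnx -x". A simplified, illustrative excerpt (see the full specification below):<br />
<br />
<pre><br />
<exec seq="step2" type="verbatim"><br />
    su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
</exec><br />
</pre><br />
<br />
which is executed on the controller node with "vnx -f openstack_lab.xml -v -x step2".<br />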
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with the VNX software installed. At least 8GB of memory is needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller Horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
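<br />
For example, assuming vm1 was assigned the floating IP 10.0.10.102, you could check connectivity with:<br />
<br />
# From the host<br />
ping 10.0.10.102<br />
# From the controller, using the private key saved by create-demo-scenario<br />
chmod 600 /root/keys/vm1<br />
ssh -i /root/keys/vm1 cirros@10.0.10.102<br />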
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all the virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
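<br />
If you prefer to do it by hand, the NAT roughly amounts to enabling IP forwarding and masquerading the ExtNet range (10.0.10.0/24) on the external interface. This is only an illustrative sketch assuming eth0 is the external interface, not necessarily what vnx_config_nat does internally:<br />
<br />
sysctl -w net.ipv4.ip_forward=1<br />
iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />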
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to connect with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure. <br />
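<br />
On the network and compute nodes, eth3 is attached to the br-vlan Open vSwitch bridge during the step52/step53 configuration sequences (see the scenario XML), which is what connects the Openstack provider VLAN networks to the physical VLAN infrastructure:<br />
<br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />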
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the created scenario through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario that creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (example commands are shown after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command in the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replacing eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
== XML specification of Openstack tutorial scenario ==<br />
<br />
<pre><br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
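# Optional check (not part of the original recipe): list the stack once created<br />
#openstack stack list<br />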
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
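# Create the tacker database and grant access to the tacker user<br />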
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
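# Register the tacker user, service and endpoints in Keystone<br />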
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
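# Install the Tacker server from source (stable/ocata branch)<br />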
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
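# Install the Ceilometer agents and the Gnocchi packages<br />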
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
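# Initialize Gnocchi and change its API port from 8000 to 8041<br />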
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image (commented out)<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
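# Restart Open vSwitch and the Neutron agents to apply the new configuration<br />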
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait until ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2632
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:28:50Z
<p>David: /* Other Openstack Dashboard screen captures */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, a free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machines images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load the example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 and vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
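<br />
For example, assuming vm1 was assigned the floating IP 10.0.10.102 (check the actual address in the Dashboard), a quick connectivity test from the host could be:<br />
 ping -c 3 10.0.10.102<br />
 ssh cirros@10.0.10.102<br />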
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three more systems: vmA2 (VLAN 1000), vmB2 (VLAN 1001) and vlan-router (connected to both VLANs). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (see the example after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
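<br />
For example, from the console of vmA1 you could check the connectivity towards the external machines with something like (adjust the addresses if you changed the allocation pools):<br />
 ping -c 3 10.1.2.100<br />
 ping -c 3 10.1.3.100<br />
 traceroute 10.1.3.100<br />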
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command in the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack, a free and open-source<br />
software platform for cloud computing. It is made of four LXC containers:<br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
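# Create the keystone database and bootstrap the Identity service<br />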
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
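# Note: glance_image_ID above is a placeholder for the ID of the trove-mariadb image registered in glance above<br />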
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
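# Restrict Heat management of OS::Nova::Flavor resources to the admin role, as suggested in the Tacker installation guide (Tacker deploys VNFs through Heat)<br />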
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
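# Make the gnocchi-api wrapper listen on port 8041 (the metric endpoint registered above) instead of its default port 8000<br />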
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image (optional, commented out)<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet, TunnNet and VlanNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet, TunnNet and VlanNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet, TunnNet and VlanNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2631
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:28:11Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
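<br />
For example, assuming vm1 got floating IP 10.0.10.102, a quick check from the host could be:<br />
<br />
 ping -c 3 10.0.10.102<br />
 ssh cirros@10.0.10.102   # the cirros image also allows password login with its default credentials<br />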
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected to VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (VLAN 1000), vmB2 (VLAN 1001) and vlan-router (connected to both VLANs). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (an example check is sketched after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
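<br />
For example, assuming the addresses above and that routing through vlan-router is in place, from vmA2 you could run something like:<br />
<br />
 ping -c 3 10.1.2.20      # vmA1, same VLAN (1000)<br />
 traceroute 10.1.3.20     # vmB1, reached through vlan-router<br />
 ssh cirros@10.1.3.20     # log into vmB1 (cirros image)<br />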
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command in the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet interface MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
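# Initialize the Fernet token and credential key repositories used by Keystone<br />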
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the following error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accesible in all VMs<br />
while ! nc -z controller 22; do sleep 1; done<br />
while ! nc -z network 22; do sleep 1; done<br />
while ! nc -z compute1 22; do sleep 1; done<br />
while ! nc -z compute2 22; do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2630
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:27:02Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed for experimenting with Openstack, the free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
The Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of the Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8 GB of memory is needed to execute the scenario.<br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example cirros and ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (domain: default, user: admin, password: xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or the other way round (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
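<br />
For example, assuming vm1 got the floating IP 10.0.10.102, you could check it from the host with:<br />
<br />
ping 10.0.10.102<br />
ssh cirros@10.0.10.102<br />
<br />
The default user of the cirros image is 'cirros'; alternatively, the private key created for vm1 is stored in /root/keys/vm1 on the controller node.<br />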
<br />
You can create a second Cirros virtual machine (vm2) or a third, Ubuntu-based virtual machine (vm3) and test connectivity among all virtual machines with:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
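<br />
If you prefer to set it up by hand, the effect of that command is roughly equivalent to enabling IP forwarding and masquerading the ExtNet subnet (10.0.10.0/24) on the public interface (a minimal sketch, assuming eth0 is the public interface; vnx_config_nat remains the recommended way):<br />
<br />
sysctl -w net.ipv4.ip_forward=1<br />
iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />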
<br />
Besides, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
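<br />
The admin-openrc.sh script just exports the admin credentials used by the openstack client; its contents are essentially the same environment variables set during the Keystone installation step (passwords shown as xxxx), something like:<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />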
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to the VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure.<br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 to external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (see the example commands after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
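<br />
For example, once logged into vmA2 you could check connectivity towards the Openstack VMs with something like:<br />
<br />
ping 10.1.2.20        # vmA1, same VLAN (1000)<br />
traceroute 10.1.3.20  # vmB1, reached through vlan-router<br />
ssh cirros@10.1.2.20  # vmA1 runs the cirros image (default user 'cirros')<br />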
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (a new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2629
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:25:01Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with the Openstack free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8 GB of memory is needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load the example cirros and ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (domain: default, user: admin, password: xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
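<br />
The main commands executed by the create-demo-scenario sequence are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Internal network<br />
openstack network create net0<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Virtual machine with its keypair<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# External network and router<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
openstack router create r0<br />
openstack router set r0 --external-gateway ExtNet<br />
openstack router add subnet r0 subnet0<br />
<br />
# Floating IP for vm1 and security group rules for ICMP, SSH and WWW<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</pre><br />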
<br />
Once created, you should be able to access the vm1 console and to ping or ssh from the host to vm1 and vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
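<br />
For example, assuming the Dashboard shows 10.0.10.102 as the floating IP assigned to vm1, you could test connectivity from the host with ('cirros' is the default user of the Cirros image):<br />
ping -c 3 10.0.10.102<br />
ssh cirros@10.0.10.102<br />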
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu-based virtual machine (vm3) and test connectivity among all virtual machines with:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1, you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
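<br />
If you prefer to set up the NAT manually, it roughly amounts to enabling IP forwarding and masquerading the ExtNet addressing range (10.0.10.0/24) on the host public interface. A minimal sketch, assuming eth0 is that interface (the vnx_config_nat script may perform additional steps):<br />
sysctl -w net.ipv4.ip_forward=1<br />
iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />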
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose, you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
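<br />
The admin-openrc.sh script basically exports the admin credentials and the Keystone endpoint; its contents are along these lines (a sketch based on the variables used during the installation steps; when run from the host, the controller name must be resolvable):<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />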
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Provider networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vmA1 > tmp/vmA1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmA1 --nic net-id=vlan1000 --key-name vmA1<br />
openstack keypair create vmB1 > tmp/vmB1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vmB1 --nic net-id=vlan1001 --key-name vmB1<br />
</pre><br />
<br />
To demonstrate the connectivity of vmA1 and vmB1 with external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA2 (vlan 1000), vmB2 (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vmA1, vmB1, vmA2 and vmB2 using the following IP addresses (see the example after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
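<br />
For example, from vmA2 you could check reachability of vmB1 through the vlan-router with (assuming ping and traceroute are available in the VM images):<br />
ping -c 3 10.1.3.20<br />
traceroute 10.1.3.20<br />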
<br />
You can have a look at the virtual switch that supports the Openstack VLAN Network by executing the following command in the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2628
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:19:36Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, a free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
The Openstack version used is Stein (April 2019), running over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" in previous versions of the Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8 GB of memory is needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example cirros and ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
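<br />
For example, assuming vm1 got the floating IP 10.0.10.102 (check the actual value in the Dashboard) and using the default user of the cirros image, you could try from the host something like:<br />
<br />
 ping 10.0.10.102<br />
 ssh cirros@10.0.10.102<br />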
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all the virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
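<br />
If you want an idea of what that implies, a rough manual sketch (not necessarily what vnx_config_nat does internally, and assuming eth0 is your external interface and that ExtNet uses the 10.0.10.0/24 range of this scenario) would be to enable IP forwarding and masquerade the scenario traffic:<br />
<br />
 sysctl -w net.ipv4.ip_forward=1<br />
 iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />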
<br />
Besides, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
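<br />
Some other commands used by the scenario's own verify sequence that can be handy at this point are, for example:<br />
<br />
 openstack compute service list<br />
 openstack network agent list<br />
 openstack image list<br />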
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vmA1 and vmB1, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001<br />
</pre><br />
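<br />
Note that the legacy neutron client commands above have openstack client equivalents, which are the ones actually used in the create-vlan-demo-scenario section of the scenario XML file. For example, for vlan1000:<br />
<br />
<pre><br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
</pre><br />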
<br />
To demonstrate the connectivity of vm3 and vm4 with external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA (VLAN 1000), vmB (VLAN 1001) and vlan-router (connected to both VLANs). To start it, just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vm3, vm4, vmA and vmB using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
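<br />
For example, logged into vmA2 (the external machine on VLAN 1000), and assuming the cirros default user for the Openstack VMs and that vlan-router routes between both VLANs, connectivity could be checked with something like:<br />
<br />
 ping 10.1.2.20        # vmA1, same VLAN (1000)<br />
 traceroute 10.1.3.20  # vmB1, reached through vlan-router<br />
 ssh cirros@10.1.2.20<br />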
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
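<br />
The bridges created inside the nodes can be inspected in the same way; for instance, from the network node (slogin root@network, same root credentials as the controller, assuming the host name mapping set up by the scenario) you could run:<br />
<br />
 ovs-vsctl show<br />
 ovs-ofctl dump-flows br-vlan<br />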
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack, a free and open-source<br />
software platform for cloud computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2627
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:18:35Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory is needed to execute the scenario.<br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example cirros and ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
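For example, assuming vm1 was assigned the floating IP 10.0.10.102 shown in the Dashboard:<br />
<br />
 ping -c 3 10.0.10.102<br />
 ssh cirros@10.0.10.102     # log in with the default credentials of the cirros image<br />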
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT in the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
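<br />
If you are unsure which interface is the public one, a quick way to find it (assuming your default route goes through it) is:<br />
<br />
 ip route get 8.8.8.8    # the public interface name appears after "dev" in the output<br />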
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
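<br />
Other useful commands you can try from the same session, for example:<br />
<br />
 openstack network list<br />
 openstack hypervisor list<br />
 openstack image list<br />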
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to Tunnels network and used to connect with VMs in other compute nodes or routers in the network node<br />
* '''eth3''', connected to VLANs network and used to connect with VMs in other compute nodes and also to connect to external systems through the Providers networks infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vm3 and vm4, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
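You can also check the created resources from the controller command line, for example:<br />
<br />
 source bin/admin-openrc.sh<br />
 openstack network show vlan1000<br />
 openstack subnet list<br />
 openstack server list<br />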
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
</pre><br />
<br />
To demonstrate the connectivity of vm3 and vm4 with external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA (vlan 1000), vmB (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vm3, vm4, vmA and vmB using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
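For example, from the vmA console you could check connectivity towards the addresses listed above with something like:<br />
<br />
 ping -c 3 10.1.2.20        # VM on the same VLAN (1000)<br />
 traceroute 10.1.3.20       # VM on VLAN 1001, crossing the vlan-router<br />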
<br />
You can have a look at the virtual switch that supports the Openstack VLAN Network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# CentOS image<br />
# Cirros image <br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait until ssh is accessible in all VMs<br />
while ! nc -z controller 22; do sleep 1; done<br />
while ! nc -z network 22; do sleep 1; done<br />
while ! nc -z compute1 22; do sleep 1; done<br />
while ! nc -z compute2 22; do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2626
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:17:02Z
<p>David: /* Provider networks example */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with the Openstack free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller Horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
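<br />
If you just want to check from the command line that Horizon is already answering (for instance right after the start-all step has finished), a simple probe like the following can be used, assuming the controller keeps the 10.0.10.11 address shown in the topology:<br />
<br />
curl -s -o /dev/null -w "%{http_code}\n" http://10.0.10.11/horizon<br />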
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
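<br />
For example, assuming vm1 was assigned the floating IP 10.0.10.102 (check the actual value in the Dashboard or with 'openstack server list' on the controller), a quick test from the host could be:<br />
<br />
ping -c 3 10.0.10.102<br />
ssh cirros@10.0.10.102     # Cirros 0.3.x default credentials: cirros / cubswin:)<br />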
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all the virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
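<br />
If you prefer to set up the NAT by hand instead, something along these lines should work; this is just a rough sketch, assuming eth0 is the external interface and that ExtNet uses the 10.0.10.0/24 range shown in the topology (vnx_config_nat may apply additional rules):<br />
<br />
echo 1 > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />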
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the OpenStack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
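<br />
Once the client is installed, a few other read-only commands are handy to inspect the demo scenarios (all of them standard openstack client commands):<br />
<br />
openstack network list<br />
openstack router list<br />
openstack floating ip list<br />
openstack server list --long<br />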
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnels network and used to connect with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLANs network and used for the same purpose and also to connect to external systems through the VLAN based network infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That command will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vm3 and vm4, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
</pre><br />
<br />
To demonstrate the connectivity of vm3 and vm4 with external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA (vlan 1000), vmB (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vm3, vm4, vmA and vmB using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
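<br />
As a minimal check, assuming vm3 was allocated an address such as 10.1.2.5 on vlan1000 (verify the real one with 'openstack server list' on the controller), you could run from the external machine attached to VLAN 1000 (10.1.2.100 in the list above):<br />
<br />
ping -c 3 10.1.2.5            # same VLAN, switched directly<br />
traceroute 10.1.3.100         # reaches VLAN 1001 through vlan-router<br />
ssh cirros@10.1.2.5           # Cirros 0.3.x default credentials: cirros / cubswin:)<br />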
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
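<br />
Besides the global view given by 'ovs-vsctl show', the following standard Open vSwitch commands can help identify the bridges and ports involved (the bridge names depend on how VNX created the scenario, so list them first and adjust accordingly):<br />
<br />
ovs-vsctl list-br                  # list all OVS bridges on the host<br />
ovs-vsctl list-ports VlanNet       # ports attached to a given bridge (adjust the name to the list-br output)<br />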
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replacing eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with the Openstack free and open-source <br />
software platform for cloud-computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (a new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
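# Create gnocchi cache and data directories<br />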
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
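# Change the gnocchi-api default listening port from 8000 to 8041<br />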
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait until ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2625
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:13:54Z
<p>David: /* Requirements */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed for experimenting with Openstack, the free and open-source cloud-computing software platform.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
The Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one named "Classic with Open vSwitch" in previous versions of the Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with the VNX software installed. At least 8 GB of memory is needed to run the scenario. <br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario, with the virtual machine images included, and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load the example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (default/admin/xxxx) by starting a browser and pointing it to the controller Horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh between the host and vm1 in both directions (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
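For example, assuming vm1 was assigned the floating IP 10.0.10.102, you can check connectivity from the host with:<br />
ping -c 3 10.0.10.102<br />
ssh cirros@10.0.10.102 # default CirrOS 0.3.x credentials: cirros / cubswin:)<br />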
<br />
You can create a second Cirros virtual machine (vm2) or a third, Ubuntu-based virtual machine (vm3) and test connectivity among all the virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
<br />
In addition, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections: <br />
* '''eth2''', connected to the Tunnel network and used to connect with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLAN network and used for the same purpose, and also to connect to external systems through the VLAN-based network infrastructure. <br />
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That scenario will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vm3 and vm4, connected to those networks. You can see the created scenario through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001<br />
</pre><br />
<br />
To demonstrate the connectivity of vm3 and vm4 with external systems connected on VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA (vlan 1000), vmB (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vm3, vm4, vmA and vmB using the following IP addresses:<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
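For example, using the addresses listed above, a quick connectivity check from vmA2 (10.1.2.100) could be:<br />
ping -c 3 10.1.3.100 # vmB2, on VLAN 1001, reached through vlan-router<br />
ping -c 3 10.1.3.20 # vmB1, inside Openstack (the create-vlan-demo-scenario command adds the ICMP security group rule that allows this)<br />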
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replacing eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed for experimenting with Openstack, the free and<br />
open-source cloud-computing software platform. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute1 -f value -c State ) != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ $( openstack hypervisor list --matching compute2 -f value -c State ) != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the following error:<br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
ovs-vsctl add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David
http://web.dit.upm.es/vnxwiki/index.php?title=Vnx-labo-openstack-4nodes-classic-ovs-stein&diff=2624
Vnx-labo-openstack-4nodes-classic-ovs-stein
2019-08-26T14:13:20Z
<p>David: /* Requirements */</p>
<hr />
<div>{{Title|VNX Openstack Stein four nodes classic scenario using Open vSwitch}}<br />
<br />
== Introduction ==<br />
<br />
This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source software platform for cloud computing.<br />
<br />
The scenario is made of four virtual machines: a controller node, a network node and two compute nodes, all based on LXC. Optionally, new compute nodes can be added by starting additional VNX scenarios.<br />
<br />
Openstack version used is Stein (April 2019) over Ubuntu 18.04 LTS. The deployment scenario is the one that was named "Classic with Open vSwitch" and was described in previous versions of Openstack documentation (https://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html). It is prepared to experiment either with the deployment of virtual machines using "Self Service Networks" provided by Openstack, or with the use of external "Provider networks".<br />
<br />
The scenario is similar to the ones developed by Raul Alvarez to test OpenDaylight-Openstack integration, but instead of using Devstack to configure Openstack nodes, the configuration is done by means of commands integrated into the VNX scenario following [https://docs.openstack.org/stein/install/ Openstack Stein installation recipes].<br />
<br />
[[File:Openstack_tutorial-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 1: Openstack tutorial scenario'''</div>]]<br />
<br />
== Requirements ==<br />
<br />
To use the scenario you need a Linux computer (Ubuntu 16.04 or later recommended) with VNX software installed. At least 8GB of memory are needed to execute the scenario.<br />
<br />
See how to install VNX here: http://vnx.dit.upm.es/vnx/index.php/Vnx-install<br />
<br />
If already installed, update VNX to the latest version with:<br />
<br />
vnx_update<br />
<br />
== Installation ==<br />
<br />
Download the scenario with the virtual machine images included and unpack it:<br />
<br />
wget http://idefix.dit.upm.es/vnx/examples/openstack/openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
sudo vnx --unpack openstack_lab-stein_4n_classic_ovs-v01-with-rootfs.tgz<br />
<br />
== Starting the scenario ==<br />
<br />
Start the scenario, configure it and load example Cirros and Ubuntu images with:<br />
cd openstack_lab-stein_4n_classic_ovs-v01<br />
# Start the scenario<br />
sudo vnx -f openstack_lab.xml -v --create<br />
# Wait for all consoles to have started and configure all Openstack services<br />
vnx -f openstack_lab.xml -v -x start-all<br />
# Load vm images in GLANCE<br />
vnx -f openstack_lab.xml -v -x load-img<br />
<br />
<br />
[[File:Tutorial-openstack-stein-4n-classic-ovs.png|center|thumb|600px|<div align=center><br />
'''Figure 2: Openstack tutorial detailed topology'''</div>]]<br />
<br />
Once started, you can connect to the Openstack Dashboard (credentials: default/admin/xxxx) by starting a browser and pointing it to the controller horizon page. For example:<br />
<br />
firefox 10.0.10.11/horizon<br />
<br />
== Self Service networks example ==<br />
<br />
Access the network topology Dashboard page (Project->Network->Network topology) and create a simple demo scenario inside Openstack:<br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-scenario<br />
<br />
You should see the simple scenario as it is being created through the Dashboard.<br />
<br />
Once created, you should be able to access the vm1 console, and to ping or ssh from the host to vm1 or vice versa (see the floating IP assigned to vm1 in the Dashboard, probably 10.0.10.102).<br />
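For example, assuming the floating IP assigned to vm1 is 10.0.10.102 (check the actual value in the Dashboard) and that the private key created by create-demo-scenario was stored in /root/keys/vm1 on the controller, you could try something like:<br />
 # from the host<br />
 ping 10.0.10.102<br />
 # from the controller (the default Cirros user is 'cirros')<br />
 ssh -i /root/keys/vm1 cirros@10.0.10.102<br />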
<br />
You can create a second Cirros virtual machine (vm2) or a third Ubuntu virtual machine (vm3) and test connectivity among all the virtual machines with: <br />
<br />
vnx -f openstack_lab.xml -v -x create-demo-vm2<br />
vnx -f openstack_lab.xml -v -x create-demo-vm3<br />
<br />
To allow external Internet access from vm1 you have to configure NAT on the host. You can easily do it using the '''vnx_config_nat''' command distributed with VNX. Just find out the name of the public network interface of your host (e.g. eth0) and execute:<br />
<br />
vnx_config_nat ExtNet eth0<br />
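<br />
In case you prefer to set up the NAT manually instead of using vnx_config_nat, a minimal equivalent sketch (assuming eth0 is the host public interface and that ExtNet uses the 10.0.10.0/24 range defined in the scenario) would be:<br />
 sysctl -w net.ipv4.ip_forward=1<br />
 iptables -t nat -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE<br />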
<br />
Besides, you can access the Openstack controller by ssh from the host and execute management commands directly:<br />
slogin root@controller # root/xxxx<br />
source bin/admin-openrc.sh # Load admin credentials<br />
<br />
For example, to show the virtual machines started:<br />
openstack server list<br />
<br />
You can also execute those commands from the host where the virtual scenario is started. For that purpose you need to install the Openstack client first:<br />
<br />
pip install python-openstackclient<br />
source bin/admin-openrc.sh # Load admin credentials<br />
openstack server list<br />
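<br />
Note that bin/admin-openrc.sh simply exports the Keystone admin credentials configured in the scenario. A sketch of its contents (assuming the host can resolve the name controller, for example through an /etc/hosts entry pointing to 10.0.10.11):<br />
 export OS_USERNAME=admin<br />
 export OS_PASSWORD=xxxx<br />
 export OS_PROJECT_NAME=admin<br />
 export OS_USER_DOMAIN_NAME=Default<br />
 export OS_PROJECT_DOMAIN_NAME=Default<br />
 export OS_AUTH_URL=http://controller:5000/v3<br />
 export OS_IDENTITY_API_VERSION=3<br />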
<br />
== Provider networks example ==<br />
<br />
Compute nodes in the Openstack virtual lab scenario have two network interfaces for internal and external connections (see the check commands after the list): <br />
* '''eth2''', connected to the Tunnel network and used to connect with VMs in other compute nodes or with routers in the network node<br />
* '''eth3''', connected to the VLAN network and used for the same purpose and also to connect to external systems through the VLAN-based network infrastructure. <br />
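<br />
For reference, the scenario attaches eth3 of each compute node to an Open vSwitch bridge named br-vlan (see the step53 commands in the scenario XML file). You can check it from inside a compute node, for example (assuming the same root credentials as the controller):<br />
 slogin root@compute1<br />
 ovs-vsctl show        # br-vlan should list eth3 as a port<br />
 ip link show eth2     # Tunnel network interface, MTU 1450<br />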
<br />
To demonstrate how Openstack VMs can be connected with external systems through the VLAN network switches, an additional demo scenario is included. Just execute:<br />
vnx -f openstack_lab.xml -v -x create-vlan-demo-scenario<br />
<br />
That command will create two new networks and subnetworks associated with VLANs 1000 and 1001, and two VMs, vm3 and vm4, connected to those networks. You can see the scenario created through the Openstack Dashboard.<br />
<br />
[[File:Openstack-provider-networks-example.png|center|thumb|600px|<div align=center><br />
'''Figure 3: Provider networks testing scenario'''</div>]]<br />
<br />
The commands used to create those networks and VMs are the following (they can also be consulted in the scenario XML file):<br />
<br />
<pre><br />
# Networks<br />
neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
<br />
# VMs<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001<br />
</pre><br />
<br />
To demonstrate the connectivity of vm3 and vm4 with external systems connected to VLANs 1000/1001, you can start an additional virtual scenario which creates three additional systems: vmA (vlan 1000), vmB (vlan 1001) and vlan-router (connected to both vlans). To start it just execute:<br />
vnx -f openstack_lab-vms-vlan.xml -v -t<br />
<br />
[[File:Openstack_lab-vms-vlan-map.png|center|thumb|600px|<div align=center><br />
'''Figure 4: Provider networks demo external scenario'''</div>]]<br />
<br />
Once the scenario is started, you should be able to ping, traceroute and ssh among vm3, vm4, vmA and vmB using the following IP addresses (an example is shown after the list):<br />
<ul><br />
<li>Virtual machines inside Openstack:</li><br />
<ul><br />
<li>vmA1: 10.1.2.20/24</li><br />
<li>vmB1: 10.1.3.20/24</li><br />
</ul><br />
<li>Virtual machines outside Openstack:</li><br />
<ul><br />
<li>vmA2: 10.1.2.100/24</li><br />
<li>vmB2: 10.1.3.100/24</li><br />
</ul><br />
</ul><br />
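<br />
For example, assuming the addresses listed above, from vmA2 (outside Openstack) you could check the connectivity towards vmB1 (inside Openstack) with something like:<br />
 ping 10.1.3.20<br />
 traceroute 10.1.3.20     # the vlan-router should appear as the intermediate hop<br />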
<br />
You can have a look at the virtual switch that supports the Openstack VLAN network by executing the following command on the host:<br />
ovs-vsctl show<br />
<br />
<br />
[[File:Vnx-demo-scenario-openstack-stein.png|center|thumb|600px|<div align=center><br />
'''Figure 5: Openstack Dashboard view of the demo virtual scenarios created'''</div>]]<br />
<br />
== Stopping or releasing the scenario ==<br />
<br />
To stop the scenario preserving the configuration and the changes made:<br />
<br />
vnx -f openstack_lab.xml -v --shutdown<br />
<br />
To start it again use:<br />
<br />
vnx -f openstack_lab.xml -v --start<br />
<br />
To stop the scenario destroying all the configuration and changes made:<br />
<br />
vnx -f openstack_lab.xml -v --destroy<br />
<br />
To unconfigure the NAT, just execute (replace eth0 with the name of your external interface):<br />
<br />
vnx_config_nat -d ExtNet eth0<br />
<br />
== Other useful information ==<br />
<br />
To pack the scenario in a tgz file:<br />
<br />
bin/pack-scenario-with-rootfs # including rootfs<br />
bin/pack-scenario # without rootfs<br />
<br />
== Other Openstack Dashboard screen captures ==<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-compute-overview.png|center|thumb|600px|<div align=center><br />
'''Figure 6: Openstack Dashboard compute overview'''</div>]]<br />
<br />
[[File:Vnx-demo-scenario-openstack-mitaka-instances.png|center|thumb|600px|<div align=center><br />
'''Figure 7: Openstack Dashboard view of the demo virtual machines created'''</div>]]<br />
<br />
<?xml version="1.0" encoding="UTF-8"?><br />
<br />
<!--<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
VNX Sample scenarios<br />
~~~~~~~~~~~~~~~~~~~~~~<br />
<br />
Name: openstack_tutorial-stein<br />
<br />
Description: This is an Openstack tutorial scenario designed to experiment with Openstack, the free and open-source<br />
software platform for cloud computing. It is made of four LXC containers: <br />
- one controller<br />
- one network node<br />
- two compute nodes<br />
Openstack version used: Stein.<br />
The network configuration is based on the one named "Classic with Open vSwitch" described here:<br />
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html<br />
<br />
Author: David Fernandez (david@dit.upm.es)<br />
<br />
This file is part of the Virtual Networks over LinuX (VNX) Project distribution. <br />
(www: http://www.dit.upm.es/vnx - e-mail: vnx@dit.upm.es) <br />
<br />
Copyright (C) 2019 Departamento de Ingenieria de Sistemas Telematicos (DIT)<br />
Universidad Politecnica de Madrid (UPM)<br />
SPAIN<br />
--><br />
<br />
<vnx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br />
xsi:noNamespaceSchemaLocation="/usr/share/xml/vnx/vnx-2.00.xsd"><br />
<global><br />
<version>2.0</version><br />
<scenario_name>openstack_tutorial-stein</scenario_name><br />
<ssh_key>/root/.ssh/id_rsa.pub</ssh_key><br />
<automac offset="0"/><br />
<!--vm_mgmt type="none"/--><br />
<vm_mgmt type="private" network="10.20.0.0" mask="24" offset="0"><br />
<host_mapping /><br />
</vm_mgmt> <br />
<vm_defaults><br />
<console id="0" display="no"/><br />
<console id="1" display="yes"/><br />
</vm_defaults><br />
<cmd-seq seq="step1-6">step1,step2,step3,step3b,step4,step5,step6</cmd-seq><br />
<cmd-seq seq="step4">step41,step42,step43,step44</cmd-seq><br />
<cmd-seq seq="step5">step51,step52,step53</cmd-seq><br />
<!-- start-all for 'noconfig' scenario: all installation steps included --><br />
<!--cmd-seq seq="start-all">step1,step2,step3,step4,step5,step6</cmd-seq--><br />
<!-- start-all for configured scenario: only network and compute node steps included --><br />
<cmd-seq seq="step10">step101,step102</cmd-seq><br />
<cmd-seq seq="start-all">step00,step42,step43,step44,step52,step53</cmd-seq><br />
<cmd-seq seq="discover-hosts">step44</cmd-seq><br />
</global><br />
<br />
<net name="MgmtNet" mode="openvswitch" mtu="1450"/><br />
<net name="TunnNet" mode="openvswitch" mtu="1450"/><br />
<net name="ExtNet" mode="openvswitch" /><br />
<net name="VlanNet" mode="openvswitch" /><br />
<net name="virbr0" mode="virtual_bridge" managed="no"/><br />
<br />
<vm name="controller" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-controller</filesystem><br />
<mem>1G</mem><br />
<!--console id="0" display="yes"/--><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.11/24</ipv4><br />
</if><br />
<if id="2" net="ExtNet"><br />
<ipv4>10.0.10.11/24</ipv4><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-controller.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/controller/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
<br />
# Change owner of secret_key to horizon to avoid a 500 error when<br />
# accessing horizon (new problem that arose in v04)<br />
# See: https://ask.openstack.org/en/question/30059/getting-500-internal-server-error-while-accessing-horizon-dashboard-in-ubuntu-icehouse/<br />
chown horizon /var/lib/openstack-dashboard/secret_key<br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
</exec><br />
<br />
<exec seq="step00" type="verbatim"><br />
# Restart nova services<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
</exec><br />
<br />
<!-- STEP 1: Basic services --><br />
<filetree seq="step1" root="/etc/mysql/mariadb.conf.d/">conf/controller/mysql/mysqld_openstack.cnf</filetree><br />
<filetree seq="step1" root="/etc/">conf/controller/memcached/memcached.conf</filetree><br />
<filetree seq="step1" root="/etc/default/">conf/controller/etcd/etcd</filetree><br />
<!--filetree seq="step1" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree!--><br />
<exec seq="step1" type="verbatim"><br />
<br />
# Stop nova services. Before being configured, they consume a lot of CPU<br />
service nova-scheduler stop<br />
service nova-api stop<br />
service nova-conductor stop<br />
<br />
# Change all occurrences of utf8mb4 to utf8. See comment above<br />
#for f in $( find /etc/mysql/mariadb.conf.d/ -type f ); do echo "Changing utf8mb4 to utf8 in file $f"; sed -i -e 's/utf8mb4/utf8/g' $f; done<br />
service mysql restart<br />
#mysql_secure_installation # to be run manually<br />
<br />
rabbitmqctl add_user openstack xxxx<br />
rabbitmqctl set_permissions openstack ".*" ".*" ".*" <br />
<br />
service memcached restart<br />
<br />
systemctl enable etcd<br />
systemctl start etcd<br />
<br />
#service mongodb stop<br />
#rm -f /var/lib/mongodb/journal/prealloc.*<br />
#service mongodb start<br />
</exec><br />
<br />
<!-- STEP 2: Identity service --><br />
<filetree seq="step2" root="/etc/keystone/">conf/controller/keystone/keystone.conf</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/admin-openrc.sh</filetree><br />
<filetree seq="step2" root="/root/bin/">conf/controller/keystone/demo-openrc.sh</filetree><br />
<exec seq="step2" type="verbatim"><br />
count=1; while ! mysqladmin ping ; do echo -n $count; echo ": waiting for mysql ..."; ((count++)) &amp;&amp; ((count==6)) &amp;&amp; echo "--" &amp;&amp; echo "-- ERROR: database not ready." &amp;&amp; echo "--" &amp;&amp; break; sleep 2; done<br />
</exec><br />
<exec seq="step2" type="verbatim"><br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE keystone;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
su -s /bin/sh -c "keystone-manage db_sync" keystone<br />
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone<br />
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone<br />
<br />
keystone-manage bootstrap --bootstrap-password xxxx \<br />
--bootstrap-admin-url http://controller:5000/v3/ \<br />
--bootstrap-internal-url http://controller:5000/v3/ \<br />
--bootstrap-public-url http://controller:5000/v3/ \<br />
--bootstrap-region-id RegionOne<br />
<br />
echo "ServerName controller" >> /etc/apache2/apache2.conf<br />
#ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled<br />
service apache2 restart<br />
rm -f /var/lib/keystone/keystone.db<br />
sleep 5<br />
<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=xxxx<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_AUTH_URL=http://controller:5000/v3<br />
export OS_IDENTITY_API_VERSION=3<br />
<br />
# Create users and projects<br />
openstack project create --domain default --description "Service Project" service<br />
openstack project create --domain default --description "Demo Project" demo<br />
openstack user create --domain default --password=xxxx demo<br />
openstack role create user<br />
openstack role add --project demo --user demo user<br />
</exec><br />
<br />
<!-- <br />
STEP 3: Image service (Glance) <br />
--><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-api.conf</filetree><br />
<filetree seq="step3" root="/etc/glance/">conf/controller/glance/glance-registry.conf</filetree><br />
<exec seq="step3" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE glance;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx glance<br />
openstack role add --project service --user glance admin<br />
openstack service create --name glance --description "OpenStack Image" image<br />
openstack endpoint create --region RegionOne image public http://controller:9292<br />
openstack endpoint create --region RegionOne image internal http://controller:9292<br />
openstack endpoint create --region RegionOne image admin http://controller:9292<br />
<br />
su -s /bin/sh -c "glance-manage db_sync" glance<br />
service glance-registry restart<br />
service glance-api restart<br />
#rm -f /var/lib/glance/glance.sqlite<br />
</exec><br />
<br />
<!-- <br />
STEP 3B: Placement service API<br />
--><br />
<filetree seq="step3b" root="/etc/placement/">conf/controller/placement/placement.conf</filetree><br />
<exec seq="step3b" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE placement;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx placement<br />
openstack role add --project service --user placement admin<br />
openstack service create --name placement --description "Placement API" placement<br />
openstack endpoint create --region RegionOne placement public http://controller:8778<br />
openstack endpoint create --region RegionOne placement internal http://controller:8778<br />
openstack endpoint create --region RegionOne placement admin http://controller:8778<br />
su -s /bin/sh -c "placement-manage db sync" placement<br />
service apache2 restart<br />
</exec><br />
<br />
<br />
<br />
<!-- STEP 4: Compute service (Nova) --><br />
<filetree seq="step41" root="/etc/nova/">conf/controller/nova/nova.conf</filetree><br />
<exec seq="step41" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_api;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova;"<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE nova_cell0;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password=xxxx nova<br />
openstack role add --project service --user nova admin<br />
openstack service create --name nova --description "OpenStack Compute" compute<br />
<br />
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1<br />
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1<br />
<br />
# Restart services stopped at step 1 to save CPU<br />
service nova-scheduler start<br />
service nova-api start<br />
service nova-conductor start<br />
<br />
su -s /bin/sh -c "nova-manage api_db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova<br />
su -s /bin/sh -c "nova-manage db sync" nova<br />
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova<br />
<br />
service nova-api restart<br />
service nova-consoleauth restart<br />
service nova-scheduler restart<br />
service nova-conductor restart<br />
service nova-novncproxy restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
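<br />
<!-- Added sketch (not part of the original scenario): Nova controller sanity check, reusing commands from the "verify" sequence; "check-nova" is a hypothetical seq name. --><br />
<exec seq="check-nova" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Compute nodes only show up here after step42/step43 have been executed on compute1 and compute2<br />
openstack compute service list<br />
nova-status upgrade check<br />
</exec><br />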
<br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute1 hypervisor to be up<br />
while [ "$( openstack hypervisor list --matching compute1 -f value -c State )" != 'up' ]; do echo "waiting for compute1 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step43" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Wait for compute2 hypervisor to be up<br />
while [ "$( openstack hypervisor list --matching compute2 -f value -c State )" != 'up' ]; do echo "waiting for compute2 hypervisor..."; sleep 5; done<br />
</exec><br />
<exec seq="step44" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack hypervisor list<br />
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron.conf</filetree><br />
<!--filetree seq="step51" root="/etc/neutron/">conf/controller/neutron/neutron_lbaas.conf</filetree--><br />
<filetree seq="step51" root="/etc/neutron/plugins/ml2/">conf/controller/neutron/ml2_conf.ini</filetree><br />
<exec seq="step51" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE neutron;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password=xxxx neutron<br />
openstack role add --project service --user neutron admin<br />
openstack service create --name neutron --description "OpenStack Networking" network<br />
openstack endpoint create --region RegionOne network public http://controller:9696<br />
openstack endpoint create --region RegionOne network internal http://controller:9696<br />
openstack endpoint create --region RegionOne network admin http://controller:9696<br />
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron<br />
<br />
# LBaaS<br />
neutron-db-manage --subproject neutron-lbaas upgrade head<br />
<br />
# FwaaS<br />
neutron-db-manage --subproject neutron-fwaas upgrade head<br />
<br />
# LBaaS Dashboard panels<br />
#git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard<br />
#cd neutron-lbaas-dashboard<br />
#git checkout stable/mitaka<br />
#python setup.py install<br />
#cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
#cd /usr/share/openstack-dashboard<br />
#./manage.py collectstatic --noinput<br />
#./manage.py compress<br />
#sudo service apache2 restart<br />
<br />
service nova-api restart<br />
service neutron-server restart<br />
</exec><br />
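<br />
<!-- Added sketch (not part of the original scenario): Neutron server sanity check; "check-neutron" is a hypothetical seq name. Agents are listed only after the network and compute nodes complete step52/step53. --><br />
<exec seq="check-neutron" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack extension list --network<br />
openstack network agent list<br />
</exec><br />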
<br />
<!-- STEP 6: Dashboard service --><br />
<filetree seq="step6" root="/etc/openstack-dashboard/">conf/controller/dashboard/local_settings.py</filetree><br />
<exec seq="step6" type="verbatim"><br />
#chown www-data:www-data /var/lib/openstack-dashboard/secret_key<br />
rm /var/lib/openstack-dashboard/secret_key<br />
systemctl enable apache2<br />
service apache2 restart<br />
</exec><br />
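<br />
<!-- Added sketch (not part of the original scenario): Dashboard reachability check; "check-dashboard" is a hypothetical seq name, and the /horizon URL assumes the default Ubuntu openstack-dashboard packaging. --><br />
<exec seq="check-dashboard" type="verbatim"><br />
# A 200 or 302 code indicates that Apache is serving the dashboard<br />
curl -s -o /dev/null -w "%{http_code}\n" http://controller/horizon/<br />
</exec><br />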
<br />
<!-- STEP 7: Trove service --><br />
<cmd-seq seq="step7">step71,step72,step73</cmd-seq><br />
<exec seq="step71" type="verbatim"><br />
apt-get -y install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager trove-conductor python-pip<br />
pip install trove-dashboard==7.0.0.0b2<br />
</exec><br />
<br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-conductor.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-taskmanager.conf</filetree><br />
<filetree seq="step72" root="/etc/trove/">conf/controller/trove/trove-guestagent.conf</filetree><br />
<exec seq="step72" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE trove;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
source /root/bin/admin-openrc.sh<br />
<br />
openstack user create --domain default --password xxxx trove<br />
openstack role add --project service --user trove admin<br />
openstack service create --name trove --description "Database" database<br />
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s<br />
<br />
su -s /bin/sh -c "trove-manage db_sync" trove<br />
<br />
service trove-api restart<br />
service trove-taskmanager restart<br />
service trove-conductor restart<br />
<br />
# Install trove_dashboard<br />
cp -a /usr/local/lib/python2.7/dist-packages/trove_dashboard/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/<br />
service apache2 restart<br />
<br />
</exec><br />
<br />
<exec seq="step73" type="verbatim"><br />
#wget -P /tmp/images http://tarballs.openstack.org/trove/images/ubuntu/mariadb.qcow2<br />
wget -P /tmp/images/ http://138.4.7.228/download/vnx/filesystems/ostack-images/trove/mariadb.qcow2<br />
glance image-create --name "trove-mariadb" --file /tmp/images/mariadb.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/mariadb.qcow2<br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_update mysql ''" trove <br />
su -s /bin/sh -c "trove-manage --config-file /etc/trove/trove.conf datastore_version_update mysql mariadb mariadb glance_image_ID '' 1" trove<br />
<br />
# Create example database<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create m1.smaller --ram 512 --disk 3 --vcpus 1 --id 6<br />
#trove create mysql_instance_1 m1.smaller --size 1 --databases myDB --users userA:xxxx --datastore_version mariadb --datastore mysql<br />
</exec><br />
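<br />
<!-- Added sketch (not part of the original scenario): Trove sanity check after step73; "check-trove" is a hypothetical seq name. --><br />
<exec seq="check-trove" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
trove datastore-list<br />
trove list<br />
</exec><br />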
<br />
<!-- STEP 8: Heat service --><br />
<!--cmd-seq seq="step8">step81,step82</cmd-seq--><br />
<br />
<filetree seq="step8" root="/etc/heat/">conf/controller/heat/heat.conf</filetree><br />
<filetree seq="step8" root="/root/heat/">conf/controller/heat/examples</filetree><br />
<exec seq="step8" type="verbatim"><br />
mysql -u root --password='xxxx' -e "CREATE DATABASE heat;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx heat<br />
openstack role add --project service --user heat admin<br />
openstack service create --name heat --description "Orchestration" orchestration<br />
openstack service create --name heat-cfn --description "Orchestration" cloudformation<br />
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s<br />
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1<br />
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1<br />
openstack domain create --description "Stack projects and users" heat<br />
openstack user create --domain heat --password xxxx heat_domain_admin<br />
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin<br />
openstack role create heat_stack_owner<br />
openstack role add --project demo --user demo heat_stack_owner<br />
openstack role create heat_stack_user<br />
<br />
su -s /bin/sh -c "heat-manage db_sync" heat<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
# Install Orchestration interface in Dashboard<br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get install -y gettext<br />
pip3 install heat-dashboard<br />
<br />
cd /root<br />
git clone https://github.com/openstack/heat-dashboard.git<br />
cd heat-dashboard/<br />
git checkout stable/stein<br />
cp heat_dashboard/enabled/_[1-9]*.py /usr/share/openstack-dashboard/openstack_dashboard/local/enabled<br />
python3 ./manage.py compilemessages<br />
cd /usr/share/openstack-dashboard<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py collectstatic --noinput<br />
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python3 manage.py compress --force<br />
rm /var/lib/openstack-dashboard/secret_key<br />
service apache2 restart<br />
<br />
</exec><br />
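<br />
<!-- Added sketch (not part of the original scenario): Heat sanity check; "check-heat" is a hypothetical seq name. --><br />
<exec seq="check-heat" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack orchestration service list<br />
openstack stack list<br />
</exec><br />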
<br />
<exec seq="create-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-heat<br />
openstack subnet create --network net-heat --gateway 10.1.10.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.10.0/24 --allocation-pool start=10.1.10.8,end=10.1.10.100 subnet-heat<br />
<br />
mkdir -p /root/keys<br />
openstack keypair create key-heat > /root/keys/key-heat<br />
#export NET_ID=$(openstack network list | awk '/ net-heat / { print $2 }')<br />
export NET_ID=$( openstack network list --name net-heat -f value -c ID )<br />
openstack stack create -t /root/heat/examples/demo-template.yml --parameter "NetID=$NET_ID" stack<br />
<br />
</exec><br />
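<br />
<!-- Added sketch (not part of the original scenario): check the demo stack created above; "check-demo-heat" is a hypothetical seq name. --><br />
<exec seq="check-demo-heat" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
openstack stack list<br />
openstack stack resource list stack<br />
</exec><br />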
<br />
<br />
<!-- STEP 9: Tacker service --><br />
<cmd-seq seq="step9">step91,step92</cmd-seq><br />
<br />
<exec seq="step91" type="verbatim"><br />
apt-get -y install python-pip git<br />
pip install --upgrade pip<br />
</exec><br />
<br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/default-vim-config.yaml</filetree><br />
<filetree seq="step92" root="/root/tacker/">conf/controller/tacker/examples</filetree><br />
<exec seq="step92" type="verbatim"><br />
sed -i -e 's/.*"resource_types:OS::Nova::Flavor":.*/ "resource_types:OS::Nova::Flavor": "role:admin",/' /etc/heat/policy.json <br />
<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE tacker;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
source /root/bin/admin-openrc.sh<br />
openstack user create --domain default --password xxxx tacker<br />
openstack role add --project service --user tacker admin<br />
openstack service create --name tacker --description "Tacker Project" nfv-orchestration<br />
openstack endpoint create --region RegionOne nfv-orchestration public http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration internal http://controller:9890/<br />
openstack endpoint create --region RegionOne nfv-orchestration admin http://controller:9890/<br />
<br />
mkdir -p /root/tacker<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker<br />
cd tacker<br />
git checkout stable/ocata<br />
pip install -r requirements.txt<br />
pip install tosca-parser<br />
python setup.py install<br />
mkdir -p /var/log/tacker<br />
<br />
/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head<br />
<br />
# Tacker client<br />
cd /root/tacker<br />
git clone https://github.com/openstack/python-tackerclient<br />
cd python-tackerclient<br />
git checkout stable/ocata<br />
python setup.py install<br />
<br />
# Tacker horizon<br />
cd /root/tacker<br />
git clone https://github.com/openstack/tacker-horizon<br />
cd tacker-horizon<br />
git checkout stable/ocata<br />
python setup.py install<br />
cp tacker_horizon/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/<br />
service apache2 restart<br />
<br />
# Start tacker server<br />
mkdir -p /var/log/tacker<br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
# Register default VIM<br />
tacker vim-register --is-default --config-file /usr/local/etc/tacker/default-vim-config.yaml \<br />
--description "Default VIM" "Openstack-VIM"<br />
<br />
</exec><br />
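<br />
<!-- Added sketch (not part of the original scenario): Tacker sanity check; "check-tacker" is a hypothetical seq name. --><br />
<exec seq="check-tacker" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
tacker vim-list<br />
tacker vnfd-list<br />
</exec><br />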
<exec seq="step93" type="verbatim"><br />
nohup python /usr/local/bin/tacker-server \<br />
--config-file /usr/local/etc/tacker/tacker.conf \<br />
--log-file /var/log/tacker/tacker.log &amp;<br />
<br />
</exec><br />
<br />
<exec seq="create-demo-tacker" type="verbatim"><br />
source /root/bin/demo-openrc.sh<br />
<br />
# Create internal network<br />
openstack network create net-tacker<br />
openstack subnet create --network net-tacker --gateway 10.1.11.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.11.0/24 --allocation-pool start=10.1.11.8,end=10.1.11.100 subnet-tacker<br />
<br />
cd /root/tacker/examples<br />
tacker vnfd-create --vnfd-file sample-vnfd.yaml testd<br />
<br />
# Fails with the following error: <br />
# ERROR: Property error: : resources.VDU1.properties.image: : No module named v2.client<br />
tacker vnf-create --vnfd-id $( tacker vnfd-list | awk '/ testd / { print $2 }' ) test<br />
<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
#apt-get -y install python-pip<br />
#pip install --upgrade pip<br />
#pip install gnocchi[mysql,keystone] gnocchiclient<br />
apt-get -y install ceilometer-collector ceilometer-agent-central ceilometer-agent-notification python-ceilometerclient<br />
apt-get -y install gnocchi-common gnocchi-api gnocchi-metricd gnocchi-statsd python-gnocchiclient<br />
<br />
</exec><br />
<br />
<filetree seq="step102" root="/etc/ceilometer/">conf/controller/ceilometer/ceilometer.conf</filetree><br />
<!--filetree seq="step102" root="/etc/mongodb.conf">conf/controller/mongodb/mongodb.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/ceilometer.conf">conf/controller/ceilometer/apache/ceilometer.conf</filetree--><br />
<!--filetree seq="step102" root="/etc/apache2/sites-available/gnocchi.conf">conf/controller/ceilometer/apache/gnocchi.conf</filetree--><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/gnocchi.conf</filetree><br />
<filetree seq="step102" root="/etc/gnocchi/">conf/controller/ceilometer/api-paste.ini</filetree><br />
<exec seq="step102" type="verbatim"><br />
<br />
# Create gnocchi database<br />
mysql -u root --password='xxxx' -e "CREATE DATABASE gnocchidb default character set utf8;"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'%' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "GRANT ALL PRIVILEGES ON gnocchidb.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'xxxx';"<br />
mysql -u root --password='xxxx' -e "flush privileges;"<br />
<br />
# Ceilometer<br />
source /root/bin/admin-openrc.sh <br />
openstack user create --domain default --password xxxx ceilometer<br />
openstack role add --project service --user ceilometer admin<br />
openstack service create --name ceilometer --description "Telemetry" metering<br />
openstack user create --domain default --password xxxx gnocchi<br />
openstack role add --project service --user gnocchi admin<br />
openstack service create --name gnocchi --description "Metric Service" metric<br />
openstack endpoint create --region RegionOne metric public http://controller:8041<br />
openstack endpoint create --region RegionOne metric internal http://controller:8041<br />
openstack endpoint create --region RegionOne metric admin http://controller:8041<br />
<br />
mkdir -p /var/cache/gnocchi<br />
chown gnocchi:gnocchi -R /var/cache/gnocchi<br />
mkdir -p /var/lib/gnocchi<br />
chown gnocchi:gnocchi -R /var/lib/gnocchi<br />
<br />
gnocchi-upgrade<br />
sed -i 's/8000/8041/g' /usr/bin/gnocchi-api<br />
# Correct error in gnocchi-api startup script<br />
sed -i -e 's/exec $DAEMON $DAEMON_ARGS/exec $DAEMON -- $DAEMON_ARGS/' /etc/init.d/gnocchi-api<br />
systemctl enable gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
systemctl start gnocchi-api.service gnocchi-metricd.service gnocchi-statsd.service<br />
<br />
ceilometer-upgrade --skip-metering-database<br />
service ceilometer-agent-central restart<br />
service ceilometer-agent-notification restart<br />
service ceilometer-collector restart<br />
<br />
# Enable Glance service meters<br />
crudini --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
crudini --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xxxx@controller<br />
crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
<br />
service glance-registry restart<br />
service glance-api restart<br />
<br />
# Enable Neutron service meters<br />
crudini --set /etc/neutron/neutron.conf oslo_messaging_notifications driver messagingv2<br />
service neutron-server restart<br />
<br />
# Enable Heat service meters<br />
crudini --set /etc/heat/heat.conf oslo_messaging_notifications driver messagingv2<br />
service heat-api restart<br />
service heat-api-cfn restart<br />
service heat-engine restart<br />
<br />
#crudini --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
#crudini --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack<br />
#crudini --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password xxxx<br />
<br />
<br />
</exec><br />
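<br />
<!-- Added sketch (not part of the original scenario): Telemetry sanity check; "check-telemetry" is a hypothetical seq name. Resources and metrics appear once the agents have published their first samples. --><br />
<exec seq="check-telemetry" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
gnocchi resource list<br />
gnocchi metric list<br />
</exec><br />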
<br />
<br />
<exec seq="load-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create flavors if not created<br />
openstack flavor show m1.nano >/dev/null 2>&amp;1 || openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano<br />
openstack flavor show m1.tiny >/dev/null 2>&amp;1 || openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny<br />
openstack flavor show m1.smaller >/dev/null 2>&amp;1 || openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 3 m1.smaller<br />
<br />
# Cirros image<br />
#wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/cirros-0.3.4-x86_64-disk-vnx.qcow2<br />
glance image-create --name "cirros-0.3.4-x86_64-vnx" --file /tmp/images/cirros-0.3.4-x86_64-disk-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/cirros-0.3.4-x86_64-disk*.qcow2<br />
<br />
# Ubuntu image (trusty)<br />
#wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2<br />
#glance image-create --name "trusty-server-cloudimg-amd64-vnx" --file /tmp/images/trusty-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/trusty-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# Ubuntu image (xenial)<br />
wget -P /tmp/images http://138.4.7.228/download/vnx/filesystems/ostack-images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2<br />
glance image-create --name "xenial-server-cloudimg-amd64-vnx" --file /tmp/images/xenial-server-cloudimg-amd64-disk1-vnx.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
rm /tmp/images/xenial-server-cloudimg-amd64-disk1*.qcow2<br />
<br />
# CentOS image (commented out)<br />
#wget -P /tmp/images http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
#glance image-create --name "CentOS-7-x86_64" --file /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress<br />
#rm /tmp/images/CentOS-7-x86_64-GenericCloud.qcow2<br />
</exec><br />
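<br />
<!-- Added sketch (not part of the original scenario): confirm that the flavors and images loaded above are registered; "check-img" is a hypothetical seq name. --><br />
<exec seq="check-img" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack flavor list<br />
openstack image list<br />
</exec><br />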
<br />
<exec seq="create-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create internal network<br />
#neutron net-create net0<br />
openstack network create net0<br />
#neutron subnet-create net0 10.1.1.0/24 --name subnet0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network net0 --gateway 10.1.1.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.8,end=10.1.1.100 subnet0<br />
<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm1 > /root/keys/vm1<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm1 --nic net-id=net0 --key-name vm1<br />
<br />
# Create external network<br />
#neutron net-create ExtNet --provider:physical_network provider --provider:network_type flat --router:external --shared<br />
openstack network create --share --external --provider-physical-network provider --provider-network-type flat ExtNet<br />
#neutron subnet-create --name ExtSubnet --allocation-pool start=10.0.10.100,end=10.0.10.200 --dns-nameserver 10.0.10.1 --gateway 10.0.10.1 ExtNet 10.0.10.0/24<br />
openstack subnet create --network ExtNet --gateway 10.0.10.1 --dns-nameserver 10.0.10.1 --subnet-range 10.0.10.0/24 --allocation-pool start=10.0.10.100,end=10.0.10.200 ExtSubNet<br />
#neutron router-create r0<br />
openstack router create r0<br />
#neutron router-gateway-set r0 ExtNet<br />
openstack router set r0 --external-gateway ExtNet<br />
#neutron router-interface-add r0 subnet0<br />
openstack router add subnet r0 subnet0<br />
<br />
<br />
# Assign floating IP address to vm1<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm1<br />
openstack server add floating ip vm1 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
<br />
</exec><br />
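<br />
<!-- Added sketch (not part of the original scenario): check the demo scenario created above; "check-demo-scenario" is a hypothetical seq name. --><br />
<exec seq="check-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
openstack server list<br />
openstack floating ip list<br />
# Once vm1 reaches ACTIVE state, its floating IP (see the server list above) should answer ping from the host<br />
</exec><br />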
<br />
<exec seq="create-demo-vm2" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm2 > /root/keys/vm2<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm2 --nic net-id=net0 --key-name vm2<br />
# Assign floating IP address to vm2<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm2<br />
openstack server add floating ip vm2 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-demo-vm3" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
# Create virtual machine<br />
mkdir -p /root/keys<br />
openstack keypair create vm3 > /root/keys/vm3<br />
openstack server create --flavor m1.smaller --image xenial-server-cloudimg-amd64-vnx vm3 --nic net-id=net0 --key-name vm3<br />
# Assign floating IP address to vm3<br />
#openstack ip floating add $( openstack ip floating create ExtNet -c ip -f value ) vm3<br />
openstack server add floating ip vm3 $( openstack floating ip create ExtNet -c floating_ip_address -f value )<br />
</exec><br />
<br />
<exec seq="create-vlan-demo-scenario" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
<br />
# Create vlan based networks and subnetworks<br />
#neutron net-create vlan1000 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1000<br />
#neutron net-create vlan1001 --shared --provider:physical_network vlan --provider:network_type vlan --provider:segmentation_id 1001<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1000 vlan1000<br />
openstack network create --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 1001 vlan1001<br />
#neutron subnet-create vlan1000 10.1.2.0/24 --name vlan1000-subnet --allocation-pool start=10.1.2.2,end=10.1.2.99 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8<br />
#neutron subnet-create vlan1001 10.1.3.0/24 --name vlan1001-subnet --allocation-pool start=10.1.3.2,end=10.1.3.99 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8<br />
openstack subnet create --network vlan1000 --gateway 10.1.2.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.2.0/24 --allocation-pool start=10.1.2.2,end=10.1.2.99 subvlan1000<br />
openstack subnet create --network vlan1001 --gateway 10.1.3.1 --dns-nameserver 8.8.8.8 --subnet-range 10.1.3.0/24 --allocation-pool start=10.1.3.2,end=10.1.3.99 subvlan1001<br />
<br />
<br />
# Create virtual machine<br />
mkdir -p tmp<br />
openstack keypair create vm3 > tmp/vm3<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm3 --nic net-id=vlan1000 --key-name vm3<br />
openstack keypair create vm4 > tmp/vm4<br />
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-vnx vm4 --nic net-id=vlan1001 --key-name vm4<br />
<br />
<br />
# Create security group rules to allow ICMP, SSH and WWW access<br />
openstack security group rule create --proto icmp --dst-port 0 default<br />
openstack security group rule create --proto tcp --dst-port 80 default<br />
openstack security group rule create --proto tcp --dst-port 22 default<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
source /root/bin/admin-openrc.sh<br />
echo "--"<br />
echo "-- Keystone (identity)"<br />
echo "--"<br />
echo "Command: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue"<br />
openstack --os-auth-url http://controller:35357/v3 \<br />
--os-project-domain-name default --os-user-domain-name default \<br />
--os-project-name admin --os-username admin token issue<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Glance (images)"<br />
echo "--"<br />
echo "Command: openstack image list"<br />
openstack image list<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Nova (compute)"<br />
echo "--"<br />
echo "Command: openstack compute service list"<br />
openstack compute service list<br />
echo "Command: openstack hypervisor service list"<br />
openstack hypervisor service list<br />
echo "Command: openstack catalog list"<br />
openstack catalog list<br />
echo "Command: nova-status upgrade check"<br />
nova-status upgrade check<br />
</exec><br />
<br />
<exec seq="verify" type="verbatim"><br />
echo "--"<br />
echo "-- Neutron (network)"<br />
echo "--"<br />
echo "Command: openstack extension list --network"<br />
openstack extension list --network<br />
echo "Command: openstack network agent list"<br />
openstack network agent list<br />
echo "Command: openstack security group list"<br />
openstack security group list<br />
echo "Command: openstack security group rule list"<br />
openstack security group rule list<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="network" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-network</filesystem><br />
<!--vm name="network" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-network</filesystem--><br />
<mem>1G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.21/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.21/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="4" net="ExtNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<forwarding type="ip" /><br />
<forwarding type="ipv6" /><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts<br />
rm /root/hosts<br />
</exec><br />
<exec seq="on_boot" type="verbatim"><br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --><br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<filetree seq="on_boot" root="/root/">conf/network/bin</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
chmod +x /root/bin/*<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/neutron.conf</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/metadata_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/plugins/ml2/">conf/network/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/l3_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dhcp_agent.ini</filetree><br />
<filetree seq="step52" root="/etc/neutron/">conf/network/neutron/dnsmasq-neutron.conf</filetree><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/fwaas_driver.ini</filetree--><br />
<!--filetree seq="step52" root="/etc/neutron/">conf/network/neutron/lbaas_agent.ini</filetree--><br />
<exec seq="step52" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
#ovs-vsctl add-br br-ex<br />
# Create the provider bridge before attaching the external interface (assumption: br-provider is not pre-created in the rootfs image)<br />
ovs-vsctl --may-exist add-br br-provider<br />
ovs-vsctl --may-exist add-port br-provider eth4<br />
<br />
service neutron-lbaasv2-agent restart<br />
service openvswitch-switch restart<br />
<br />
service neutron-openvswitch-agent restart<br />
#service neutron-linuxbridge-agent restart<br />
service neutron-dhcp-agent restart<br />
service neutron-metadata-agent restart<br />
service neutron-l3-agent restart<br />
<br />
rm -f /var/lib/neutron/neutron.sqlite<br />
</exec><br />
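<br />
<!-- Added sketch (not part of the original scenario): check the Open vSwitch bridges created above; "check-ovs" is a hypothetical seq name. --><br />
<exec seq="check-ovs" type="verbatim"><br />
ovs-vsctl list-br<br />
ovs-vsctl show<br />
</exec><br />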
<br />
</vm><br />
<br />
<br />
<vm name="compute1" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute1" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.31/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.31/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute1/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute1/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute1/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!--filetree seq="step92" root="/usr/local/etc/tacker/">conf/controller/tacker/tacker.conf</filetree><br />
<exec seq="step92" type="verbatim"><br />
</exec--><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<vm name="compute2" type="lxc" arch="x86_64"><br />
<filesystem type="cow">filesystems/rootfs_lxc_ubuntu64-ostack-compute</filesystem><br />
<!--vm name="compute2" type="libvirt" subtype="kvm" os="linux" exec_mode="sdisk" arch="x86_64" vcpu="2"--><br />
<!--filesystem type="cow">filesystems/rootfs_kvm_ubuntu64-ostack-compute</filesystem!--><br />
<mem>2G</mem><br />
<if id="1" net="MgmtNet"><br />
<ipv4>10.0.0.32/24</ipv4><br />
</if><br />
<if id="2" net="TunnNet"><br />
<ipv4>10.0.1.32/24</ipv4><br />
</if><br />
<if id="3" net="VlanNet"><br />
</if><br />
<if id="9" net="virbr0"><br />
<ipv4>dhcp</ipv4><br />
</if><br />
<br />
<!-- Copy /etc/hosts file --><br />
<filetree seq="on_boot" root="/root/">conf/hosts</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
cat /root/hosts >> /etc/hosts;<br />
rm /root/hosts;<br />
# Create /dev/net/tun device <br />
#mkdir -p /dev/net/<br />
#mknod -m 666 /dev/net/tun c 10 200<br />
# Change MgmtNet and TunnNet interfaces MTU<br />
ifconfig eth1 mtu 1450<br />
sed -i -e '/iface eth1 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth2 mtu 1450<br />
sed -i -e '/iface eth2 inet static/a \ mtu 1450' /etc/network/interfaces<br />
ifconfig eth3 mtu 1450<br />
sed -i -e '/iface eth3 inet static/a \ mtu 1450' /etc/network/interfaces<br />
</exec><br />
<br />
<!-- Copy ntp config and restart service --><br />
<!-- Note: not used because ntp cannot be used inside a container. Clocks are supposed to be synchronized<br />
between the vms/containers and the host --> <br />
<!--filetree seq="on_boot" root="/etc/chrony/chrony.conf">conf/ntp/chrony-others.conf</filetree><br />
<exec seq="on_boot" type="verbatim"><br />
service chrony restart<br />
</exec--><br />
<br />
<!-- STEP 42: Compute service (Nova) --><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova.conf</filetree><br />
<filetree seq="step42" root="/etc/nova/">conf/compute2/nova/nova-compute.conf</filetree><br />
<exec seq="step42" type="verbatim"><br />
service nova-compute restart<br />
#rm -f /var/lib/nova/nova.sqlite<br />
</exec><br />
<br />
<!-- STEP 5: Network service (Neutron with Option 2: Self-service networks) --><br />
<filetree seq="step53" root="/etc/neutron/">conf/compute2/neutron/neutron.conf</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/linuxbridge_agent.ini</filetree--><br />
<filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/openvswitch_agent.ini</filetree><br />
<!--filetree seq="step53" root="/etc/neutron/plugins/ml2/">conf/compute2/neutron/ml2_conf.ini</filetree!--><br />
<exec seq="step53" type="verbatim"><br />
ovs-vsctl add-br br-vlan<br />
ovs-vsctl add-port br-vlan eth3<br />
service openvswitch-switch restart<br />
service nova-compute restart<br />
service neutron-openvswitch-agent restart<br />
</exec><br />
<br />
<!-- STEP 10: Ceilometer service --><br />
<exec seq="step101" type="verbatim"><br />
export DEBIAN_FRONTEND=noninteractive<br />
apt-get -y install ceilometer-agent-compute<br />
</exec><br />
<filetree seq="step102" root="/etc/ceilometer/">conf/compute2/ceilometer/ceilometer.conf</filetree><br />
<exec seq="step102" type="verbatim"><br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit True<br />
crudini --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour<br />
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state<br />
crudini --set /etc/nova/nova.conf oslo_messaging_notifications driver messagingv2<br />
service ceilometer-agent-compute restart<br />
service nova-compute restart<br />
</exec><br />
<br />
</vm><br />
<br />
<br />
<host><br />
<hostif net="ExtNet"><br />
<ipv4>10.0.10.1/24</ipv4><br />
</hostif><br />
<hostif net="MgmtNet"><br />
<ipv4>10.0.0.1/24</ipv4><br />
</hostif><br />
<exec seq="step00" type="verbatim"><br />
echo "--\n-- Waiting for all VMs to be ssh ready...\n--"<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
# Wait till ssh is accessible in all VMs<br />
while ! $( nc -z controller 22 ); do sleep 1; done<br />
while ! $( nc -z network 22 ); do sleep 1; done<br />
while ! $( nc -z compute1 22 ); do sleep 1; done<br />
while ! $( nc -z compute2 22 ); do sleep 1; done<br />
</exec><br />
<exec seq="step00" type="verbatim"><br />
echo "-- ...OK\n--"<br />
</exec><br />
</host><br />
<br />
</vnx></div>
David