
From VNUML-WIKI
Revision as of 00:58, 17 September 2008 by Admin (talk | contribs)

EDIV Tutorial

IMPORTANT NOTE: EDIV documentation is still under construction.

Authors:
Francisco José Martín Moreno (fjmartin at dit.upm.es)
Miguel Ferrer Cámara (mferrer at dit.upm.es)
Fermín Galán (galan at dit.upm.es)
version 1.0, Jul 28th, 2008

Introduction

This section presents a brief tutorial to help people use EDIV for the first time. The tutorial has three main steps: launching a VNUML simulation in a distributed environment (the BGP VNUML example); executing commands in several virtual machines through the EDIV execution mode (-x) to configure BGP routing in the distributed scenario; and finally monitoring the cluster and the running scenario through the management scripts provided in the EDIV software package.

To follow this tutorial, a proper installation and configuration of EDIV is required; how to do this is described in the EDIV documentation ('Installation' and 'User Manual' sections). In addition, familiarity with the VNUML tool is assumed.

Previous Steps

To begin the tutorial, several files are required:

Download them, put them in the same folder (e.g., /root/) and decompress the configuration file (bgp.tar) in that folder.

Building the scenario

To launch the distributed scenario, root privileges are needed when using EDIV (in some GNU/Linux distributions, mainly Ubuntu-based ones, this is done with "sudo su").

To deploy the BGP scenario in the cluster, the following command is used (-s points to the VNUML specification file and -a selects the segmentation algorithm):

ediv_ctl.pl -t -s /root/bgp.xml -a round_robin

First, the EDIV controller assigns the virtual machines to the physical hosts of the cluster and shows the assignment to the user:

**** Calling segmentator... ****

Segmentator: Using round_robin
Segmentator: Cluster physical machines -> 3
Segmentator: Virtual machine R1 to physical host zermat.dit.upm.es
Segmentator: Virtual machine R2 to physical host kofy.dit.upm.es
Segmentator: Virtual machine R3 to physical host cuco.dit.upm.es
Segmentator: Virtual machine R4 to physical host zermat.dit.upm.es
Segmentator: Virtual machine R5 to physical host kofy.dit.upm.es
Segmentator: Virtual machine R6 to physical host cuco.dit.upm.es


The EDIV controller will continue deploying the scenario, and consoles for the virtual machines will appear on your desktop (the BGP example uses graphical consoles to access the virtual machines):

**** Checking simulation status ****

Checking R1 status
	 R1 still booting, waiting...
	 R1 still booting, waiting...
	 R1 still booting, waiting...
	 R1 running
Checking R4 status
	 R4 running
Checking R2 status
	 R2 running
Checking R5 status
	 R5 running
Checking R3 status
	 R3 running
Checking R6 status
	 R6 running

When the EDIV controller finishes launching the scenario, it shows the user how to access the virtual machines through ssh. Finally, EDIV ends after showing a 'Succesfully finished' message:

**** Creating tunnels to access VM ****

	To access VM R1 at zermat.dit.upm.es use local port 64000
	To access VM R2 at kofy.dit.upm.es use local port 64004
	To access VM R3 at cuco.dit.upm.es use local port 64001
	To access VM R4 at zermat.dit.upm.es use local port 64003
	To access VM R5 at kofy.dit.upm.es use local port 64005
	To access VM R6 at cuco.dit.upm.es use local port 64002

	Use command ssh -2 root@localhost -p <port> to access VMs
	Or ediv_console.pl console <simulation_name> <vm_name>
	Where <port> is a port number of the previous list
	The port list can be found running ediv_console.pl info

****** Succesfully finished ******

The BGP simulation is running :)

NOTE: Either the graphical terminals or ssh can be used to control the virtual machines. The graphical terminal windows can even be closed, because ssh access works during the whole simulation.

Command sequences

The BGP example provides command sequences to configure and run the BGP routing daemon in each virtual machine of the scenario.

To execute them, use the command:

ediv_ctl.pl -x start -s /root/bgp.xml 

After some time, the BGP protocol converges and connectivity can be tested as described in the BGP scenario page.
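For reference, command sequences are defined per virtual machine inside the VNUML specification (bgp.xml) using <exec> tags. The following is only a sketch of what a start sequence might look like; the daemon paths and commands are illustrative, not copied from bgp.xml:

```xml
<vm name="R1">
  <!-- ... interfaces and filesystem definition ... -->
  <!-- commands tagged with seq="start" run when 'ediv_ctl.pl -x start' is invoked -->
  <exec seq="start" type="verbatim">/usr/lib/quagga/zebra -d</exec>
  <exec seq="start" type="verbatim">/usr/lib/quagga/bgpd -d</exec>
</vm>
```

Each virtual machine in the scenario carries its own <exec> entries, so a single -x invocation configures the whole distributed topology.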

Testing the scenario

This section describes some of the scripts provided by EDIV, which let the user know what is happening in the cluster.

First, list which scenarios are running in the cluster:

ediv_query_status.pl

The previous command will display something similar to:

The simulation bgp is running at host zermat.dit.upm.es
The simulation bgp is running at host kofy.dit.upm.es
The simulation bgp is running at host cuco.dit.upm.es

Once we know that the bgp distributed scenario is running, we can list the virtual machines it has launched:

ediv_query_status.pl bgp

And the corresponding information will be displayed:

The simulation bgp has the virtual machine R1 that is in host zermat.dit.upm.es
The simulation bgp has the virtual machine R2 that is in host kofy.dit.upm.es
The simulation bgp has the virtual machine R3 that is in host cuco.dit.upm.es
The simulation bgp has the virtual machine R4 that is in host zermat.dit.upm.es
The simulation bgp has the virtual machine R5 that is in host kofy.dit.upm.es
The simulation bgp has the virtual machine R6 that is in host cuco.dit.upm.es

Now we can access any of the running virtual machines through the command:

 ediv_console.pl console bgp R1

NOTE: Just a reminder, the password to access virtual machines is 'xxxx'.

Once we are inside one of the virtual machines, we can get BGP routing information with the commands:

vtysh
sh ip bgp summary

Results similar to the following will be displayed:

BGP router identifier 10.250.0.18, local AS number 65001
RIB entries 5, using 320 bytes of memory
Peers 3, using 7524 bytes of memory

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
161.0.0.2       4 65002      15      18        0    0    0 00:12:11        2
161.0.0.6       4 65003      11      14        0    0    0 00:09:26        1
161.0.0.10      4 65004      12      14        0    0    0 00:09:28        1

Total number of neighbors 3

After returning to the cluster controller (leaving the virtual machine), we can check the status of the cluster hosts:

ediv_monitor.pl 0

Information similar to the following will be shown:

Host zermat.dit.upm.es status:

Load:   12:09:26 up 8 days,  1:37,  1 user,  load average: 0.09, 0.26, 0.33

Virtual machines running at simulation bgp_zermat.dit.upm.es:
         available vms: R1 R4


Host kofy.dit.upm.es status:

Load:   12:08:56 up  1:41,  2 users,  load average: 0.09, 0.20, 0.20

Virtual machines running at simulation bgp_kofy.dit.upm.es:
         available vms: R2 R5


Host cuco.dit.upm.es status:

Load:   12:08:57 up 29 min,  0 users,  load average: 0.00, 0.00, 0.00

Virtual machines running at simulation bgp_cuco.dit.upm.es:
         available vms: R3 R6

Purging the scenario

To purge the scenario, use the following command:

ediv_ctl.pl -P -s /root/bgp.xml

EDIV finishes after showing a 'Succesfully finished' message:

****** Succesfully finished ******

Using other algorithms

The segmentation algorithm can be changed using the -a switch. For example, if we want to use weighted round robin instead of conventional (i.e., non-weighted) round robin, then:

ediv_ctl.pl -t -s /root/bgp.xml -a weighted_round_robin

As you can observe, the assignment of virtual machines to physical hosts may now be different (if some cluster hosts are CPU loaded). To check how the scenario would be deployed, you can use the following script:

 ediv_segmentation_info.pl bgp.xml weighted_round_robin

Segmentator: Using weighted_round_robin
Segmentator: Dynamic CPU load of zermat.dit.upm.es is 0.32
Segmentator: Dynamic CPU load of kofy.dit.upm.es is 0.43
Segmentator: Dynamic CPU load of cuco.dit.upm.es is 0.00
Segmentator: Assigned 28.6666666666667% to zermat.dit.upm.es
Segmentator: Assigned 21.3333333333333% to kofy.dit.upm.es
Segmentator: Assigned 50% to cuco.dit.upm.es
Segmentator: 2 VMs assigned to zermat.dit.upm.es
Segmentator: 1 VMs assigned to kofy.dit.upm.es
Segmentator: 3 VMs assigned to cuco.dit.upm.es
Segmentator: Virtual machine R1 goes to physical host zermat.dit.upm.es
Segmentator: Virtual machine R2 goes to physical host zermat.dit.upm.es
Segmentator: Virtual machine R3 goes to physical host kofy.dit.upm.es
Segmentator: Virtual machine R4 goes to physical host cuco.dit.upm.es
Segmentator: Virtual machine R5 goes to physical host cuco.dit.upm.es
Segmentator: Virtual machine R6 goes to physical host cuco.dit.upm.es


Creating a restriction file

This is an example of a restriction file for the BGP example. For more information about this topic check the language reference.

First, we want all the virtual machines attached to the network AS1-AS4 deployed together on the same host:

<net_deploy_at net="AS1-AS4" host="kofy.dit.upm.es" />

Then, we want the virtual machine R2 deployed on a specific host:

<vm_deploy_at vm="R2" host="zermat.dit.upm.es" />

We do not want the virtual machines R2 and R5 on the same host:

<antiaffinity>
  <vm>R2</vm>
  <vm>R5</vm>
</antiaffinity>

Finally, we want R3 and R2 deployed together on the same host:

<affinity>
  <vm>R3</vm>
  <vm>R2</vm>
</affinity>

The complete restriction file is:

<deployment_restrictions>

<net_deploy_at net="AS1-AS4" host="kofy.dit.upm.es" />

<vm_deploy_at vm="R2" host="zermat.dit.upm.es" />
 
<antiaffinity>
  <vm>R2</vm>
  <vm>R5</vm>
</antiaffinity>

<affinity>
  <vm>R3</vm>
  <vm>R2</vm>
</affinity>
 
</deployment_restrictions>

Now we can launch the scenario using the restriction file with the following command:

ediv_ctl.pl -t -s /root/bgp.xml -r /root/restriction_file.xml