FAQ

From VNUML-WIKI
Revision as of 03:02, 11 February 2007 by Admin (talk | contribs)

VNUML Frequently Asked Questions

Authors:
David Fernández (david at dit.upm.es)
Fermín Galán (galan at dit.upm.es)
version 1.7, June 4th, 2004


Writing the VNUML specification

How can I check if my VNUML XML specification is correct?

Whenever the vnuml tool is executed, the specification is checked, and you will get error messages if the specification is not correct. Alternatively, you can check your specification using the xmlwf command that comes with the expat distribution (needed to run VNUML). The xmllint command (which comes with the libxml package) can also be used for the same task.
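A quick well-formedness check can be scripted before running the tool; a minimal sketch, assuming xmllint (from libxml) is installed (check_spec is a hypothetical helper name, not part of VNUML):

```shell
# Sketch: check a VNUML specification for XML well-formedness before
# running the vnuml tool. Assumes xmllint is installed; errors from
# xmllint are suppressed and summarized as a single status line.
check_spec() {
    if xmllint --noout "$1" 2>/dev/null; then
        echo "$1: well-formed"
    else
        echo "$1: NOT well-formed"
    fi
}

# Usage: check_spec simulation.xml
```

Note that this only checks well-formedness; semantic checks (valid element names, consistent references) are still done by the vnuml tool itself.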


Limitations

What is the maximum number of virtual networks?

There are two hard limits on the number of simultaneous virtual networks (i.e., how many [../current/reference/index.html#net <net>] elements vnumlparser.pl can manage):

  • 64 networks maximum, if using a host kernel version < 2.6.5
  • 32 networks maximum, if using a bridge-utils version < 0.9.7

So, if you want to use as many virtual networks as your physical host can cope with, use at least bridge-utils 0.9.7 (available only as a tarball at http://sourceforge.net/projects/bridge/ at the time of this writing) and Linux kernel 2.6.5.
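The kernel side of this check can be scripted; a minimal sketch, where net_limit_for is a hypothetical helper (not part of VNUML) that parses a kernel version string and reports which side of the 2.6.5 boundary it falls on (the bridge-utils version must still be checked separately):

```shell
# Sketch: report whether a given kernel version is subject to the
# 64-network hard limit (versions below 2.6.5).
net_limit_for() {
    v=$(echo "$1" | cut -d- -f1)          # strip suffix: "2.6.9-bb4" -> "2.6.9"
    major=$(echo "$v" | cut -d. -f1)
    minor=$(echo "$v" | cut -d. -f2)
    patch=$(echo "$v" | cut -d. -f3)
    if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -gt 6 ]; } || \
       { [ "$major" -eq 2 ] && [ "$minor" -eq 6 ] && [ "${patch:-0}" -ge 5 ]; }; then
        echo "no 64-network kernel limit"
    else
        echo "at most 64 virtual networks"
    fi
}

# Check the running host kernel:
net_limit_for "$(uname -r)"
```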


Linux Kernels for VNUML

How can I know which kernel options were used when compiling a UML Linux kernel?

Just execute the kernel with the "--showconfig" option. For example, to know whether a UML Linux kernel has IPv6 support, just type:

> linux --showconfig | grep IPV6



About root filesystems

I have changed the filesystem used by a virtual machine, but when I start the simulation it seems to use the old one.

If you are using "COW" filesystems, as recommended, you have to delete the old COW file before starting the simulation with the new filesystem. The reason is that COW files store a reference to the root filesystem they are derived from. To delete a COW file you can delete the file directly (normally located in /var/vnuml/sim-name/vm-name_cow_fs) or just use the "purge" option ("-P") of vnuml.
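The manual deletion can be wrapped in a small script; a minimal sketch, where purge_cow is a hypothetical helper (not a VNUML command) and the path follows the default /var/vnuml layout described above:

```shell
# Sketch: remove a stale COW file before restarting a simulation
# with a new root filesystem.
purge_cow() {
    cow_file="$1"
    if [ -f "$cow_file" ]; then
        rm "$cow_file" && echo "removed $cow_file"
    else
        echo "no COW file at $cow_file"
    fi
}

# Usage (default location, substitute your simulation and VM names):
# purge_cow /var/vnuml/sim-name/vm-name_cow_fs
```

Using "vnuml -P" is still preferable when available, since it also resets other simulation-related components.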


I am using the root_fs_tutorial root filesystem and I see that the Apache web server (or some other service) is not automatically started when the virtual machine boots. Why? How can I make it start at boot?

Most of the services are not started at boot, in order to speed up the virtual machine boot process during scenario start-up (-t option). It is recommended to start the services you need using "<start>" commands inside your VNUML specification. For example, to start apache2, you can include the following command:

<start type="verbatim">/usr/sbin/apache2ctl start</start>

Alternatively, you can use the "update-rc.d" command to restore the scripts that start apache2 during the boot process. Just start the virtual machine (using, for example, the update-fs.xml example), enter it through the console or using ssh, and type:

update-rc.d apache2 defaults 


Starting the simulation (-t option)

The simulation does not start correctly. Some (or all) of the virtual machines do not start and the program keeps saying "xxx sshd is not ready..."

There are several causes that can prevent a simulation from starting correctly. If that happens you can:

  • "Purge" the simulation, in case you are using COW filesystems as recommended. Sometimes the COW filesystems get corrupted and prevent some virtual machines from starting. Just execute vnuml with the "-P" option to delete the COW files and reset other simulation-related components.
  • Check that the filesystem fulfills the requirements listed in the [../current/user/index.html VNUML User Manual]. It is strongly recommended to test the kernel and the root filesystem you are going to use in a simple simulation scenario (with just one virtual machine; see Simple example) before trying a more complex one.
  • Have a look at the messages shown on the virtual machine consoles. To do that, you can:
    • Start the simulation from an X terminal, using the following statements in every virtual machine specification section:

      <boot>
      <con0>xterm</con0>
      </boot>

      Each virtual machine console will be opened in a different xterm.

    • Redirect the virtual machine consoles to pseudo-ttys (pts), using the following statements in every virtual machine specification section:

      <boot>
      <con0>pts</con0>
      </boot>

      and start the simulation using "vnuml -t name.xml -e screen.cfg", then access the consoles using "screen -c screen.cfg".

  • Check the host-to-virtual-machine management interfaces. In the host, there should be several interfaces named vm-eth0 (where vm is the name of each virtual machine in the simulation). Each one is connected to the eth0 interface of the corresponding virtual machine. You can check these interfaces using ping:
  • From the host:

    ping (IP address on the eth0 interface in the virtual machine)
    

    From the virtual machine:

    ping (IP address on the vm-eth0 interface in the host)
    

    A conflict between IP addresses in the host environment and those used by the management interfaces is a common cause of problems. To avoid conflicts, use a proper <offset> value.

  • Check that the sshd daemon is running in the virtual machines. For example, try to open a telnet connection to port 22 from the host to the virtual machine.
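The last check can also be scripted; a minimal sketch, assuming the nc (netcat) utility is available on the host (ssh_ready is a hypothetical helper, and the address shown is just an example management address):

```shell
# Sketch: probe a virtual machine's sshd from the host by attempting
# a TCP connection to port 22, with a 3-second timeout.
ssh_ready() {
    host="$1"
    if nc -z -w 3 "$host" 22 2>/dev/null; then
        echo "$host: sshd reachable"
    else
        echo "$host: sshd NOT reachable"
    fi
}

# Usage: ssh_ready 192.168.0.1   # example management address of a VM
```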

VNUML over different Linux distributions

I've tested VNUML on SuSE 9.1 and it works, but when I release the simulation, I lose network connectivity (the eth0 interface is unconfigured).

The problem is caused by something strange in the original SuSE 9.1 kernel. It seems to be solved in the newer "kernel of the day" versions, although there is no official kernel/bugfix for it yet.

There is an easy workaround: as the problem only happens when "eth0" is used in virtual interface names, you can easily change the way VNUML names them. Just edit the vnumlparser.pl file and change every occurrence of "-eth0" to "-ethX".
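The edit can be done in one sed command; a minimal sketch, assuming GNU sed (for -i) and the usual /usr/local/bin install path (suse_patch is a hypothetical helper name):

```shell
# Sketch: apply the interface-renaming workaround with sed instead of
# editing vnumlparser.pl by hand. Keep a backup of the file first.
suse_patch() {
    sed -i 's/-eth0/-ethX/g' "$1"
}

# Usage: suse_patch /usr/local/bin/vnumlparser.pl
```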

Since vnumlparser.pl 1.4.0, this "SuSE patch" is applied automatically at installation time (make install), so you shouldn't have to worry about this problem.


I have problems testing VNUML on the latest Linux distributions, which include kernel versions newer than 2.6.5. The simulations never start, or they are not correctly released. What can I do?

The latest host kernel version over which VNUML (and UML) works correctly is 2.6.5. Tests made on newer host kernels have shown several problems, particularly simulations that never start or never die.

From the tests made so far, most of the problems are solved by using the latest skas patches on the host kernel and the latest "bb" patches on the guest kernel.

So, the recommended environment consists of:

  • Host kernel: 2.6.9 with skas3-v7 patch applied
  • Guest kernel: 2.6.9 with bb4 patch applied

Patches can be found on BlaisorBlade's UML web pages.

See next question for more information on this subject.


I have installed host kernel 2.6.9-skas3-v7 and I use a 2.6.9-bb4 guest kernel as you recommend, but when I release the simulations, a "zombie" process named "[linux-2.6.9-bb4-1m]" remains on the system. If I don't kill it manually, the next simulation fails. What can I do?

It seems to be a problem related to xterms. There are two known solutions:

  • Do not use "xterm" in the virtual machine consoles. Use "pts" instead:

    <boot>
    <con0>pts</con0>
    </boot>

  • Change the line in the vnumlparser.pl file that starts the virtual machines to add the following parameter:

    xterm=gnome-terminal,-t,-x

    This tells UML to use a "gnome-terminal" instead of a normal "xterm" (only valid if you have GNOME installed in your system).

    To do that, just edit the vnumlparser.pl file (normally in /usr/local/bin) and change the following line:

    $boot_line .= " uml_dir=$vnuml_dir/$simname/ umid=$name con=null";

    to this one (just add "xterm=gnome-terminal,-t,-x" at the end):

    $boot_line .= " uml_dir=$vnuml_dir/$simname/ umid=$name con=null xterm=gnome-terminal,-t,-x";
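This edit, too, can be done with a single sed command; a minimal sketch, assuming GNU sed (for -i) and matching the boot line shown above (add_gnome_xterm is a hypothetical helper name):

```shell
# Sketch: append xterm=gnome-terminal,-t,-x to the virtual machine
# boot line in vnumlparser.pl, instead of editing the file by hand.
add_gnome_xterm() {
    sed -i 's/con=null";/con=null xterm=gnome-terminal,-t,-x";/' "$1"
}

# Usage: add_gnome_xterm /usr/local/bin/vnumlparser.pl
```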