Ediv tunnel cluster


EDIV WAN Cluster Tutorial

Authors:
Francisco José Martín Moreno (fjmartin at dit.upm.es)
Miguel Ferrer Cámara (mferrer at dit.upm.es)
Fermín Galán Márquez (galan at dit.upm.es)
version 0.1, Oct 23rd, 2008

[Figure: WAN distributed cluster topology]

OpenVPN configuration

Let's assume the topology shown in the previous image. This example shows how to configure an OpenVPN tunnel between the server host 'kofy' and the client host 'zermat'. Repeat the necessary steps to configure tunnels between all the hosts forming your cluster. In our example we configured the tunnels in a ring topology.

Execute steps 1 to 11 on the server and step 12 on the client.

  1. Install OpenVPN:

     apt-get install openvpn

  2. Copy the example scripts provided by OpenVPN to /etc/openvpn:

     cp -R /usr/share/doc/openvpn/examples/ /etc/openvpn/

  3. Go to the directory /etc/openvpn/examples/easy-rsa/2.0/ (from now on we assume you are working from that directory) and edit the vars file to set the appropriate values. We use:

     export KEY_SIZE=1024
     # These are the default values for fields
     # which will be placed in the certificate.
     # Don't leave any of these fields blank.
     export KEY_COUNTRY=ES
     export KEY_PROVINCE=MADRID
     export KEY_CITY=MADRID
     export KEY_ORG="DIT-UPM"
     export KEY_EMAIL="someone@dit.upm.es"

     Use appropriate values for your country and organization.

  4. Start the server configuration by creating the certificate authority:

     . ./vars
     ./clean-all
     ./build-ca

  5. Create the server key:

     ./build-key-server kofy

  6. Generate the Diffie-Hellman parameters:

     ./build-dh

  7. Generate the client OpenVPN keys. In this example we generate a key for the client host 'zermat':

     ./build-key client_zermat

     Repeat this step to generate keys for all the clients you are going to use.

  8. Rename the CA files so you know which server they belong to:

     mv keys/ca.crt keys/ca_kofy.crt
     mv keys/ca.key keys/ca_kofy.key

  9. Create the /etc/openvpn/server-keys folder and put the server and client keys inside:

     mkdir -p /etc/openvpn/server-keys
     cp /etc/openvpn/examples/easy-rsa/2.0/keys/* /etc/openvpn/server-keys

  10. Configure the OpenVPN server by creating the /etc/openvpn/server.conf file:

     management localhost 7505
     port 1195
     proto udp
     dev tap1
     #client-to-client
     ca /etc/openvpn/server-keys/ca_kofy.crt
     cert /etc/openvpn/server-keys/kofy.crt
     key /etc/openvpn/server-keys/kofy.key
     dh /etc/openvpn/server-keys/dh1024.pem
     mode server
     tls-server
     keepalive 10 120
     comp-lzo
     persist-key
     persist-tun
     status openvpn-status.log
     verb 4

     IMPORTANT NOTE: Notice that each host acts as both an OpenVPN client and an OpenVPN server in the assumed tunnel ring topology. To avoid conflicts, we always use the tap1 virtual network device and management port 7505 for OpenVPN servers; the tap0 virtual network device and management port 7504 are used for OpenVPN clients.

  11. Create the folder /etc/openvpn/client-keys on the client host 'zermat' and copy the appropriate client keys inside (run the scp command from the keys directory on the server):

     ssh root@zermat mkdir -p /etc/openvpn/client-keys
     scp ca_kofy.crt client_zermat.crt client_zermat.key root@zermat:/etc/openvpn/client-keys

  12. Configure the client 'zermat' by creating the file /etc/openvpn/client.conf with the following content:

     management localhost 7504
     client
     tls-client
     #auth-user-pass
     dev tap0
     proto udp
     remote 138.4.7.197 1195  # This is kofy's IP and port; change it
                              # to reflect your OpenVPN server and port
     resolv-retry infinite
     nobind
     persist-key
     persist-tun
     ca /etc/openvpn/client-keys/ca_kofy.crt
     cert /etc/openvpn/client-keys/client_zermat.crt
     key /etc/openvpn/client-keys/client_zermat.key
     comp-lzo
     verb 3

     The IMPORTANT NOTE about device and port assignments also applies here.
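The client-key steps have to be repeated for every client host in the cluster. They can be sketched as a loop, run on the server from the /etc/openvpn/examples/easy-rsa/2.0/ directory after the CA and server key exist. This is a hypothetical helper, not part of the original tutorial: the host names and the DRY_RUN guard are illustrative.

```shell
#!/bin/sh
# distribute_keys.sh -- sketch generalizing the per-client steps: build
# one key per client host and copy it there, following the naming
# conventions above (client_<host>, ca_<server>.crt).
SERVER=kofy
CLIENTS="zermat centro"   # adjust to the hosts in your cluster
DRY_RUN=1                 # set to 0 to actually run the commands

run() {
    # Print the command in dry-run mode, execute it otherwise.
    if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

for host in $CLIENTS; do
    key="client_${host}"
    run ./build-key "$key"
    run ssh "root@${host}" mkdir -p /etc/openvpn/client-keys
    run scp "keys/ca_${SERVER}.crt" "keys/${key}.crt" "keys/${key}.key" \
            "root@${host}:/etc/openvpn/client-keys"
done
```

With DRY_RUN=1 the script only prints the commands it would issue, which makes it easy to review before touching the remote hosts.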

Network interfaces configuration and OpenVPN start

Execute the following commands in all the hosts forming the cluster:

openvpn --mktun --dev tap0 
ifconfig tap0 0.0.0.0 promisc up 
brctl addbr vnuml-br
brctl stp vnuml-br on 
brctl addif vnuml-br tap0 
ifconfig vnuml-br 0.0.0.0 promisc up 
openvpn --mktun --dev tap1 
ifconfig tap1 0.0.0.0 promisc up 
brctl addif vnuml-br tap1 

Brief explanation: the previous commands create two virtual network devices, tap0 and tap1, and attach them to the virtual bridge vnuml-br. The vnuml-br bridge will be the interface used by EDIV, so all simulation traffic is carried through the tunnels just created. This allows the VLANs created by EDIV to operate in a distributed cluster without any networking equipment configuration.
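After running the commands above it is worth verifying that the bridge actually carries both tap devices. A minimal sketch; the bridge_has_taps helper is hypothetical and only parses 'brctl show' output, so on a live host you would feed it "$(brctl show vnuml-br)":

```shell
#!/bin/sh
# Succeeds if the given 'brctl show' output lists both tap0 and tap1.
# Live check: bridge_has_taps "$(brctl show vnuml-br)" && echo OK
bridge_has_taps() {
    printf '%s\n' "$1" | grep -qw tap0 &&
    printf '%s\n' "$1" | grep -qw tap1
}

# Demonstration on canned 'brctl show' output:
sample='bridge name     bridge id               STP enabled     interfaces
vnuml-br                8000.0a1b2c3d4e5f       yes             tap0
                                                                tap1'
bridge_has_taps "$sample" && echo "tap0 and tap1 attached to vnuml-br"
```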

Then launch OpenVPN with the command

/etc/init.d/openvpn start

or launch the client and server individually with the commands:

openvpn /etc/openvpn/client.conf
openvpn /etc/openvpn/server.conf

NOTE: To check whether the tunnels are working properly, you can telnet to port 7504 on the OpenVPN client hosts or port 7505 on the OpenVPN server hosts and then issue the 'status' command. On the server, the names of the client hosts should appear:

root@kofy:/home/miguel# telnet localhost 7505
Trying 127.0.0.1...
Connected to localhost.dit.upm.es.
Escape character is '^]'.
>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
status
OpenVPN CLIENT LIST
Updated,Thu Oct 23 12:27:52 2008
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
zermat,138.4.7.132:52018,40586,302925,Thu Oct 23 10:41:26 2008
ROUTING TABLE
Virtual Address,Common Name,Real Address,Last Ref
00:ff:18:a4:e0:c3,zermat,138.4.7.132:52018,Thu Oct 23 10:41:28 2008
GLOBAL STATS
Max bcast/mcast queue length,1
END
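The interactive telnet session above can also be scripted. The following sketch defines a hypothetical connected_clients helper that extracts the client names from a management 'status' dump; on a live server host you would pipe the management session into it, e.g. printf 'status\nquit\n' | nc -q 1 localhost 7505 | connected_clients (assuming a netcat that supports -q):

```shell
#!/bin/sh
# Extract connected client names from OpenVPN management 'status' output.
connected_clients() {
    # Client lines sit between the "Common Name,Real Address,..." header
    # and the "ROUTING TABLE" marker; print their first CSV field.
    awk -F, '/^ROUTING TABLE/ { in_list = 0 }
             in_list          { print $1 }
             /^Common Name,Real Address/ { in_list = 1 }'
}

# Demonstration on the canned dump from the session above:
sample='OpenVPN CLIENT LIST
Updated,Thu Oct 23 12:27:52 2008
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
zermat,138.4.7.132:52018,40586,302925,Thu Oct 23 10:41:26 2008
ROUTING TABLE
Virtual Address,Common Name,Real Address,Last Ref
00:ff:18:a4:e0:c3,zermat,138.4.7.132:52018,Thu Oct 23 10:41:28 2008
GLOBAL STATS
END'
printf '%s\n' "$sample" | connected_clients    # prints: zermat
```

This is handy for a cron job or monitoring hook that alerts when an expected cluster host disappears from the list.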

EDIV Configuration

Once all the tunnels are working, modify the /etc/ediv/cluster.conf file on the EDIV controller host. Change the ifname variable in all the cluster host sections to use the vnuml-br interface that you created in the previous steps.

[zermat]
mem = 2048
cpu = 100
max_vhost = 0
ifname = vnuml-br

[kofy]
mem = 1024
cpu = 85
max_vhost = 0
ifname = vnuml-br

[centro]
mem = 2048 
cpu = 85
max_vhost = 0
ifname = vnuml-br

Now EDIV is ready to run simulations in the distributed cluster.
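If cluster.conf has many host sections, the ifname lines can be rewritten in one go. A sketch, assuming the simple "key = value" format shown above; the live command (which keeps a .bak backup of the file) is in the comment, and the demonstration below works on a canned fragment instead of the real file:

```shell
#!/bin/sh
# On the EDIV controller you would run:
#   sed -i.bak 's/^ifname *=.*/ifname = vnuml-br/' /etc/ediv/cluster.conf
# Demonstration on a canned fragment of cluster.conf:
sample='[zermat]
ifname = eth0
[kofy]
ifname = eth1'
printf '%s\n' "$sample" | sed 's/^ifname *=.*/ifname = vnuml-br/'
```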

Automatic network interface creation during hosts boot

When you power off the hosts forming the cluster, the tap0 and tap1 virtual network devices may be lost. To create them automatically each time a cluster host boots, create the file /etc/init.d/ediv_tunnel_devices on each host of the cluster with the following content:

#!/bin/sh -e 

case "$1" in 
start) 

        openvpn --mktun --dev tap0 
        ifconfig tap0 0.0.0.0 promisc up 
        brctl addbr vnuml-br 
        brctl stp vnuml-br on 
        brctl addif vnuml-br tap0 
        ifconfig vnuml-br 0.0.0.0 promisc up 
        openvpn --mktun --dev tap1 
        ifconfig tap1 0.0.0.0 promisc up 
        brctl addif vnuml-br tap1 

  ;; 
stop) 

        ifconfig tap0 down 
        ifconfig tap1 down 
        brctl delif vnuml-br tap0 
        brctl delif vnuml-br tap1 
        ifconfig vnuml-br down 
        brctl delbr vnuml-br 

  ;; 
restart) 
  shift 
  $0 stop ${@} 
  sleep 1 
  $0 start ${@} 
  ;; 
*) 
  echo "Usage: $0 {start|stop|restart}" >&2 
  exit 1 
  ;; 
esac 

exit 0 

This script file must have execute permissions; you can set them with:

chmod 755 /etc/init.d/ediv_tunnel_devices

And then create the following symbolic links:

ln -s /etc/init.d/ediv_tunnel_devices /etc/rc2.d/S15create_ediv_tunnel_devices
ln -s /etc/init.d/ediv_tunnel_devices /etc/rc6.d/K81destroy_ediv_tunnel_devices
ln -s /etc/init.d/ediv_tunnel_devices /etc/rc0.d/K81destroy_ediv_tunnel_devices

NOTE: the previous commands assume you are using a Debian-style Linux distribution (like Ubuntu), where runlevel 2 is normal operation and runlevels 0 and 6 are the shutdown and reboot modes.
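On such Debian-style systems the same links can also be managed with update-rc.d instead of creating them by hand. A sketch using the legacy explicit-runlevel syntax (start priority 15 in runlevel 2, stop priority 81 in runlevels 0 and 6, matching the symbolic links above); run it on each cluster host:

```shell
update-rc.d ediv_tunnel_devices start 15 2 . stop 81 0 6 .
```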