
Practical exercise on load balancing and reliability in server access: ASSIGNMENT

1. Objectives

The objectives of this practical are:

  • To provide TCP/IP based routing services using the routing software package Quagga.
  • To implement load balancing for the servers of an organization using ipvs and vrrp.
  • To implement servers’ health-checks and directors’ failovers using vrrp.


2. Terminology

Some of the following definitions have been copied from [6].

  • VIP: The Virtual IP is the IP address that will be accessed by all the clients. The clients only access this IP address.
  • Real server: A real server hosts the application accessed by client requests.
  • Server pool: A farm of real servers.
  • Virtual server: The access point to a server pool.

  • Virtual service: A TCP/UDP service associated with the VIP.

  • VRRP (Virtual Router Redundancy Protocol): The protocol implemented for the directors' failover.
  • Director: A machine implementing load balancing.
  • Director's failover: The situation in which a director cannot provide its service.
  • Health check: A check done to a real server to determine if a service is available.
  • VRRP Instance: A thread manipulating a VRRPv2-specific set of IP addresses.
  • MASTER state: VRRP Instance state when it is assuming the responsibility of forwarding packets sent to the IP address(es) associated with the VRRP Instance.
  • BACKUP state: VRRP Instance state when it is capable of forwarding packets in the event that the current VRRP Instance MASTER fails.
  • LVS (Linux Virtual Server): A patched Linux kernel that adds a load balancing facility.
  • IPVS (IP Virtual Server): Implements transport-layer load balancing inside the Linux kernel.
  • ipvsadm: A tool for LVS administration.
  • Apache2: A web server.


3. Background knowledge

Understanding this practical requires basic knowledge of Quagga [1] for setting up dedicated routers. For load balancing [2] purposes, the user should have some knowledge of the Linux Virtual Server (LVS) [3], IPVS [4] and ipvsadm [5]. Finally, information about the keepalived project [6] is also recommended, since it is used to implement load balancing with server health checks and director failover.

To make the practical easier to follow, the user only needs to execute generic commands. The exact commands that are executed in the virtual machines can be seen in the XML specification file. Therefore, if you would like to go through the practical more slowly, step by step, you only need to follow the commands specified in the XML file corresponding to each generic command.
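
For example, a generic command such as start groups several <exec> entries in vrrp.xml, one per virtual machine. A hypothetical fragment (the names and commands below are illustrative, not copied from the real file):

<vm name="rb1">
  ...
  <exec seq="start" type="verbatim">zebra -d</exec>
  <exec seq="start" type="verbatim">ospfd -d</exec>
</vm>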


4. Scenario description

The scenario is illustrated in figure 1. It is made of:

  • pc1: A client that requests a web page hosted on the web servers of an organization.
  • r1: The gateway for the client. It connects the client to the backbone of the scenario.
  • rb1 and rb2: The entry routers for the organization. They connect the organization to the backbone.
  • ra1 and ra2: They implement load balancing and vrrp. These are the routers where keepalived is run.
  • s1 and s2: The web servers of the organization.


Figure 1: scenario topology (Lvs Fig1.jpg)


The green zone represents the elements that belong to the organization.

The following entries have been added to the /etc/hosts file of each virtual machine of the scenario. This allows the user to type symbolic names for the interfaces instead of having to write their IP addresses.

192.168.1.1 gw
192.168.1.2 ra12
192.168.1.3 ra22
192.168.1.4 s1
192.168.1.5 s2
192.168.1.6 ra32
192.168.1.17 rb12
192.168.1.18 rb22
192.168.1.19 ra11
192.168.1.20 ra21
192.168.1.21 vip
192.168.1.22 ra31
192.168.200.1 r12
192.168.200.2 rb11
192.168.200.5 r13
192.168.200.6 rb21
192.168.2.1 r11
192.168.2.2 pc1


5. Configuring the scenario

Build the virtual machines of the scenario by starting the vrrp.xml file with the VNUML tool [7]:

cd /usr/share/vnuml/examples
vnumlparser.pl -t vrrp.xml -v -u root


Note: Depending on your Linux distribution, the vnuml directory might be under /usr/share or /usr/local/share.

This instruction boots the virtual machines and opens a shell for each of them. The scenario is built as user root, since root privileges are needed to do so. The user for the virtual machines is root and the password is xxxx.


6. Quagga

A system with Quagga installed acts as a dedicated router. We want to run OSPF between the routers in the backbone (r1, rb1 and rb2) so that they can exchange routing information. To this end, the configuration files for the zebra and ospfd daemons have been prepared accordingly. The daemons are launched with the following instruction on the host:

vnumlparser.pl -x start@vrrp.xml -v -u root


The above instruction does the following:

  • Copies the zebra and ospfd configuration scripts to the routers in the backbone (r1, rb1 and rb2).
  • Copies the keepalived configuration scripts to the LVS directors (ra1 and ra2).
  • Copies a script named request to the client (pc1), to speed up requesting a web page when testing the scenario.
  • Copies a web page named server.html to the real servers (s1 and s2).
  • Modifies the /etc/hosts file of each virtual machine to include the entries described in section 4.
  • Launches first the zebra daemon and then the ospfd daemon in the virtual machines that form the backbone of the scenario (r1, rb1 and rb2). These virtual machines thus start behaving like dedicated routers running OSPF.
  • Launches apache2 server in the organization's web servers (s1 and s2).


Because ospfd needs to acquire interface information from zebra in order to function, zebra must be running before invoking ospfd. Also, if zebra is restarted then ospfd must be too.
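
For reference, the ospfd configuration copied to a backbone router follows the usual Quagga syntax. A minimal sketch for rb1, assuming it announces the prefixes of its eth1 and eth2 interfaces (the actual files are the ones copied by the start sequence):

! illustrative ospfd.conf for rb1, not the file shipped with the scenario
hostname ospfd
password zebra
router ospf
 network 192.168.200.0/30 area 0.0.0.0
 network 192.168.1.16/28 area 0.0.0.0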

Wait about 40 seconds for OSPF to converge. Then check that rb1 has learnt the route for the client's network:

Before OSPF converges:
rb1:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.200.0   *               255.255.255.252 U     0      0        0 eth1
10.0.0.12       *               255.255.255.252 U     0      0        0 eth0
192.168.1.16    *               255.255.255.240 U     0      0        0 eth2


After OSPF converges:
rb1:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.200.0   *               255.255.255.252 U     0      0        0 eth1
192.168.200.4   r12             255.255.255.252 UG    20     0        0 eth1
10.0.0.12       *               255.255.255.252 U     0      0        0 eth0
10.0.0.8        r12             255.255.255.252 UG    20     0        0 eth1
10.0.0.16       rb22            255.255.255.252 UG    20     0        0 eth2
192.168.1.16    *               255.255.255.240 U     0      0        0 eth2
192.168.2.0     r12             255.255.255.0   UG    20     0        0 eth1


Check also that there is connectivity between pc1 and rb1:

ping rb12


  • Why isn't there connectivity from the client to the web servers?
Since routers ra1 and ra2 do not run OSPF, the servers’ network prefix is not announced to the backbone. Therefore, r1 does not know how to route packets to the servers.


7. ipvsadm

First we are going to implement load balancing using ipvsadm. The service is set up by executing on the host:

vnumlparser.pl -x ipvs_start@vrrp.xml -v -u root


This instruction does the following:

  • In the director (ra1): Sets up the IP address of the virtual server (vip, 192.168.1.21) and adds a virtual service with the round-robin scheduler [5]. Then it adds the real servers s1 and s2 to the virtual service (a sketch of the equivalent commands follows this list).
  • In the real servers (s1 and s2): Changes the default route to point to the director's interface on the servers' network (ra12). Initially, the default route of the real servers pointed to a virtual interface that will be used later in the keepalived section.
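
A minimal sketch of the kind of commands this amounts to on ra1 (the exact commands are in vrrp.xml; the interface name is an assumption):

ip addr add 192.168.1.21/32 dev eth1                  # bring up the VIP on the director
ipvsadm -A -t 192.168.1.21:80 -s rr                   # add a virtual service with the round-robin scheduler
ipvsadm -a -t 192.168.1.21:80 -r 192.168.1.5:80 -m    # add real server s2 (NAT/masquerading)
ipvsadm -a -t 192.168.1.21:80 -r 192.168.1.4:80 -m    # add real server s1 last, so the scheduler starts with it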


When IPVS is started, the following messages are shown in the director:

IPVS: Registered protocols (TCP, UDP, AH, ESP)
IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
IPVS: ipvs loaded.
IPVS: [rr] scheduler registered.


These messages are shown only the first time that IPVS is started, since everything remains set up afterwards (if you changed, for example, the scheduler, a new message indicating this would appear).

Now the virtual server is load balancing between the real servers. Make requests from the client to the example web page and you will see that each request is answered by a different server, following the round-robin scheduler.

Both servers host a web page named server.html but with different contents, so you can tell which server the page is coming from.

To make the requests faster, you can execute a script that was copied to the client:

sh request


To see the commands this script contains:

cat request


Each time a request is made, a web page is downloaded, its content is shown in the terminal and then the page is deleted, so the next request downloads the same page again without modifying its name (for more information see [9]).
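
A plausible sketch of what the script does (run cat request on pc1 to see the actual commands):

wget http://vip/server.html     # request the page through the virtual server
cat server.html                 # show its content in the terminal
rm server.html                  # delete it so the next request downloads it again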

The requests are made with the wget command [9]. They can also be made with a browser in graphical mode, but this is discouraged because the browser's internal cache causes problems when downloading the same page again. You can open a graphical window for a virtual machine (for example, pc1):

vncviewer pc1:1


Use the following password:

xxxxxx


  • Why does s1 respond to the first request instead of s2?
The scheduler starts with the last real server that was added to the virtual service, in this case s1.


After these instructions, we have set up and tested a load balancer using ipvsadm. Now we are going to undo all the changes made on the network, so the scenario is ready to start keepalived:

vnumlparser.pl -x ipvs_stop@vrrp.xml -v -u root


With this instruction (a sketch of the equivalent commands follows the list):

  • The virtual service is stopped.
  • The interface for the virtual server is turned down.
  • The default routes in the real servers are restored to their initial state (pointing to a virtual interface).
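
Roughly, the first two points amount to something like the following on ra1 (the exact commands are in vrrp.xml):

ipvsadm -C                              # clear the virtual service table
ip addr del 192.168.1.21/32 dev eth1    # remove the VIP from the director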


8. Keepalived

At this point the scenario configuration is exactly the same as it was before executing the ipvsadm section, so it does not matter if you decided to skip that section.

Launch keepalived in the LVS directors (ra1 and ra2):

vnumlparser.pl -x ka_start@vrrp.xml -v -u root


Keepalived will [6]:

  • Set up the IP addresses for the virtual server (vip and gw).
  • Implement load balancing in directors ra1 and ra2.
  • Handle failover between load balancers.
  • Do health-checks on services, bringing them in and out of pools.


  • The external VIP of the virtual server is 192.168.1.21
  • The internal VIP the real servers will use as a default gateway is 192.168.1.1


Check that IPVS messages also appear in director ra2. They are the same messages that appeared in ra1 when ipvs was launched in section 7. If section 7 was not done prior to this section, the IPVS messages will also appear in ra1. Again, the messages only appear the first time that keepalived is launched.

If keepalived is started with the -d flag, it dumps the configuration data in /var/log/messages:

Jan 24 18:38:55 (none) Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Jan 24 18:38:55 (none) Keepalived_healthcheckers: Registering Kernel netlink reflector
Jan 24 18:38:55 (none) Keepalived_healthcheckers: Registering Kernel netlink command channel
Jan 24 18:38:55 (none) Keepalived_healthcheckers: Configuration is using : 10730 Bytes
Jan 24 18:38:55 (none) Keepalived_healthcheckers: ------< Global definitions >------
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  Router ID = ra1
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  Smtp server = 127.0.0.1
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  Smtp server connection timeout = 30
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  Email notification from = root@ra1
Jan 24 18:38:55 (none) Keepalived_healthcheckers: ------< SSL definitions >------
Jan 24 18:38:55 (none) Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Jan 24 18:38:55 (none) Keepalived_vrrp: Registering Kernel netlink reflector
Jan 24 18:38:55 (none) Keepalived_vrrp: Registering Kernel netlink command channel
Jan 24 18:38:55 (none) Keepalived_vrrp: Registering gratutious ARP shared channel
Jan 24 18:38:55 (none) Keepalived_vrrp: Configuration is using : 40564 Bytes
Jan 24 18:38:55 (none) Keepalived_vrrp: ------< Global definitions >------
Jan 24 18:38:55 (none) Keepalived_vrrp:  Router ID = ra1
Jan 24 18:38:55 (none) Keepalived_vrrp:  Smtp server = 127.0.0.1
Jan 24 18:38:55 (none) Keepalived_vrrp:  Smtp server connection timeout = 30
Jan 24 18:38:55 (none) Keepalived_vrrp:  Email notification from = root@ra1
Jan 24 18:38:55 (none) Keepalived_vrrp: ------< VRRP Topology >------
Jan 24 18:38:55 (none) Keepalived_vrrp:  VRRP Instance = VI_1
Jan 24 18:38:55 (none) Keepalived_vrrp:    Want State = MASTER
Jan 24 18:38:55 (none) Keepalived_vrrp:    Runing on device = eth1
Jan 24 18:38:55 (none) Keepalived_vrrp:    Virtual Router ID = 51
Jan 24 18:38:55 (none) Keepalived_vrrp:    Priority = 150
Jan 24 18:38:55 (none) Keepalived_vrrp:    Advert interval = 1sec
Jan 24 18:38:55 (none) Keepalived_vrrp:    Authentication type = SIMPLE_PASSWORD
Jan 24 18:38:55 (none) Keepalived_vrrp:    Password = 1111
Jan 24 18:38:55 (none) Keepalived_vrrp:    Virtual IP = 1
Jan 24 18:38:55 (none) Keepalived_vrrp:      192.168.1.21/32 brd 192.168.1.21 dev eth1 scope global
Jan 24 18:38:55 (none) Keepalived_vrrp:  VRRP Instance = VI_GATEWAY
Jan 24 18:38:55 (none) Keepalived_vrrp:    Want State = MASTER
Jan 24 18:38:55 (none) Keepalived_vrrp:    Runing on device = eth2
Jan 24 18:38:55 (none) Keepalived_vrrp:    Virtual Router ID = 52
Jan 24 18:38:55 (none) Keepalived_vrrp:    Priority = 150
Jan 24 18:38:55 (none) Keepalived_vrrp:    Advert interval = 1sec
Jan 24 18:38:55 (none) Keepalived_vrrp:    Authentication type = SIMPLE_PASSWORD
Jan 24 18:38:55 (none) Keepalived_vrrp:    Password = 1111
Jan 24 18:38:55 (none) Keepalived_vrrp:    Virtual IP = 1
Jan 24 18:38:55 (none) Keepalived_vrrp:      192.168.1.1/32 brd 192.168.1.1 dev eth2 scope global
Jan 24 18:38:55 (none) Keepalived_vrrp: ------< VRRP Sync groups >------
Jan 24 18:38:55 (none) Keepalived_vrrp:  VRRP Sync Group = VG1, BACKUP
Jan 24 18:38:55 (none) Keepalived_vrrp:    monitor = VI_1
Jan 24 18:38:55 (none) Keepalived_vrrp:    monitor = VI_GATEWAY
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  Using autogen SSL context
Jan 24 18:38:55 (none) Keepalived_healthcheckers: ------< LVS Topology >------
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  System is compiled with LVS v1.2.1
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  VIP = 192.168.1.21, VPORT = 80
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    delay_loop = 6, lb_algo = rr
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    protocol = TCP
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    lb_kind = NAT
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    RIP = 192.168.1.5, RPORT = 80, WEIGHT = 1
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    RIP = 192.168.1.4, RPORT = 80, WEIGHT = 1
Jan 24 18:38:55 (none) Keepalived_healthcheckers: ------< Health checkers >------
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  192.168.1.5:80
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    Keepalive method = TCP_CHECK
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    Connection port = 80
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    Connection timeout = 2
Jan 24 18:38:55 (none) Keepalived_healthcheckers:  192.168.1.4:80
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    Keepalive method = TCP_CHECK
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    Connection port = 80
Jan 24 18:38:55 (none) Keepalived_healthcheckers:    Connection timeout = 2
Jan 24 18:38:55 (none) Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.5:80]
Jan 24 18:38:55 (none) Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.4:80]
Jan 24 18:38:56 (none) Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 24 18:38:56 (none) Keepalived_vrrp: VRRP_Instance(VI_GATEWAY) Transition to MASTER STATE
Jan 24 18:38:57 (none) Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 24 18:38:57 (none) Keepalived_vrrp: VRRP_Group(VG1) Syncing instances to MASTER state
Jan 24 18:38:57 (none) Keepalived_vrrp: VRRP_Instance(VI_GATEWAY) Entering MASTER STATE


That configuration data belongs to director ra1, which is in MASTER state. Director ra2 dumps similar data but in the BACKUP state.
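
The dump above corresponds to a keepalived.conf roughly like the following (a reconstruction from the logged values, not necessarily identical to the file shipped with the scenario):

# sketch of ra1's /etc/keepalived/keepalived.conf, rebuilt from the dump above
global_defs {
    notification_email_from root@ra1
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ra1
}

vrrp_sync_group VG1 {
    group {
        VI_1
        VI_GATEWAY
    }
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.21        # external VIP
    }
}

vrrp_instance VI_GATEWAY {
    state MASTER
    interface eth2
    virtual_router_id 52
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.1         # internal VIP (servers' default gateway)
    }
}

virtual_server 192.168.1.21 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP
    real_server 192.168.1.4 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 2
        }
    }
    real_server 192.168.1.5 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 2
        }
    }
}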

Check now the ipvsadm information that keepalived has set up by executing in one of the directors' shells:

ipvsadm


The result is:

ra1:~# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  vip:www rr
  -> s1:www                       Masq    1      0          0
  -> s2:www                       Masq    1      0          0
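
The host and service names shown are resolved from /etc/hosts and /etc/services; if you prefer numeric addresses and ports, you can run:

ipvsadm -L -n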


Check the list of IP addresses in each of the directors, paying special attention to interfaces eth1 and eth2. To do so, execute in each director's shell:

ip addr list


ra1:~# ip addr list
1: eth1: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fe:fd:00:00:05:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/28 brd 192.168.1.31 scope global eth1
    inet 192.168.1.21/32 scope global eth1
    inet6 fe80::fcfd:ff:fe00:501/64 scope link
       valid_lft forever preferred_lft forever
2: eth2: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fe:fd:00:00:05:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/28 brd 192.168.1.15 scope global eth2
    inet 192.168.1.1/32 scope global eth2
    inet6 fe80::fcfd:ff:fe00:502/64 scope link
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fe:fd:0a:00:00:16 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.22/30 brd 10.0.0.23 scope global eth0
    inet6 fe80::fcfd:aff:fe00:16/64 scope link
       valid_lft forever preferred_lft forever
4: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
    link/ether 1a:ad:3b:81:a9:8d brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
7: ip6tnl0: <NOARP> mtu 1460 qdisc noop
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00


ra2:~# ip addr list
1: eth1: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fe:fd:00:00:06:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.20/28 brd 192.168.1.31 scope global eth1
    inet6 fe80::fcfd:ff:fe00:601/64 scope link
       valid_lft forever preferred_lft forever
2: eth2: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fe:fd:00:00:06:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/28 brd 192.168.1.15 scope global eth2
    inet6 fe80::fcfd:ff:fe00:602/64 scope link
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fe:fd:0a:00:00:1a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.26/30 brd 10.0.0.27 scope global eth0
    inet6 fe80::fcfd:aff:fe00:1a/64 scope link
       valid_lft forever preferred_lft forever
4: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
    link/ether 7e:2f:da:22:d7:46 brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
7: ip6tnl0: <NOARP> mtu 1460 qdisc noop
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00


If you want to see only the IP address list of one interface (for example, eth1):

ip addr list eth1


  • Why does the VIP address appear in ra1 but not in ra2?
The VIP address appears in the director that is active at the moment. Since both interfaces eth1 and eth2 are up in the MASTER director (ra1), the vip and gw addresses appear in it. Whenever one of those interfaces is turned down, the virtual IP addresses (vip and gw) are moved to the other director, provided it has both interfaces up. If neither director has both interfaces up, the virtual IP addresses will not appear anywhere and the service will be down.
Even if only one interface is turned down, both virtual IP addresses (the internal and the external) are removed from the director, since the two VRRP instances are synchronized.
Also note that when an interface is shut down it loses its link-scope IPv6 address; when the interface is brought back up, it gets the IPv6 address back.
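
A convenient way to watch these transitions while testing the failovers below is to follow the keepalived messages on each director (keepalived logs through syslog, as shown in the dump above):

tail -f /var/log/messages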


Now we are going to check that load balancing, failover and health checks work properly with keepalived running.

Make requests from the client to the example web page and you will see that each request is answered by a different server, following the round-robin scheduler.

sh request


To check failover, some interfaces will be shut down. When an interface is turned down, the static routes through that interface are lost. In this scenario, the LVS directors ra1 and ra2 have a static default route pointing to rb12 on interface eth1. To set this route up again, execute the following command:

route add default gw rb12


Otherwise, even after you bring the interface back up, the directors will not know how to send packets back to the client. This command is only needed when interface eth1 is turned down in ra1 or ra2, not when interface eth2 is.

While you bring interfaces up and down, check the changes produced in the IP address list of each director.

You can bring an interface up or down using the ifconfig command in a shell. For example:

ifconfig eth1 down
ifconfig eth1 up


  • Turn down eth1 in ra1 and check that the service keeps working through ra2.
  • Check that ra1 regains the active state when eth1 is recovered (remember to set the default route again). You can check it by looking at the IP address list.
  • Turn down eth2 in ra1. The service keeps working through ra2.
  • Turn down eth1 in ra2. The service stops working, because neither director has both interfaces up.
  • Turn eth1 in ra2 back up and set its default route. The service should start working again through ra2.
  • Finally, turn eth2 in ra1 back up and check that the service is recovered by ra1.


  • Is it possible that two consecutive requests come from the same server although the round robin scheduler is being performed?
Yes. The first request handled by a director is always answered by the last real server that was added to the virtual service, in this case s1. Therefore, when a director that went down comes back up (or the first time a director is used), it sends the first request to that last real server, regardless of the scheduling algorithm.


  • How would you avoid having to set the default route every time that eth1 is turned down in ra1 or ra2?
You could avoid it by running OSPF in ra1 and ra2, so that they would learn the routing information dynamically. Nevertheless, OSPF is not launched in the directors of this scenario because they are not considered part of the backbone, and the servers' network prefix should not be announced to external machines, so that they cannot connect to the servers directly (without going through the LVS director).


To check the health checks, we are going to stop the apache2 server on the real servers. You can stop and start the server by executing the following commands from a server's shell:

/etc/init.d/apache2 stop
/etc/init.d/apache2 start


  • Stop the server on s1 and check that the web pages now always come from s2.
  • Stop the server on s2. Now the service will be down, since no server is available.
  • Start the server on s1 again. The web pages will always come from s1.
  • Finally, start the server on s2 again. The service will return to its normal operation.
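
You can also see the effect of the health checks on the active director: run ipvsadm again while apache2 is stopped on a server and that server's entry should be missing from the virtual service, reappearing once apache2 is started again.

ipvsadm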


Stop keepalived in both directors by executing in the host's shell:

vnumlparser.pl -x ka_stop@vrrp.xml -v -u root


9. Direct Routing

Both the ipvsadm configuration in section 7 and the keepalived configuration file used in section 8 use NAT in the directors. In this section we are going to change the keepalived configuration file so that the directors use direct routing instead of NAT. Therefore, the servers need to have an interface with the VIP address.

At this point the scenario configuration is exactly the same as it was after executing section 6, so it does not matter if you decided to skip sections 7 and 8.

In a typical direct routing LVS configuration, the director receives incoming requests on the VIP address and balances them among the available servers. Each real server then processes its requests and sends the responses directly to the clients, bypassing the director. In this way direct routing improves scalability, since the director does not have to route outgoing packets.

Therefore, for this section the scenario has been modified to include the router ra3, which is used as the default gateway for the real servers.

The new scenario is shown in figure 2.


Figure 2: direct routing scenario, including ra3 (Lvs Fig2.jpeg)


Configure the keepalived.conf file in directors ra1 and ra2 to use direct routing. To do so, in the virtual_server section change the line “lb_kind NAT” to “lb_kind DR”. You can edit the file with a text editor, for example vi [10]. The keepalived.conf file is located at:

/etc/keepalived/keepalived.conf
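
After the change, the relevant fragment of the virtual_server block should look like this (only the lb_kind line changes):

virtual_server 192.168.1.21 80 {
    ...
    lb_kind DR      # was: lb_kind NAT
    ...
}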


Launch keepalived in the LVS directors (ra1 and ra2):

vnumlparser.pl -x ka_start@vrrp.xml -v -u root


Modify the scenario and prepare the real servers:

vnumlparser.pl -x dr_start@vrrp.xml -v -u root


The above instruction does the following:

  • Brings up the interfaces in ra3.
  • Changes the default route of the real servers to ra3.
  • Adds an interface with the VIP address in the real servers (a sketch of the equivalent commands follows this list).
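
On each real server this typically amounts to something like the following (the exact commands are in vrrp.xml; the interface and sysctl settings here are assumptions based on common LVS-DR setups):

ip addr add 192.168.1.21/32 dev lo                    # VIP on a non-ARPing interface
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore       # do not answer ARP requests for the VIP
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce     # do not use the VIP as ARP source address
route add default gw ra32                             # ra3 becomes the default gateway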


You can check that direct routing is performed by analyzing the packets that go through ra3:

tshark -i eth1


As you can see, responses are sent back from the real servers through ra3.

If you wish to use the director as the default gateway for the real servers in a direct routing configuration, some modifications must be made to the kernel. This is because the director receives packets whose source address is one of its own (the VIP address), and so it discards them. More information about this can be found at [11].


To delete the direct routing configuration and leave the scenario as it was after executing section 6:

vnumlparser.pl -x dr_stop@vrrp.xml -v -u root


You will also need to undo the changes made to the keepalived.conf files. If you don't, they will remain until the scenario is released.


10. Releasing the scenario

To stop the Quagga daemons, execute in the host's shell:

vnumlparser.pl -x stop@vrrp.xml -v -u root


To release the scenario, closing the virtual machines that were launched:

vnumlparser.pl -P vrrp.xml -v -u root


11. References