Friday, October 23, 2009

IPSec Peer Redundancy Using SLB


This section examines how SLB concepts can be applied to the IPSec peer redundancy model. Figure 6-15 illustrates this model.



Figure 6-15. Architecture for Load-Balanced IPSec Connections





The two VPN-GWs in this model are connected to a Catalyst 6500 switch with an MSFC card running IOS that supports SLB functionality. The two VPN-GWs constitute the gateway farm and share a single IKE identity, which is the virtual IP address on the SLB. Example 6-14 shows the configuration of the SLB device and the VPN-GWs.



Example 6-14. Configuration of the Server Load Balancer



Current configuration : 7178 bytes
!
version 12.1
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname slb-east
!
boot system flash sup-bootflash:c6sup22-jk2o3sv-mz.121-11bFIE1.bin
enable password lab
!
redundancy
main-cpu
auto-sync standard
!
vlan 1
!
! vlan to the inet-gw
vlan 10
!
! vlan to the vpn-gws
vlan 11
!
!
no ip domain-lookup
!
ip slb probe SERVER-PROBE ping
interval 30
faildetect 3
!
ip slb serverfarm IPSEC
failaction purge
probe SERVER-PROBE
!
real 9.1.1.35
weight 1
maxconns 5000
inservice
!
real 9.1.1.36
weight 1
maxconns 5000
inservice
!

ip slb vserver IPSEC-ESP
virtual 9.1.0.37 esp
serverfarm IPSEC
sticky 6000 group 1
idle 3650
inservice
!
ip slb vserver IPSEC-ISAKMP
virtual 9.1.0.37 udp isakmp
serverfarm IPSEC
sticky 6000 group 1
idle 3650
inservice
!
!
!
interface FastEthernet3/1
description to inet-gw
duplex full
speed 100
switchport
switchport access vlan 10
!
interface FastEthernet3/5
description to vpn-gw1-east
no ip address
duplex full
speed 100
switchport
switchport access vlan 11
!
interface FastEthernet3/6
description to vpn-gw2-east
no ip address
duplex full
speed 100
switchport
switchport access vlan 11
!
!
interface Vlan1
no ip address
!
interface Vlan10
ip address 9.1.0.33 255.255.255.0
!
interface Vlan11
ip address 9.1.1.33 255.255.255.0
!
router ospf 1
log-adjacency-changes
network 9.1.0.0 0.0.0.255 area 0


slb-east#show ip slb realserver

real                      farm name        weight   state          conns
-------------------------------------------------------------------
9.1.1.35                  IPSEC            1        OPERATIONAL    2
9.1.1.36                  IPSEC            1        OPERATIONAL    2

slb-east#show ip slb conn

vserver           prot   client                real            state     nat
-------------------------------------------------------------------------------
IPSEC-ESP         ESP    9.1.1.155:0           9.1.1.35        ESTAB     none
IPSEC-ISAKMP      UDP    9.1.1.155:500         9.1.1.35        ESTAB     none
IPSEC-ESP         ESP    9.1.1.154:0           9.1.1.36        ESTAB     none
IPSEC-ISAKMP      UDP    9.1.1.154:500         9.1.1.36        ESTAB     none

slb-east#show ip slb vservers

slb vserver       prot   virtual               state            conns
----------------------------------------------------------------------
IPSEC-ESP         ESP    9.1.0.37/32:0         OPERATIONAL      0
IPSEC-ISAKMP      UDP    9.1.0.37/32:500       OPERATIONAL      0
________________________________________________________________________________
vpn-gw1-east#
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp key cisco address 9.1.1.154 255.255.255.255 no-xauth
crypto isakmp keepalive 300 5

crypto isakmp client configuration group cisco
key coke123
dns 15.15.15.15
wins 16.16.16.16
domain cisco.com
pool cisco

crypto ipsec transform-set esp-tunnel-internet esp-3des esp-sha-hmac
!
crypto dynamic-map cisco 10
set transform-set esp-tunnel-internet
reverse-route
!
crypto map crypmap local-address Loopback0
crypto map crypmap 2 ipsec-isakmp dynamic cisco
!
!
! The address configured here is the same as the virtual server address on the SLB
interface Loopback0
ip address 9.1.0.37 255.255.255.255

!
interface FastEthernet0/0
description to public
ip address 9.1.1.35 255.255.255.0
crypto map crypmap
!
interface FastEthernet0/1
description to corporate
ip address 10.1.1.1 255.255.255.0
!
!
router ospf 10
log-adjacency-changes
network 10.1.1.0 0.0.0.255 area 0
redistribute static subnets
default-information originate
!
ip route 0.0.0.0 0.0.0.0 9.1.1.33




Note



The configuration of all the VPN-GWs behind the SLB is identical except for the real (public) and private interface IP addresses, as illustrated in the sketch that follows this note.
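As a rough illustration, the lines below sketch what would differ on vpn-gw2-east. The real address 9.1.1.36 is taken from the server farm in Example 6-14; the private address 10.1.1.2 is a hypothetical value chosen for this sketch. Everything else (the Loopback0 virtual address, the ISAKMP policy, the client group, and the dynamic crypto map) is identical to vpn-gw1-east.

vpn-gw2-east#
! Loopback0 carries the same virtual address as on vpn-gw1-east
interface Loopback0
ip address 9.1.0.37 255.255.255.255
!
! Public interface uses the second real address from the server farm
interface FastEthernet0/0
description to public
ip address 9.1.1.36 255.255.255.0
crypto map crypmap
!
! Private interface address is hypothetical for this sketch
interface FastEthernet0/1
description to corporate
ip address 10.1.1.2 255.255.255.0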




Note



Static crypto maps with set peer statements should not be used on the VPN-GWs because the SLB function operates only on connections initiated by the spokes and clients. The VPN-GWs must use dynamic crypto maps in order to accept IPSec connections from remote peers that are not known in advance, and the dynamic crypto map also eliminates a significant configuration burden.




Notice that the configuration of the SLB has two instances of the ip slb vserver command: one for IKE traffic and another for Encapsulating Security Payload (ESP) traffic. It is important to bind these two vservers together to avoid asymmetric IKE and IPSec paths; in other words, we don't want IKE negotiation to happen with VPN-GW1-EAST while the IPSec traffic terminates on VPN-GW2-EAST. The concept of sticky connections binds these together. A couple of other important aspects of this model are:


  • An extra VPN-GW in the gateway farm provides redundancy. If one VPN-GW in the gateway farm fails, its entire load is redistributed among the remaining gateways, so each gateway must retain enough spare capacity to absorb its share of that load.

  • The maximum connections parameter on the SLB should take into account both IKE and IPSec connections; for example, if a real VPN-GW can terminate 1000 IPSec tunnels, maxconns should be configured as 2000 (1000 IKE SAs and 1000 incoming IPSec SAs), as shown in the annotated extract after this list.
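To tie these points back to Example 6-14, the annotated extract below repeats only the relevant lines. The maxconns value of 2000 is an illustrative assumption for a gateway rated at 1000 IPSec tunnels; the example above uses 5000.

ip slb serverfarm IPSEC
! Illustrative sizing: 1000 IKE SAs + 1000 incoming IPSec SAs for a 1000-tunnel gateway
real 9.1.1.35
maxconns 2000
!
! Both vservers reference the same sticky group, so IKE and ESP traffic
! from a given peer always land on the same real VPN-GW
ip slb vserver IPSEC-ESP
virtual 9.1.0.37 esp
serverfarm IPSEC
sticky 6000 group 1
!
ip slb vserver IPSEC-ISAKMP
virtual 9.1.0.37 udp isakmp
serverfarm IPSEC
sticky 6000 group 1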


The spoke configuration uses the virtual IP address as the IKE identity of the gateway farm. When the spoke sends an IKE message to this virtual IP address, the SLB receives the IKE traffic and directs it to one of the real VPN-GWs based on the load-balancing algorithm configured on the SLB. For the message to terminate and be processed at the real VPN-GW, the SLB virtual IP address must be configured on the real VPN-GW; a loopback interface is typically used for this purpose. All the real VPN-GWs in the gateway farm must be configured with the same virtual IP address because the IKE and IPSec traffic for an IPSec session could terminate on any of them. This overlapping IP address scheme violates general IP network design principles, so care should be taken not to advertise the virtual address into the rest of the network.
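For completeness, here is a minimal sketch of a site-to-site spoke configuration (shown for a hypothetical router named spoke-east) that points at the gateway farm. The spoke address 9.1.1.154 and the pre-shared key are taken from the examples above; the protected subnet 10.2.2.0/24, the crypto map name, and the access list number are hypothetical. Note that the only gateway address the spoke ever references is the virtual address 9.1.0.37.

spoke-east#
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
! The peer is the SLB virtual address, not any individual VPN-GW
crypto isakmp key cisco address 9.1.0.37
!
crypto ipsec transform-set esp-tunnel-internet esp-3des esp-sha-hmac
!
crypto map to-farm 10 ipsec-isakmp
set peer 9.1.0.37
set transform-set esp-tunnel-internet
match address 101
!
interface FastEthernet0/0
description to internet
ip address 9.1.1.154 255.255.255.0
crypto map to-farm
!
! Hypothetical proxy identities: spoke LAN to corporate LAN
access-list 101 permit ip 10.2.2.0 0.0.0.255 10.1.1.0 0.0.0.255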



Cisco VPN 3000 Clustering for Peer Redundancy


The Cisco VPN 3000 Concentrator supports a peer redundancy model that conceptually works just like the SLB scheme discussed previously. The VPN 3000 model of peer redundancy is known as clustering. This model, shown in Figure 6-16, is implemented by logically grouping two or more VPN 3000 Concentrators on the same private and public subnets into a virtual cluster. A virtual cluster is a set of Concentrators that all serve the same group of users. The remote clients are unaware that multiple Concentrators exist, because they connect to a virtual representation of the set of Concentrators.



Figure 6-16. IPSec Clustering for Peer Redundancy





All devices in the virtual cluster carry session loads. One device in the virtual cluster, the virtual cluster master, is responsible for directing incoming calls to the other devices, which are called secondary devices. The virtual cluster master monitors all the devices in the cluster and keeps track of how busy each one is, distributing the session load accordingly. The role of virtual cluster master is not tied to a physical device; it can shift among devices. For example, if the current virtual cluster master fails, one of the secondary devices in the cluster takes over that role and immediately becomes the new virtual cluster master.


Note



VPN clustering works only with Cisco VPN clients. It does not work for site-to-site connections.




The virtual cluster appears to outside clients as a single virtual cluster IP address. This IP address is not tied to a specific physical device; it belongs to the current virtual cluster master and is, therefore, considered virtual. A VPN client attempting to establish a connection connects first to this virtual cluster IP address. The virtual cluster master then returns the public IP address of the least-loaded available host in the cluster. In a second transaction (transparent to the user), the client connects directly to that host. In this way, the virtual cluster master directs traffic evenly and efficiently across resources. Example 6-15 shows the configuration of the VPN 3000 for clustering.



Example 6-15. VPN 3000 Configuration for Clustering



Configuration > Interface Configuration > Configure Ethernet #2 (Public) > Interface
Setting > Enable using Static IP Addressing > Enter IP Address = 9.1.1.35
Configuration > Interface Configuration > Configure Ethernet #2 (Public) > Interface
Setting > Enable using Static IP Addressing > Enter Subnet Mask = 255.255.255.240
Configuration > Policy Management > Traffic Management > Filters > Assign Rules to a
Filter > Add a Rule to this Filter (Public) > VCA In
Configuration > Policy Management > Traffic Management > Filters > Assign Rules to a
Filter > Add a Rule to this Filter (Public) > VCA Out
Configuration > System > Load Balancing > Cluster Configuration > VPN Virtual Cluster
IP Address = 9.1.1.37
Configuration > System > Load Balancing > Cluster Configuration > Encryption = Enabled
Configuration > System > Load Balancing > Cluster Configuration > Load-Balancing Enable
= Enable
Configuration > System > Load Balancing > Cluster Configuration > IPSec Shared Secret
= cisco123
Configuration > System > Load Balancing > Device Configuration > Enable/Disable Load
Balancing = Enable
Configuration > System > Load Balancing > Device Configuration > Device Priority = 10




If a VPN 3000 in the cluster fails, the client may drop its IPSec session state and immediately reconnect to the virtual cluster IP address. The virtual cluster master then directs the new connection to another active device in the cluster. Should the virtual cluster master itself fail, a secondary device in the cluster automatically takes over as the new virtual cluster master. Even if several devices in the cluster fail, users can continue to connect to the cluster as long as any one device in the cluster is up and available.
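From the client side, the only address that matters is the virtual cluster address. As a rough sketch, a Cisco VPN Client connection profile (.pcf) for this cluster might contain entries such as the following; the profile description and group name are hypothetical, and the group password entry is omitted. Because Host points at the virtual cluster IP address from Example 6-15 rather than at any individual Concentrator, a reconnect after a failure is automatically redirected to a surviving device.

[main]
Description=US East VPN cluster
Host=9.1.1.37
GroupName=remote-users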




Peer Redundancy Summary


This section highlighted several peer redundancy models, emphasizing the advantages and disadvantages of each. In particular, IPSec peer redundancy may be more appropriate for client-initiated connections; the deficiencies in route management and the complex proxy statements limit the utility of native IPSec models for site-to-site connections. Conversely, we have demonstrated that the GRE peer redundancy model is more appropriate for static environments such as site-to-site connections in which complex routing adjacencies may be required between the two IPSec peers. In all cases, the state management of the IPSec security associations may affect the performance and scalability of the redundancy model.










