Often, while browsing networking forums and blogs, I find posts by people asking how to set up a dual WAN connection with load balancing on a single box. They are looking for solutions to connect the LAN to the Internet and to carry VoIP traffic with an acceptable QoS level; most of them have to handle VPN tunnels and a DMZ too!
Of course, a good network architect would never consider such a solution for this scenario, but when the budget is low (or doesn't exist at all) there are not many ways to get all these things running!
In this catastrophic scenario, Policy-Based Routing (PBR) can save us!
Here you can find a little PBR-based solution and the corresponding GNS3 lab.
We have a router connected to the ISP with two WAN connections:
– a Bronze link, with limited bandwidth, addressed with a /30 subnet (192.0.2.0/30 in our example);
– a Gold link, with good performance, with a /30 point-to-point subnet (198.51.100.0/30) and an additional routed /24 subnet (203.0.113.0/24).
Note that the ISP does not accept traffic sourced from a subnet other than the one routed through the ingress link: for example, we can't send traffic sourced from 203.0.113.0/24 out the Bronze link. One subnet, one link.
Our goals are:
– users on the LAN need access to Internet;
– mission critical traffic has to go out through the Gold link;
– our servers have to be reachable from the outside on their public IP addresses.
For the sake of simplicity, in our example and lab mission critical traffic will be telnet traffic. In real life it can be RTP, database or other important traffic.
At first sight, we can see there is no way to achieve server farm fault tolerance: if the Gold link goes down, we can't do anything to keep its subnet reachable. OK, we can just tell the CIO to get a bigger budget for the network!
On this topology we have 5 interesting traffic flows:
– LAN -> Mission critical services [Gold]
– LAN -> WAN [Bronze]
– LAN -> Server farm
– ServerFarm -> WAN [Gold]
– ServerFarm -> LAN
Traffic going from the LAN towards the WAN or the Mission Critical Services needs to be translated by NAT too: remember that in the IOS order of operations a packet is routed first and translated afterwards, so for now we can focus just on routing packets out of the box the right way. We will take care of NAT later.
Standard routing forwards packets on the basis of the destination network alone; it doesn't care about Layer 4 properties or source IP addresses. How can we route traffic on the basis of other elements, such as the TCP destination port? If we want to route packets the expected way we need to deploy Policy-Based Routing (PBR). PBR, indeed, can make decisions on the basis of many parameters: source address, destination ports, QoS markings and so on.
Let’s proceed in an orderly fashion.
First of all, this is the starting config:
interface Serial2/0
 description Bronze
 ip address 192.0.2.1 255.255.255.252
!
interface Serial2/1
 description Gold
 ip address 198.51.100.1 255.255.255.252
!
interface FastEthernet0/0
 description LAN
 ip address 192.168.0.1 255.255.255.0
!
interface FastEthernet1/0
 description ServerFarm
 ip address 203.0.113.1 255.255.255.0
We just have the WAN interfaces up and running and the FastEthernet interfaces facing the right subnets.
We set the default route out through the Bronze link:
ip route 0.0.0.0 0.0.0.0 Serial2/0
With this simple configuration we already have 3 of the 5 flows routed the right way:
– LAN to WAN
– LAN to ServerFarm
– ServerFarm to LAN
Now we can start our PBR configuration! To do this, we need to create route-maps and then apply them to the ingress interfaces on which the packets to be policy-routed will enter.
As said, PBR can make decisions on the basis of many elements, such as source addresses and Layer 4 properties. So, let's define an access-list matching the Mission Critical Services (telnet in our example):
ip access-list extended GoldServices
 deny   ip any 203.0.113.0 0.0.0.255
 permit tcp any any eq telnet
 deny   ip any any
The access-list just matches telnet traffic that is not directed to our Server farm.
Now we have to define a route-map matching Mission critical traffic and sending it out the Gold link…
route-map PBR_LAN permit 10
 match ip address GoldServices
 set interface Serial2/1 Serial2/0
… then we apply it to the LAN facing interface:
interface FastEthernet0/0
 description LAN
 ip policy route-map PBR_LAN
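To double-check that the route-map is really attached to the interface, IOS offers the show ip policy command; this is a sketch of what our GW router should show, with the interface and route-map names from the config above:

```
GW#show ip policy
Interface      Route map
Fa0/0          PBR_LAN
```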
If a packet doesn’t match any route-map match statement it’s routed on the basis of the standard routing table (so, through the Bronze link).
Note that we used two interface names in the set interface command: if S2/1 is down, IOS will use S2/0, so we get a small level of redundancy and WAN-side fault tolerance for Mission Critical traffic. We can achieve fault tolerance for LAN-to-WAN traffic too by adding a floating default route with a higher administrative distance:
ip route 0.0.0.0 0.0.0.0 Serial2/1 10
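To verify that Mission Critical packets are actually hitting the policy, show route-map displays per-sequence match counters, and debug ip policy logs every policy-routing decision (use it carefully on production boxes). This is a sketch of what a freshly configured GW should report:

```
GW#show route-map PBR_LAN
route-map PBR_LAN, permit, sequence 10
  Match clauses:
    ip address (access-lists): GoldServices
  Set clauses:
    interface Serial2/1 Serial2/0
  Policy routing matches: 0 packets, 0 bytes

GW#debug ip policy
Policy routing debugging is on
```

Generating some telnet traffic from the LAN and checking the counters again is a quick way to confirm the policy is doing its job.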
Now LAN to Mission Critical Services is OK; we need to do the same for the server farm traffic:
ip access-list extended ServerFarm-To-WAN
 deny   ip 203.0.113.0 0.0.0.255 192.168.0.0 0.0.0.255
 permit ip any any
!
route-map PBR_ServerFarm permit 10
 match ip address ServerFarm-To-WAN
 set interface Serial2/1
!
interface FastEthernet1/0
 description ServerFarm
 ip policy route-map PBR_ServerFarm
Here our access-list matches traffic going from the server farm to destinations outside the LAN: of course we don't want to route ServerFarm-to-LAN traffic through the WAN! Unfortunately we can't add a second interface to the set interface command: our ISP would not accept traffic sourced from 203.0.113.0/24 on the Bronze link.
Routing is OK; let's take care of NAT.
We have 1 inside interface (the LAN facing fastethernet) and 2 outside interfaces:
interface FastEthernet0/0
 description LAN
 ip nat inside
!
interface Serial2/0
 description Bronze
 ip nat outside
!
interface Serial2/1
 description Gold
 ip nat outside
Here we don’t have to think about “policy-based NAT”: by the time NAT takes place, policies have already been applied and packets have been routed accordingly. We just have to translate them the right way!
First, define the pool to use when translating Gold packets; here it's a single unused address taken from the routed /24 subnet:

ip nat pool LAN-to-Gold 203.0.113.254 203.0.113.254 netmask 255.255.255.0
Then define a standard access-list matching the LAN subnet and 2 new route-maps, used in the ip nat inside source commands:

ip access-list standard LAN
 permit 192.168.0.0 0.0.0.255
!
route-map NAT_Gold permit 10
 match ip address LAN
 match interface Serial2/1
!
route-map NAT_Bronze permit 10
 match ip address LAN
 match interface Serial2/0

ip nat inside source route-map NAT_Gold pool LAN-to-Gold overload
ip nat inside source route-map NAT_Bronze interface Serial2/0 overload
Both route-maps match 192.168.0.0/24 traffic, but the first (NAT_Gold) takes care only of the packets routed through the Serial2/1 interface, while the second (NAT_Bronze) handles the packets routed through the Bronze interface. In this way we are sure the right inside global IP address will be used for each translation.
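When experimenting with different NAT route-map combinations, remember that existing translations stick around until they time out, so clearing them between tests avoids misleading results. A couple of handy verification commands, with the output sketched and trimmed:

```
GW#clear ip nat translation *
GW#show ip nat statistics
Total active translations: 0 (0 static, 0 dynamic; 0 extended)
Outside interfaces:
  Serial2/0, Serial2/1
Inside interfaces:
  FastEthernet0/0
Hits: 0  Misses: 0
 [output trimmed]
```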
Some tests… in the GNS3 lab, PC and Server are two routers.
traceroute and telnet from PC to the “Internet” (192.0.2.100 is an ISP loopback):
PC#traceroute 192.0.2.100

Type escape sequence to abort.
Tracing the route to 192.0.2.100

  1 192.168.0.1 68 msec 40 msec 16 msec
  2 192.0.2.2 120 msec *  52 msec
PC#
PC#telnet 192.0.2.100
Trying 192.0.2.100 ... Open

User Access Verification

Password:
On the gateway, traceroute traffic is translated to 192.0.2.1 (Bronze link) while telnet traffic is translated to 203.0.113.254 (Gold pool):
GW#show ip nat translations
Pro Inside global         Inside local         Outside local        Outside global
tcp 203.0.113.254:46878   192.168.0.10:46878   192.0.2.100:23       192.0.2.100:23
udp 192.0.2.1:49178       192.168.0.10:49178   192.0.2.100:33437    192.0.2.100:33437
udp 192.0.2.1:49179       192.168.0.10:49179   192.0.2.100:33438    192.0.2.100:33438
udp 192.0.2.1:49180       192.168.0.10:49180   192.0.2.100:33439    192.0.2.100:33439
A traceroute from the server, going through the Gold link:
Server#traceroute 192.0.2.100

Type escape sequence to abort.
Tracing the route to 192.0.2.100

  1 203.0.113.1 72 msec 52 msec 12 msec
  2 198.51.100.2 68 msec *  88 msec
Server#
You can download the GNS3 lab from GNS3-Labs.com: http://www.gns3-labs.com/2009/04/14/gns3-topology-dual-wan-connection-on-cisco-with-policy-based-routing-pbr/
Pier Carlo Chiodi